|
{ |
|
"paper_id": "E14-1002", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:40:19.532926Z" |
|
}, |
|
"title": "Undirected Machine Translation with Discriminative Reinforcement Learning", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Gesmundo", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a novel Undirected Machine Translation model of Hierarchical MT that is not constrained to the standard bottomup inference order. Removing the ordering constraint makes it possible to condition on top-down structure and surrounding context. This allows the introduction of a new class of contextual features that are not constrained to condition only on the bottom-up context. The model builds translation-derivations efficiently in a greedy fashion. It is trained to learn to choose jointly the best action and the best inference order. Experiments show that the decoding time is halved and forestrescoring is 6 times faster, while reaching accuracy not significantly different from state of the art.", |
|
"pdf_parse": { |
|
"paper_id": "E14-1002", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a novel Undirected Machine Translation model of Hierarchical MT that is not constrained to the standard bottomup inference order. Removing the ordering constraint makes it possible to condition on top-down structure and surrounding context. This allows the introduction of a new class of contextual features that are not constrained to condition only on the bottom-up context. The model builds translation-derivations efficiently in a greedy fashion. It is trained to learn to choose jointly the best action and the best inference order. Experiments show that the decoding time is halved and forestrescoring is 6 times faster, while reaching accuracy not significantly different from state of the art.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Machine Translation (MT) can be addressed as a structured prediction task (Brown et al., 1993; Yamada and Knight, 2001; Koehn et al., 2003) . MT's goal is to learn a mapping function, f , from an input sentence, x, into y = (t, h), where t is the sentence translated into the target language, and h is the hidden correspondence structure . In Hierarchical MT (HMT) (Chiang, 2005 ) the hidden correspondence structure is the synchronous-tree composed by instantiations of synchronous rules from the input grammar, G.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 94, |
|
"text": "(Brown et al., 1993;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 95, |
|
"end": 119, |
|
"text": "Yamada and Knight, 2001;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 139, |
|
"text": "Koehn et al., 2003)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 378, |
|
"text": "(Chiang, 2005", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Statistical models usually define f as: f (x) = arg max y\u2208Y Score(x, y), where Score(x, y) is a function whose parameters can be learned with a specialized learning algorithm. In MT applications, it is not possible to enumerate all y \u2208 Y.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "HMT decoding applies pruning (e.g. Cube Pruning (Huang and Chiang, 2005) ), but even then HMT has higher complexity than Phrase Based MT (PbMT) (Koehn et al., 2003) . On the other hand, HMT improves over PbMT by introducing the possibility of exploiting a more sophisticated reordering model not bounded by a window size, and producing translations with higher syntacticsemantic quality. In this paper, we present the Undirected Machine Translation (UMT) framework, which retains the advantages of HMT and allows the use of a greedy decoder whose complexity is lower than standard quadratic beamsearch PbMT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 72, |
|
"text": "(Huang and Chiang, 2005)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 164, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "UMT's fast decoding is made possible through even stronger pruning: the decoder chooses a single action at each step, never retracts that action, and prunes all incompatible alternatives to that action. If this extreme level of pruning was applied to the CKY-like beam-decoding used in standard HMT, translation quality would be severely degraded. This is because the bottom-up inference order imposed by CKY-like beam-decoding means that all pruning decisions must be based on a bottom-up approximation of contextual features, which leads to search errors that affect the quality of reordering and lexical-choice (Gesmundo and Henderson, 2011) . UMT solves this problem by removing the bottom-up inference order constraint, allowing many different inference orders for the same tree structure, and learning the inference order where the decoder can be the most confident in its pruning decisions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 614, |
|
"end": 644, |
|
"text": "(Gesmundo and Henderson, 2011)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Removing the bottom-up inference order constraint makes it possible to condition on top-down structure and surrounding context. This undirected approach allows us to integrate contextual features such as the Language Model (LM) in a more flex-ible way. It also allows us to introduce a new class of undirected features. In particular, we introduce the Context-Free Factor (CFF) features. CFF features compute exactly and efficiently a bound on the context-free cost of a partial derivation's missing branches, thereby estimating the future cost of partial derivations. The new class of undirected features is fundamental for the success of a greedy approach to HMT, because the additional nonbottom-up context is sometimes crucial to have the necessary information to make greedy decisions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Because UMT prunes all but the single chosen action at each step, both choosing a good inference order and choosing a correct action reduce to a single choice of what action to take next. To learn this decoding policy, we propose a novel Discriminative Reinforcement Learning (DRL) framework. DRL is used to train models that construct incrementally structured output using a local discriminative function, with the goal of optimizing a global loss function. We apply DRL to learn the UMT scoring function's parameters, using the BLEU score as the global loss function. DRL learns a weight vector for a linear classifier that discriminates between decisions based on which one leads to a complete translation-derivation with a better BLEU score. Promotions/demotions of translations are performed by applying a Perceptron-style update on the sequence of decisions that produced the translation, thereby training local decisions to optimize the global BLEU score of the final translation, while keeping the efficiency and simplicity of the Perceptron Algorithm (Rosenblatt, 1958; Collins, 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1060, |
|
"end": 1078, |
|
"text": "(Rosenblatt, 1958;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1079, |
|
"end": 1093, |
|
"text": "Collins, 2002)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our experiments show that UMT with DRL reduces decoding time by over half, and the time to rescore translations with the Language Model by 6 times, while reaching accuracy non-significantly different from the state of the art.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we present the UMT framework. For ease of presentation, and following synchronous-grammar based MT practice, we will henceforth restrict our focus to binary grammars (Zhang et al., 2006; Wang et al., 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 203, |
|
"text": "(Zhang et al., 2006;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 222, |
|
"text": "Wang et al., 2007)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A UMT decoder can be formulated as a function, f , that maps a source sentence, x \u2208 X , into a structure defined by y = (t, h) \u2208 Y, where t is the translation in the target language, and h is the synchronous tree structure generating the input sentence on the source side and its translation on the target side. Synchronous-trees are composed of instantiations of synchronous-rules, r, from a grammar, G. A UMT decoder builds synchronous-trees, h, by recursively expanding partial synchronous-trees, \u03c4 . \u03c4 includes a partial translation. Each \u03c4 is required to be a connected sub-graph of some synchronous-tree h. Thus, \u03c4 is composed of a subset of the rules from any h that generates x on the source side, such that there is a connected path between any two rules in \u03c4 . Differently from the partial structures built by a bottom-up decoder, \u03c4 does not have to cover a contiguous span on x. Formally, \u03c4 is defined by: 1) The set of synchronous-rule instantiations in \u03c4 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "I \u2261 {r 1 , r 2 , \u2022 \u2022 \u2022 , r k |r i \u2208 G, 1 \u2264 i \u2264 k};", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2) The set of connections among the synchronousrule instantiations, C. Let c i = (r i , r j i ) be the notation to represent the connection between the i-th rule and the rule r j i . The set of connections can be expressed as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "C \u2261 {(r 1 , r j 1 ), (r 2 , r j 2 ), \u2022 \u2022 \u2022 , (r k\u22121 , r j k\u22121 )}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3) The postcondition set, P , which specifies the non-terminals in \u03c4 that are available for creating new connections. Each postcondition, p i = (r x , X y ) i , indicates that the rule r x has the non-terminal X y available for connections. The index y identifies the non-terminal in the rule. In a binary grammar y can take only 3 values: 1 for the first non-terminal (the left child of the source side), 2 for the second non-terminal, and h for the head. The postcondition set can be expressed as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "P\u2261{(r x 1 , X y 1 ) 1 , \u2022 \u2022 \u2022 , (r xm , X ym ) m } 4)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The set of carries, K. We define a different carry, \u03ba i , for each non-terminal available for connections. Each carry stores the extra information required to correctly score the non-local interactions between \u03c4 and the rule that will be connected at that non-terminal. Thus |K| = |P |. Let \u03ba i be the carry associated with the postcondition p i . The set of carries can be expressed as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "K \u2261 {\u03ba 1 , \u03ba 2 , \u2022 \u2022 \u2022 , \u03ba m }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
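{

"text": "To make the four components above concrete, the following is a minimal Python sketch of a partial synchronous-tree; the class and field names are illustrative assumptions, not the authors' implementation.\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass PartialTree:\n    # I: the synchronous-rule instantiations in tau\n    rules: list = field(default_factory=list)\n    # C: connections (r_i, r_{j_i}) between rule instantiations\n    connections: list = field(default_factory=list)\n    # P: postconditions, i.e. non-terminals still open for connections\n    postconditions: list = field(default_factory=list)\n    # K: one carry per open postcondition, so len(carries) == len(postconditions)\n    carries: list = field(default_factory=list)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Undirected Machine Translation",

"sec_num": "2"

},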
|
{ |
|
"text": "Partial synchronous-trees, \u03c4 , are expanded by performing connection-actions. Given a \u03c4 we can connect to it a new rule,r, using one available nonterminal represented by postcondition, p i \u2208 P , and obtain a new partial synchronous-tree\u03c4 . Formally:\u03c4 \u2261 \u03c4 \u22d6\u00e2 , where,\u00e2 = [r, p i ], represents the connection-action. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Machine Translation", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "Algorithm 1 UMT Decoder. Input: source sentence x, weight vector w, grammar G. 1: procedure Decode(x, w, G) : \u03c4 2: \u03c4 \u2190 (I = \u2205, C = \u2205, P = \u2205, K = \u2205); 3: Q \u2190 {[r_leaf, null] | r_leaf is a leaf rule}; 4: while Q \u2260 \u2205 do 5: [r\u0302, p_i] \u2190 PopBestAction(Q, w); 6: \u03c4 \u2190 CreateConnection(\u03c4, r\u0302, p_i); 7: UpdateQueue(Q, r\u0302, p_i); 8: end while 9: Return(\u03c4); 10: procedure CreateConnection(\u03c4, r\u0302, p_i) : \u03c4\u0302 11: \u03c4\u0302.I \u2190 \u03c4.I + r\u0302; 12: \u03c4\u0302.C \u2190 \u03c4.C + (r\u0302, r_{p_i}); 13: \u03c4\u0302.P \u2190 \u03c4.P \u2212 p_i; 14: \u03c4\u0302.K \u2190 \u03c4.K \u2212 \u03ba_i; 15: \u03c4\u0302.K.UpdateCarries(r\u0302, p_i); 16: \u03c4\u0302.P.AddAvailableConnectionsFrom(r\u0302, p_i); 17: \u03c4\u0302.K.AddCarriesForNewConnections(r\u0302, p_i); 18: Return(\u03c4\u0302); 19: procedure UpdateQueue(Q, r\u0302, p_i) : 20: Q.RemoveActionsWith(p_i); 21: Q.AddNewActions(r\u0302, p_i);",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Undirected Machine Translation",

"sec_num": "2"

},
|
{ |
|
"text": "Algorithm 1 gives details of the UMT decoding algorithm. The decoder takes as input the source sentence, x, the parameters of the scoring function, w, and the synchronous-grammar, G. At line 2 the partial synchronous-tree \u03c4 is initialized by setting I, C, P and K to empty sets \u2205. At line 3 the queue of candidate connection-actions is initialized as Q \u2261 { [r leaf , null] | r leaf is a leaf rule}, where null means that there is no postcondition specified, since the first rule does not need to connect to anything. A leaf rule r leaf is any synchronous rule with only terminals on the right-hand sides. At line 4 the main loop starts. Each iteration of the main loop will expand \u03c4 using one connection-action. The loop ends when Q is empty, implying that \u03c4 covers the full sentence and has no more missing branches or parents. The best scoring action according to the parameter vector w is popped from the queue at line 5. The scoring of connection-actions is discussed in details in Section 3.2. At line 6 the selected connection-action is used to expand \u03c4 . At line 7 the queue of candidates is updated accordingly (see lines 19-21). At line 8 the decoder it-erates the main loop, until \u03c4 is complete and is returned at line 9.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "2.1" |
|
}, |
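{

"text": "As an illustration, Algorithm 1's control flow can be rendered in a few lines of Python, building on the PartialTree sketch above; pop_best_action, create_connection and update_queue are stand-ins for the procedures described above, and grammar.leaf_rules is a hypothetical accessor (a sketch, not the actual implementation).\n\ndef decode(x, w, grammar):\n    tau = PartialTree()                          # line 2: I, C, P, K all empty\n    # line 3: one candidate per leaf rule, with no postcondition (None)\n    queue = [(rule, None) for rule in grammar.leaf_rules(x)]\n    while queue:                                 # line 4: loop until tau is complete\n        rule, p = pop_best_action(queue, w)      # line 5: highest-scoring action under w\n        tau = create_connection(tau, rule, p)    # line 6: greedy expansion, never retracted\n        update_queue(queue, tau, rule, p)        # line 7: prune incompatible actions, add new ones\n    return tau                                   # line 9: tau covers the full sentence",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Decoding Algorithm",

"sec_num": "2.1"

},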
|
{ |
|
"text": "Lines 10-18 describe the CreateConnection(\u2022) procedure, that connects the partial synchronoustree \u03c4 to the selected ruler via the postcondition p i specified by the candidate-action selected in line 5. This procedure returns the resulting partial synchronous-tree:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "\u03c4 \u2261 \u03c4 \u22d6 [r, p i ] .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "At line 11,r is added to the rule set I. At line 12 the connection betweenr and r p i (the rule specified in the postcondition) is added to the set of connections C. At line 13, p i is removed from P . At line 14 the carry k i matching with p i is removed from K. At line 15 the set of carries K is updated, in order to update those carries that need to provide information about the new action. At line 16 new postconditions representing the non-terminals inr that are available for subsequent connections are added in P . At line 17 the carries associated with these new postconditions are computed and added to K. Finally at line 18 the updated partial synchronous-tree is returned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In the very first iteration, the CreateConnection(\u2022) procedure has nothing to compute for some lines. Line 11 is not executed since the first leaf rule needs no connection and has nothing to connect to. lines 12-13 are not executed since P and K are \u2205 and p i is not specified for the first action. Line 15 is not executed since there are no carries to be updated. Lines 16-17 only add the postcondition and carry relative to the leaf rule head link.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The procedure used to update Q is reported in lines 19-21. At line 20 all the connection-actions involving the expansion of p i are removed from Q. These actions are the incompatible alternatives to the selected action. In the very first iteration, all actions in Q are removed because they are all incompatible with the connected-graph constraint. At line 21 new connection-actions are added to Q. These are the candidate actions proposing a connection to the available non-terminals of the selected action's new ruler. The rules used for these new candidate-actions must not be in conflict with the current structure of \u03c4 (e.g. the rule cannot generate a source side terminal that is already covered by \u03c4 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Decoding Algorithm", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Training a UMT model simply means training the parameter vector w that is used to choose the best scoring action during decoding. We propose a novel method to apply a kind of minimum error rate training (MERT) to w. Because each action choice must be evaluated in the context of the complete translation-derivation, we formalize this method in terms of Reinforcement Learning. We propose Discriminative Reinforcement Learning as an appropriate way to train a UMT model to maximize the BLEU score of the complete derivation. First we define DRL as a novel generic training framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discriminative Reinforcement Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "RL can be applied to any task, T , that can be formalized in terms of: 1) The set of states S 1 ; 2) A set of actions A s for each state s \u2208 S;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "3) The transition function, T : S \u00d7 A_s \u2192 S, which specifies the next state given a source state and a performed action; 4) The reward function, R : S \u00d7 A_s \u2192 R; 5) The discount factor, \u03b3 \u2208 [0, 1].",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generic Framework of DRL",

"sec_num": "3.1"

},
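{

"text": "For concreteness, the five components can be bundled as follows in Python; the names are illustrative only, and the state set S is left implicit since it can be finite or infinite.\n\nfrom dataclasses import dataclass\nfrom typing import Callable\n\n@dataclass\nclass Task:\n    actions: Callable      # s -> A_s, the actions available in state s\n    transition: Callable   # T : S x A_s -> S, the next state\n    reward: Callable       # R : S x A_s -> float\n    gamma: float = 1.0     # discount factor in [0, 1]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generic Framework of DRL",

"sec_num": "3.1"

},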
|
{ |
|
"text": "A policy is defined as any map \u03c0 : S \u2192 A. Its value function is given by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "V \u03c0 (s 0 ) = \u03c3 i=0 \u03b3 i R(s i , \u03c0(s i ))", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "path(s 0 |\u03c0) \u2261 s 0 , s 1 , \u2022 \u2022 \u2022 , s \u03c3 |\u03c0", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is the sequence of states determined by following policy \u03c0 starting at state s 0 . The Q-function is the total future reward of performing action a 0 in state s 0 and then following policy \u03c0:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Q \u03c0 (s 0 , a 0 ) = R(s 0 , a 0 ) + \u03b3V \u03c0 (s 1 )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
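{

"text": "Both definitions can be read off directly as code; a small generic sketch (assuming a deterministic transition function that returns None at terminal states; all names are illustrative).\n\ndef value(s, policy, transition, reward, gamma):\n    # V^pi(s): accumulate discounted rewards along path(s | pi)\n    total, discount = 0.0, 1.0\n    while s is not None:\n        a = policy(s)\n        total += discount * reward(s, a)\n        discount *= gamma\n        s = transition(s, a)\n    return total\n\ndef q_value(s, a, policy, transition, reward, gamma):\n    # Q^pi(s, a) = R(s, a) + gamma * V^pi(s_1), with s_1 = T(s, a)\n    s1 = transition(s, a)\n    future = value(s1, policy, transition, reward, gamma) if s1 is not None else 0.0\n    return reward(s, a) + gamma * future",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generic Framework of DRL",

"sec_num": "3.1"

},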
|
{ |
|
"text": "Standard RL algorithms search for a policy that maximizes the given reward. Because we are taking a discriminative approach to learn w, we formalize our optimization task similarly to an inverse reinforcement learning problem (Ng and Russell, 2000): we are given information about the optimal action sequence and we want to learn a discriminative reward function. As in other discriminative approaches, this ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "s \u2190SampleState(S); 4:\u00e2 \u2190 \u03c0 w (s); 5: a \u2032 \u2190SampleAction(A s ); 6: if Q \u03c0w (s,\u00e2) < Q \u03c0w (s, a \u2032 ) in D then 7: w \u2190 w + \u03a6 w (s, a \u2032 ) \u2212 \u03a6 w (s,\u00e2); 8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "end if 9: until convergence 10: Return(w); approach simplifies the task of learning the reward function in two respects: the learned reward function only needs to be monotonically related to the true reward function, and this property only needs to hold for the best competing alternatives. This is all we need in order to use the discriminative reward function in an optimal classifier, and this simplification makes learning easier in cases where the true reward function is too complicated to model directly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In RL, an optimal policy \u03c0 * is one which, at each state s, chooses the action which maximizes the future reward Q \u03c0 * (s, a). We assume that the future discriminative reward can be approximated with a linear functionQ \u03c0 (s, a) in some featurevector representation \u03c6 : S \u00d7 A s \u2192 R d that maps a state-action pair to a d-dimensional features vector:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Q \u03c0 (s, a) = w \u03c6(s, a)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where w \u2208 R d . This gives us the following policy:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c0 w (s) = arg max a\u2208As w \u03c6(s, a)", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The set of parameters of this policy is the vector w. With this formalization, all we need to learn is a vector w such that the resulting decisions are compatible with the given information about the optimal action sequence. We propose a Perceptron-like algorithm to learn these parameters. Algorithm 2 describes the DRL meta-algorithm. The Trainer takes as input \u03c6, the task T , and a generic set of data D describing the behaviors we want to learn. The output is the weight vector w of the learned policy that fits the data D. The algorithm consists in a single training loop that is repeated until convergence (lines 2-9). At line 3 a state, s, is sampled from S. At line 4,\u00e2 is set to be the action that would be preferred by the current w-policy. At line 5 an action, a \u2032 , is sampled from A s such that a \u2032 =\u00e2. At line 6 the algorithm checks if preferring path(T (s,\u00e2), \u03c0 w ) over path(T (s, a \u2032 ), \u03c0 w ) is a correct choice according to the behaviors data D that the algorithm aims to learn. If the current w-policy contradicts D, line 7 is executed to update the weight vector to promote \u03a6 w (s, a \u2032 ) and penalize \u03a6 w (s,\u00e2), where \u03a6 w (s, a) is the summation of the features vectors of the entire derivation path starting at (s, a) and following policy \u03c0 w . This way of updating w has the effect of increasing theQ(\u2022) value associated with all the actions in the sequence that generated the promoted structure, and reducing theQ(\u2022) value of the actions in the sequence that generated the penalized structure 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1144, |
|
"end": 1150, |
|
"text": "(s, a)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
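{

"text": "A condensed Python sketch of Algorithm 2 may clarify the update; sample_state, best_action, sample_action, rollout, path_features and bleu are hypothetical stand-ins for the components described above, and the averaged-perceptron bookkeeping is omitted.\n\ndef drl_train(data, phi, epochs=10):\n    w = {}                                        # sparse weight vector, one entry per feature\n    for _ in range(epochs):                       # lines 2-9: repeat until convergence\n        for x, t_star in data:\n            s = sample_state(x, w)                # line 3: a state on the current policy's path\n            a_hat = best_action(s, w, phi)        # line 4: action preferred by pi_w\n            a_alt = sample_action(s, a_hat, w)    # line 5: a competing action, a_alt != a_hat\n            # line 6: compare the complete translations the two choices lead to\n            if bleu(rollout(s, a_alt, w), t_star) > bleu(rollout(s, a_hat, w), t_star):\n                # line 7: promote the whole derivation path of a_alt, demote that of a_hat\n                for name, v in path_features(s, a_alt, w, phi).items():\n                    w[name] = w.get(name, 0.0) + v\n                for name, v in path_features(s, a_hat, w, phi).items():\n                    w[name] = w.get(name, 0.0) - v\n    return w",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generic Framework of DRL",

"sec_num": "3.1"

},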
|
{ |
|
"text": "We have described the DRL meta-algorithm to be as general as possible. When applied to a specific problem, more details can be specified: 1) it is possible to choose specific sampling techniques to implement lines 3 and 5; 2) the test at line 6 needs to be detailed according to the nature of T and D; 3) the update statement at line 7 can be replaced with a more sophisticated update approach. We address these issues and describe a range of alternatives as we apply DRL to UMT in Section 3.2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generic Framework of DRL", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "To apply DRL we formalize the task of translating x with UMT as T \u2261 {S, {A s }, T, R, \u03b3}: 1) The set of states S is the space of all possible UMT partial synchronous-trees, \u03c4 ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of DRL to UMT", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2) The set A \u03c4,x is the set of connection-actions that can expand \u03c4 connecting new synchronousrule instantiations matching the input sentence x on the source side;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of DRL to UMT", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3) The transition function T is the connection function\u03c4 \u2261 \u03c4 \u22d6 a formalized in Section 2 and detailed by the procedure CreateConnection(\u2022) in Algorithm 1; 4) The true reward function R is the BLEU score. BLEU is a loss function that quantifies the difference between the reference translation and the output translation t. The BLEU score can be computed only when a terminal state is reached and a full translation is available. Thus, the rewards are all zero except at terminal states, called a Pure De-layed Reward function; 5) Considering the nature of the problem and reward function, we choose an undiscounted setting: \u03b3 = 1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of DRL to UMT", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Next we specify the details of the DRL algorithm. The data D consists of a set of pairs of sentences, D \u2261 {(x, t * )}, where x is the source sentence and t * is the reference translation. The feature-vector representation function \u03c6 maps a pair (\u03c4, a) to a real valued vector having any number of dimensions. Each dimension corresponds to a distinct feature function that maps: {\u03c4 } \u00d7 A \u03c4,x \u2192 R. Details of the features functions implemented for our model are given in Section 4. Each loop of the DRL algorithm analyzes a single sample (x, t * ) \u2208 D. The state s is sampled from a uniform distribution over", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of DRL to UMT", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "s 0 , s 1 , \u2022 \u2022 \u2022 , s \u03c3 |\u03c0 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of DRL to UMT", |
|
"sec_num": "3.2" |
|
}, |
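{

"text": "For illustration, such a feature map might look as follows in Python; the three feature functions are hypothetical placeholders for the local, contextual and undirected features of Section 4.\n\ndef phi(tau, action):\n    # maps a (partial tree, connection-action) pair to a named, real-valued feature vector\n    rule, p = action\n    return {\n        'lm': lm_feature(tau, rule, p),            # contextual: new n-grams via the carry\n        'cff': cff_feature(tau, rule, p),          # undirected: bound on the missing branches\n        'word_penalty': float(len(rule.target_terminals)),  # local feature\n    }",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Application of DRL to UMT",

"sec_num": "3.2"

},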
|
{ |
|
"text": "The action a \u2032 is sampled from a Zipfian distribution over {A \u03c4,x \u2212\u00e2} sorted with theQ \u03c0w (s, a) function. In this way actions with higher score have higher probability to be drawn, while actions at the bottom of the rank still have a small probability to be selected. The if at line 6 tests if the translation produced by path(T (s, a \u2032 ), \u03c0 w ) has higher BLEU score than the one produced by path(T (s,\u00e2), \u03c0 w ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of DRL to UMT", |
|
"sec_num": "3.2" |
|
}, |
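{

"text": "The Zipfian draw can be sketched as follows; the exact distribution parameters are an assumption, since they are not specified here.\n\nimport random\n\ndef sample_alternative(actions, a_hat, q_score):\n    # rank the competing actions by the model score Q-hat, best first\n    ranked = sorted((a for a in actions if a != a_hat), key=q_score, reverse=True)\n    # Zipfian weights: rank r gets weight 1/r, so low-ranked actions keep a small chance\n    weights = [1.0 / (r + 1) for r in range(len(ranked))]\n    return random.choices(ranked, weights=weights, k=1)[0]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Application of DRL to UMT",

"sec_num": "3.2"

},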
|
{ |
|
"text": "For the update statement at line 7 we use the Averaged Perceptron technique (Freund and Schapire, 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 103, |
|
"text": "(Freund and Schapire, 1999)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of DRL to UMT", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Algorithm 2 can be easily adapted to implement the efficient Averaged Perceptron updates (e.g. see Section 2.1.1 of ). In preliminary experiments, we found that other more aggressive update technique, such as Passive-Aggressive (Crammer et al., 2006) , Aggressive (Shen et al., 2007) , or MIRA (Crammer and Singer, 2003) , lead to worst accuracy. To see why this might be, consider that a MT decoder needs to learn to construct structures (t, h), while the training data specifies the gold translation t * but gives no information on the hidden-correspondence structure h. As discussed in , there are output structures that match the reference translation using a wrong internal structure (e.g. assuming wrong internal alignment). While in other cases the output translation can be a valid alternative translation but gets a low BLEU score because it differs from t * . Aggressively promoting/penalizing structures whose correctness can be only partially verified can be expected to harm generalization ability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 228, |
|
"end": 250, |
|
"text": "(Crammer et al., 2006)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 283, |
|
"text": "(Shen et al., 2007)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 320, |
|
"text": "(Crammer and Singer, 2003)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Application of DRL to UMT", |
|
"sec_num": "3.2" |
|
}, |
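{

"text": "For reference, the Averaged Perceptron simply returns the mean of the weight vector over all update steps; a naive sketch follows (efficient variants keep running sums instead of storing snapshots).\n\ndef average_weights(snapshots):\n    # snapshots: the weight vector recorded after every training step\n    avg = {}\n    for w in snapshots:\n        for name, value in w.items():\n            avg[name] = avg.get(name, 0.0) + value\n    n = float(len(snapshots))\n    return {name: value / n for name, value in avg.items()}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Application of DRL to UMT",

"sec_num": "3.2"

},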
|
{ |
|
"text": "In this section we show how the features designed for bottom-up HMT can be adapted to the undirected approach, and we introduce a new feature from the class of undirected features that are made possible by the undirected approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Local features depend only on the action rule r. These features can be used in the undirected approach without adaptation, since they are independent of the surrounding structure. For our experiments we use a standard set of local features: the probability of the source phrase given the target phrase; the lexical translation probabilities of the source words given the target words; the lexical translation probabilities of the target words given the source words; and the Word Penalty feature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Contextual features are dependent on the interaction between the action rule r and the available context. In UMT all the needed information about the available context is stored in the carry \u03ba i . Therefore, the computation of contextual features whose carry's size is bounded (like the LM) requires constant time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The undirected adaptation of the LM feature computes the scores of the new n-grams formed by adding the terminals of the action rule r to the current partial translation \u03c4 . In the case that the action rule r is connected to \u03c4 via a child nonterminal, the carry is expressed as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u03ba i \u2261 ([W L \u22c6 W R ])", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": ". Where W L and W R are respectively the left and right boundary target words of the span covered by \u03c4 . This notation is analogous to the standard star notation used for the bottom-up decoder (e.g. (Chiang, 2007) Section 5.3.2). In the case that r is connected to \u03c4 via the head non-terminal, the carry is expressed as", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 213, |
|
"text": "(Chiang, 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u03ba i \u2261 (W R ]-[W L ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Where W L and W R are respectively the left and right boundary target words of the surrounding context provided by \u03c4 . The boundary words stored in the carry and the terminals of the action rule are all the information needed to compute and score the new n-grams generated by the connection-action.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
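{

"text": "As an example, with a bigram LM the connection-action can be scored from the carry alone; a simplified sketch (bigram case only, hypothetical names; the real feature also handles head-side carries and higher-order n-grams).\n\ndef lm_connection_score(w_left, new_words, w_right, lm_logprob):\n    # w_left, w_right: boundary target words from the carry (None if absent)\n    # new_words: target terminals contributed by the action rule\n    words = ([w_left] if w_left else []) + list(new_words) + ([w_right] if w_right else [])\n    # score only the n-grams newly formed by this connection\n    return sum(lm_logprob(prev, cur) for prev, cur in zip(words, words[1:]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Undirected Features",

"sec_num": "4"

},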
|
{ |
|
"text": "In addition, we introduce the Context-Free Factor (CFF) features. An action rule r is connected to \u03c4 via one of r's non-terminals, X r,\u03c4 . Thus, the score of the interaction between r and the context structure attached to X r,\u03c4 can be computed exactly, while the score of the structures attached to other r nonterminals (i.e. those in postconditions) cannot be computed since these branches are missing. Each of these postcondition nonterminals has an associated CFF feature, which is an upper bound on the score of its missing branch. More precisely, it is an upper bound on the context-free component of this score. This upper bound can be exactly and efficiently computed using the Forest Rescoring Framework (Huang and Chiang, 2007; Huang, 2008) . This framework separates the MT decoding in two steps. In the first step only the context-free factors are considered. The output of the first step is a hypergraph called the contextfree-forest, which compactly represents an exponential number of synchronous-trees. The second step introduces contextual features by applying a process of state-splitting to the context-free-forest, rescoring with non-context-free factors, and efficiently pruning the search space.", |
|
"cite_spans": [ |
|
{ |
|
"start": 712, |
|
"end": 736, |
|
"text": "(Huang and Chiang, 2007;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 737, |
|
"end": 749, |
|
"text": "Huang, 2008)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To efficiently compute CFF features we run the Inside-Outside algorithm with the (max, +) semiring (Goodman, 1999) over the context-freeforest. The result is a map that gives the maximum Inside and Outside scores for each node in the context-free forest. This map is used to get the value of the CFF features in constant time while running the forest rescoring step.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 114, |
|
"text": "(Goodman, 1999)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Undirected Features", |
|
"sec_num": "4" |
|
}, |
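{

"text": "A sketch of the (max, +) Inside pass over the context-free forest follows; the Outside pass is analogous, and the node/edge attributes are assumptions about the hypergraph representation.\n\ndef max_inside(nodes_in_topological_order):\n    # inside[v]: best context-free score of any subtree rooted at node v\n    inside = {}\n    for v in nodes_in_topological_order:          # leaves come first\n        if not v.incoming:                        # leaf node: empty product\n            inside[v] = 0.0\n            continue\n        inside[v] = max(edge.weight + sum(inside[u] for u in edge.tails)\n                        for edge in v.incoming)   # best deriving hyperedge\n    return inside",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Undirected Features",

"sec_num": "4"

},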
|
{ |
|
"text": "We implement our model on top of Cdec (Dyer et al., 2010) . Cdec provides a standard implementation of the HMT decoder (Chiang, 2007) and MERT training (Och, 2003) that we use as baseline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 57, |
|
"text": "(Dyer et al., 2010)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 119, |
|
"end": 133, |
|
"text": "(Chiang, 2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 163, |
|
"text": "(Och, 2003)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We experiment on the NIST Chinese-English parallel corpus. The training corpus contains 239k sentence pairs with 6.9M Chinese words and 8.9M English words. The test set contains 919 sentence pairs. The hierarchical translation grammar was extracted using the Joshua toolkit (Li et al., 2009) implementation of the suffix array rule extractor algorithm (Callison-Burch et al., 2005; Lopez, 2007) . Table 1 reports the decoding time measures. HMT with beam1 is the fastest possible configuration for HMT, but it is 71.59% slower than UMT. This is because HMT b1 constructs O(n 2 ) subtrees, many of which end up not being used in the final result, whereas UMT only constructs the rule instantiations that are required. HMT with beam30 is the fastest configuration that reaches state of the art accuracy, but increases the average time per sentence by an additional 131.36% when compared with UMT. 1153.5 ms +331.37% the average time spent on the forest rescoring step, which is the only step where the decoders actually differ. This is the step that involves the integration of the Language Model and other contextual features. For HMT b30, rescoring takes two thirds of the total decoding time. Thus rescoring is the most time consuming step in the pipeline. The rescoring time comparison shows even bigger gains for UMT. HMT b30 is almost 6 times slower than UMT. Table 2 reports the training time measures. These results show HMT b30 training is more than 4 times slower than UMT training with DRL. Comparing with Table 1 , we notice that the relative gain on average training time is higher than the gain measured at decoding time. This is because MERT has an higher complexity than DRL. Both of the training algorithms requires 10 training epochs to reach convergence. Table 3 reports the accuracy measures. As expected, accuracy degrades the more aggressively the search space is pruned. UMT trained with DRL loses 2.0 BLEU points compared to HMT b30. This corresponds to a relative-loss of 6.33%. Although not inconsequential, this variation is not considered big (e.g. at the WMT-11 Machine Translation shared task (Callison-Burch et al., 2011) ). To measure the significance of the variation, we compute the sign test and measure the one-tail p-value for the presented models in comparison to HMT b30. From the values re-ported in the fourth column, we can observe that the BLEU score variations would not normally be considered significant. For example, at WMT-11 two systems were considered equivalent if p > 0.1, as in these cases. The accuracy cannot be compared in terms of search score since the models we are comparing are trained with distinct algorithms and thus the search scores are not comparable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 274, |
|
"end": 291, |
|
"text": "(Li et al., 2009)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 381, |
|
"text": "(Callison-Burch et al., 2005;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 394, |
|
"text": "Lopez, 2007)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 2121, |
|
"end": 2150, |
|
"text": "(Callison-Burch et al., 2011)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 404, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1364, |
|
"end": 1371, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1515, |
|
"end": 1522, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1772, |
|
"end": 1779, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To test the impact of the CFF features, we trained and tested UMT with DRL with and without these features. This resulted in an accuracy decrease of 2.3 BLEU points. Thus these features are important for the success of the greedy approach. They provide an estimate of the score of the missing branches, thus helping to avoid some actions that have a good local score but lead to final translations with low global score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To validate the results, additional experiments were executed on the French to Italian portion of the Europarl corpus v6. This portion contains 190k pairs of sentences. The first 186k sentences were used to extract the grammar and train the two models. The final tests were performed on the remaining 4k sentence pairs. With this corpus we measured a similar speed gain. HMT b30 is 2.3 times slower at decoding compared to UMT, and 6.1 times slower at rescoring, while UMT loses 1.1 BLEU points in accuracy. But again the accuracy differences are not considered significant. We measured a p-value of 0.25, which is not significant at the 0.1 level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Models sharing similar intuitions have been previously applied to other structure prediction tasks. For example, Nivre et al. (2006) presents a linear time syntactic dependency parser, which is constrained in a left-to-right decoding order. This model offers a different accuracy/complexity balance than the quadratic time graph-based parser of Mcdonald et al. (2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 132, |
|
"text": "Nivre et al. (2006)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 345, |
|
"end": 367, |
|
"text": "Mcdonald et al. (2005)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Other approaches learning a model specifically for greedy decoding have been applied with suc-cess to other less complex tasks. Shen et al. (2007) present the Guided Learning (GL) framework for bidirectional sequence classification. GL successfully combines the tasks of learning the order of inference and training the local classifier in a single Perceptron-like algorithm, reaching state of the art accuracy with complexity lower than the exhaustive counterpart (Collins, 2002) . Goldberg and Elhadad (2010) present a similar training approach for a Dependency Parser that builds the tree-structure by recursively creating the easiest arc in a non-directional manner. This model also integrates the tasks of learning the order of inference and training the parser in a single Perceptron. By \"non-directional\" they mean the removal of the constraint of scanning the sentence from left to right, which is typical of shift-reduce models. However this algorithm still builds the tree structures in a bottom-up fashion. This model has a O(n log n) decoding complexity and accuracy performance close to the O(n 2 ) graph-based parsers (Mcdonald et al., 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 146, |
|
"text": "Shen et al. (2007)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 480, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 510, |
|
"text": "Goldberg and Elhadad (2010)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1132, |
|
"end": 1155, |
|
"text": "(Mcdonald et al., 2005)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Similarities can be found between DRL and previous work that applies discriminative training to structured prediction: Collins and Roark (2004) present an Incremental Parser trained with the Perceptron algorithm. Their approach is specific to dependency parsing and requires a function to test exact match of tree structures to trigger parameter updates. On the other hand, DRL can be applied to any structured prediction task and can handle any kind of reward function. LASO and SEARN (Daum\u00e9 III et al., 2009; are generic frameworks for discriminative training for structured prediction: LASO requires a function that tests correctness of partial structures to trigger early updates, while SEARN requires an optimal policy to initialize the learning algorithm. Such a test function or optimal policy cannot be computed for tasks such as MT where the hidden correspondence structure h is not provided in the training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 143, |
|
"text": "Collins and Roark (2004)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 510, |
|
"text": "(Daum\u00e9 III et al., 2009;", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In general, we believe that greedy-discriminative solutions are promising for tasks like MT, where there is not a single correct solution: normally there are many correct ways to translate the same sentence, and for each correct translation there are many different derivation-trees generating that translation, and each correct derivation tree can be built greedily following different inference orders. Therefore, the set of correct decoding paths is a reasonable portion of UMT's search space, giving a well-designed greedy algorithm a chance to find a good translation even without beam search.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In order to directly evaluate the impact of our proposed decoding strategy, in this paper the only novel features that we consider are the CFF features. But to take full advantage of the power of discriminative training and the lower decoding complexity, it would be possible to vastly increase the number of features. The UMT's undirected nature allows the integration of non-bottom-up contextual features, which cannot be used by standard HMT and PbMT. And the use of a historybased model allows features from an arbitrarily wide context, since the model does not need to be factorized. Exploring the impact of this advantage is left for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The main contribution of this work is the proposal of a new MT model that offers an accuracy/complexity balance that was previously unavailable among the choices of hierarchical models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "We have presented the first Undirected framework for MT. This model combines advantages given by the use of hierarchical synchronousgrammars with a more efficient decoding algorithm. UMT's nature allows us to design novel undirected features that better approximate contextual features (such as the LM), and to introduce a new class of undirected features that cannot be used by standard bottom-up decoders. Furthermore, we generalize the training algorithm into a generic Discriminative Reinforcement Learning meta-algorithm that can be applied to any structured prediction task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "S can be either finite or infinite. 2 For simplicity we describe a deterministic process. To generalize to the stochastic process, replace the transition function with the transition probability: Psa(s \u2032 ), s \u2032 \u2208 S.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Preliminary experiments with updating only the features for\u00e2 and a \u2032 produced substantially worse results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The mathematics of statistical machine translation: parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Della" |
|
], |
|
"last": "Vincent", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19:263-311.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Scaling phrase-based statistical machine translation to larger corpora and longer phrases", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Bannard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josh", |
|
"middle": [], |
|
"last": "Schroeder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ACL '05: Proceedings of the 43rd Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Callison-Burch, Colin Bannard, and Josh Schroeder. 2005. Scaling phrase-based statisti- cal machine translation to larger corpora and longer phrases. In ACL '05: Proceedings of the 43rd Con- ference of the Association for Computational Lin- guistics, Ann Arbor, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Findings of the 2011 workshop on statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christof", |
|
"middle": [], |
|
"last": "Monz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omar", |
|
"middle": [], |
|
"last": "Zaidan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "WMT '11: Proceedings of the 6th Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 work- shop on statistical machine translation. In WMT '11: Proceedings of the 6th Workshop on Statistical Ma- chine Translation, Edinburgh, Scotland.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A hierarchical phrase-based model for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ACL '05: Proceedings of the 43rd Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In ACL '05: Proceedings of the 43rd Conference of the As- sociation for Computational Linguistics, Ann Arbor, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Hierarchical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "2", |
|
"pages": "201--228", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2007. Hierarchical phrase-based trans- lation. Computational Linguistics, 33(2):201-228.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Incremental parsing with the perceptron algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ACL '04: Proceedings of the 42rd Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In ACL '04: Proceedings of the 42rd Conference of the Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "EMNLP '02: Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In EMNLP '02: Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, Philadel- phia, PA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Ultraconservative online algorithms for multiclass problems", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "951--991", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer and Yoram Singer. 2003. Ultracon- servative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951-991.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Online passive-aggressive algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Dekel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shai", |
|
"middle": [], |
|
"last": "Shalev-Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "551--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer, Ofer Dekel, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive al- gorithms. Journal of Machine Learning Research, 7:551-585.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Learning as search optimization: approximate large margin methods for structured prediction", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ICML '05: Proceedings of the 22nd International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005. Learning as search optimization: approximate large margin methods for structured prediction. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Search-based structured prediction as classification", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Langford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ASLTSP '05: Proceedings of the NIPS Workshop on Advances in Structured Learning for Text and Speech Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2005. Search-based structured prediction as clas- sification. In ASLTSP '05: Proceedings of the NIPS Workshop on Advances in Structured Learn- ing for Text and Speech Processing, Whistler, British Columbia, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Searn in practice", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Langford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2006. Searn in practice. Technical report.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Search-based structured prediction. Submitted to", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Langford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Machine Learning Journal", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Submit- ted to Machine Learning Journal.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Practical structured learning techniques for natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hal Daum\u00e9 III. 2006. Practical structured learning techniques for natural language processing. Ph.D. thesis, University of Southern California.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Weese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hendra", |
|
"middle": [], |
|
"last": "Setiawan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ferhan", |
|
"middle": [], |
|
"last": "Ture", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Eidelman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACL '10: Proceedings of the ACL 2010 System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Hendra Setiawan, Ferhan Ture, Vladimir Ei- delman, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In ACL '10: Proceedings of the ACL 2010 System Demonstrations, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Large margin classification using the perceptron algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Freund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Schapire", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Machine Learning", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "277--296", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Freund and Robert E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Heuristic Search for Non-Bottom-Up Tree Structure Prediction", |
|
"authors": [ |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Gesmundo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "EMNLP '11: Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrea Gesmundo and James Henderson. 2011. Heuristic Search for Non-Bottom-Up Tree Structure Prediction. In EMNLP '11: Proceedings of the 2011 Conference on Empirical Methods in Natural Lan- guage Processing, Edinburgh, Scotland, UK.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "An efficient algorithm for easy-first non-directional dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Elhadad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "NAACL '10: Proceedings of the 11th Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Goldberg and Michael Elhadad. 2010. An effi- cient algorithm for easy-first non-directional depen- dency parsing. In NAACL '10: Proceedings of the 11th Conference of the North American Chapter of the Association for Computational Linguistics, Los Angeles, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Semiring parsing. Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Goodman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "25", |
|
"issue": "", |
|
"pages": "573--605", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joshua Goodman. 1999. Semiring parsing. Computa- tional Linguistics, 25:573-605.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Better k-best parsing", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IWPT '05: Proceedings of the 9th International Workshop on Parsing Technology, Vancouver", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In IWPT '05: Proceedings of the 9th Inter- national Workshop on Parsing Technology, Vancou- ver, British Columbia, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Forest rescoring: Faster decoding with integrated language models", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACL '07: Proceedings of the 45th Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Huang and David Chiang. 2007. Forest rescor- ing: Faster decoding with integrated language mod- els. In ACL '07: Proceedings of the 45th Confer- ence of the Association for Computational Linguis- tics, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Forest-based algorithms in natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang Huang. 2008. Forest-based algorithms in natu- ral language processing. Ph.D. thesis, University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Statistical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "NAACL '03: Proceedings of the 4th Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL '03: Proceedings of the 4th Conference of the North American Chapter of the Association for Computational Linguistics, Edmonton, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Joshua: An open source toolkit for parsing-based machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Zhifei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjeev", |
|
"middle": [], |
|
"last": "Khudanpur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lane", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wren", |
|
"middle": [], |
|
"last": "Thornton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Weese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omar", |
|
"middle": [], |
|
"last": "Zaidan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "WMT '09: Proceedings of the 4th Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhifei Li, Chris Callison-Burch, Chris Dyer, San- jeev Khudanpur, Lane Schwartz, Wren Thornton, Jonathan Weese, and Omar Zaidan. 2009. Joshua: An open source toolkit for parsing-based machine translation. In WMT '09: Proceedings of the 4th Workshop on Statistical Machine Translation, Athens, Greece.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "An end-to-end discriminative approach to machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "COLING-ACL '06: Proceedings of the 21st International Conference on Computational Linguistics and the 44th Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006. An end-to-end discrimina- tive approach to machine translation. In COLING- ACL '06: Proceedings of the 21st International Con- ference on Computational Linguistics and the 44th Conference of the Association for Computational Linguistics, Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Hierarchical phrase-based translation with suffix arrays", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lopez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "EMNLP-CoNLL '07: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Lopez. 2007. Hierarchical phrase-based trans- lation with suffix arrays. In EMNLP-CoNLL '07: Proceedings of the 2007 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Online large-margin training of dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "ACL '05: Proceedings of the 43rd Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Mcdonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of de- pendency parsers. In ACL '05: Proceedings of the 43rd Conference of the Association for Computa- tional Linguistics, Ann Arbor, MI, USA.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Algorithms for inverse reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Russell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "ICML '00: Proceedings of the 17th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Y. Ng and Stuart Russell. 2000. Algorithms for inverse reinforcement learning. In ICML '00: Proceedings of the 17th International Conference on Machine Learning, Stanford University, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Maltparser: A data-driven parser-generator for dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Nilsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "LREC '06: Proceedings of the 5th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parser-generator for de- pendency parsing. In LREC '06: Proceedings of the 5th International Conference on Language Re- sources and Evaluation, Genoa, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "ACL '03: Proceedings of the 41st Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL '03: Pro- ceedings of the 41st Conference of the Association for Computational Linguistics, Sapporo, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The Perceptron: A probabilistic model for information storage and organization in the brain", |
|
"authors": [ |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Rosenblatt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1958, |
|
"venue": "Psychological Review", |
|
"volume": "65", |
|
"issue": "6", |
|
"pages": "386--408", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank Rosenblatt. 1958. The Perceptron: A proba- bilistic model for information storage and organiza- tion in the brain. Psychological Review, 65(6):386- 408.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Guided learning for bidirectional sequence classification", |
|
"authors": [ |
|
{ |
|
"first": "Libin", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giorgio", |
|
"middle": [], |
|
"last": "Satta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ACL '07: Proceedings of the 45th Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Libin Shen, Giorgio Satta, and Aravind Joshi. 2007. Guided learning for bidirectional sequence classifi- cation. In ACL '07: Proceedings of the 45th Confer- ence of the Association for Computational Linguis- tics, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Binarizing syntax trees to improve syntax-based machine translation accuracy", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "EMNLP-CoNLL '07: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Wang, Kevin Knight, and Daniel Marcu. 2007. Binarizing syntax trees to improve syntax-based ma- chine translation accuracy. In EMNLP-CoNLL '07: Proceedings of the 2007 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "A syntaxbased statistical translation model", |
|
"authors": [ |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "ACL '01: Proceedings of the 39th Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax- based statistical translation model. In ACL '01: Pro- ceedings of the 39th Conference of the Association for Computational Linguistics, Toulouse, France.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Synchronous binarization for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "NAACL '06: Proceedings of the 7th Conference of the North American Chapter", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for ma- chine translation. In NAACL '06: Proceedings of the 7th Conference of the North American Chapter of the Association for Computational Linguistics, New York, New York.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF3": { |
|
"text": "Training speed comparison.", |
|
"content": "<table><tr><td>Model</td><td colspan=\"3\">BLEU relative loss p-value</td></tr><tr><td colspan=\"2\">UMT with DRL 30.14</td><td>6.33%</td><td>0.18</td></tr><tr><td>HMT b1</td><td>30.87</td><td>4.07%</td><td>0.21</td></tr><tr><td>HMT b30</td><td>32.18</td><td>-</td><td>-</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "Accuracy comparison.", |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |