|
{ |
|
"paper_id": "Q19-1015", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:09:47.102031Z" |
|
}, |
|
"title": "Learning Neural Sequence-to-Sequence Models from Weak Feedback with Bipolar Ramp Loss", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Jehl", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Carolin", |
|
"middle": [], |
|
"last": "Lawrence", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In many machine learning scenarios, supervision by gold labels is not available and consequently neural models cannot be trained directly by maximum likelihood estimation. In a weak supervision scenario, metric-augmented objectives can be employed to assign feedback to model outputs, which can be used to extract a supervision signal for training. We present several objectives for two separate weakly supervised tasks, machine translation and semantic parsing. We show that objectives should actively discourage negative outputs in addition to promoting a surrogate gold structure. This notion of bipolarity is naturally present in ramp loss objectives, which we adapt to neural models. We show that bipolar ramp loss objectives outperform other non-bipolar ramp loss objectives and minimum risk training on both weakly supervised tasks, as well as on a supervised machine translation task. Additionally, we introduce a novel token-level ramp loss objective, which is able to outperform even the best sequence-level ramp loss on both weakly supervised tasks.", |
|
"pdf_parse": { |
|
"paper_id": "Q19-1015", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In many machine learning scenarios, supervision by gold labels is not available and consequently neural models cannot be trained directly by maximum likelihood estimation. In a weak supervision scenario, metric-augmented objectives can be employed to assign feedback to model outputs, which can be used to extract a supervision signal for training. We present several objectives for two separate weakly supervised tasks, machine translation and semantic parsing. We show that objectives should actively discourage negative outputs in addition to promoting a surrogate gold structure. This notion of bipolarity is naturally present in ramp loss objectives, which we adapt to neural models. We show that bipolar ramp loss objectives outperform other non-bipolar ramp loss objectives and minimum risk training on both weakly supervised tasks, as well as on a supervised machine translation task. Additionally, we introduce a novel token-level ramp loss objective, which is able to outperform even the best sequence-level ramp loss on both weakly supervised tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sequence-to-sequence neural models are standardly trained using a maximum likelihood estimation (MLE) objective. However, MLE training requires full supervision by gold target structures, which in many scenarios are too difficult or expensive to obtain. For example, in semantic parsing for question-answering it is often easier to collect gold answers than gold parses (Clarke et al., 2010; Berant et al., 2013; Pasupat and Liang, 2015; Rajpurkar et al., 2016, inter alia). In machine translation, there are many domains for which no gold references exist, although cross-lingual document-level links are present for many multilingual data collections. (* Both authors contributed equally to this publication.)",
|
"cite_spans": [ |
|
{ |
|
"start": 433, |
|
"end": 454, |
|
"text": "(Clarke et al., 2010;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 475, |
|
"text": "Berant et al., 2013;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 476, |
|
"end": 500, |
|
"text": "Pasupat and Liang, 2015;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 536, |
|
"text": "Rajpurkar et al., 2016, inter alia)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we investigate methods where a supervision signal for output structures can be extracted from weak feedback. In the following, we use learning from weak feedback, or weakly supervised learning, to refer to a scenario where output structures generated by the model are judged according to an external metric, and this feedback is used to extract a supervision signal that guides the learning process. Metric-augmented sequence-level objectives from reinforcement learning (Williams, 1992; Ranzato et al., 2016) , minimum risk training (MRT) (Smith and Eisner, 2006; Shen et al., 2016) or margin-based structured prediction objectives (Taskar et al., 2005; Edunov et al., 2018) can be seen as instances of such algorithms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 485, |
|
"end": 501, |
|
"text": "(Williams, 1992;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 523, |
|
"text": "Ranzato et al., 2016)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 554, |
|
"end": 578, |
|
"text": "(Smith and Eisner, 2006;", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 597, |
|
"text": "Shen et al., 2016)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 647, |
|
"end": 668, |
|
"text": "(Taskar et al., 2005;", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 689, |
|
"text": "Edunov et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In natural language processing applications, such algorithms have mostly been used in combination with full supervision tasks, making it possible to compute a feedback score from metrics such as BLEU or F-score that measure the similarity of output structures against gold structures. Our main interest is in weak supervision tasks where the calculation of a feedback score cannot fall back onto gold structures. For example, matching proposed answers to a gold answer can guide a semantic parser towards correct parses, and matching proposed translations against linked documents can guide learning in machine translation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In such scenarios the judgments by the external metric may be unreliable and thus unable to select a good update direction. It is our intuition that a more reliable signal can be produced by not just encouraging outputs that are good according to weak positive feedback, but also by actively discouraging bad structures. In this way, a system can more effectively learn what distinguishes good outputs from bad ones. We call an objective that incorporates this idea a bipolar objective. The bipolar idea is naturally captured by the structured ramp loss objective (Chapelle et al., 2009) , especially in the formulation by Gimpel and Smith (2012) and Chiang (2012) , who use ramp loss to separate a hope from a fear output in a linear structured prediction model. We employ several ramp loss objectives for two weak supervision tasks, and adapt them to neural models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 564, |
|
"end": 587, |
|
"text": "(Chapelle et al., 2009)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 623, |
|
"end": 646, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 664, |
|
"text": "Chiang (2012)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "First, we turn to the task of semantic parsing in a setup where only question-answer pairs, but no gold semantic parses, are given. We assume a baseline system has been trained using a small supervised data set of question-parse pairs under the MLE objective. The goal is to improve this system by leveraging a larger data set of questionanswer pairs. During learning, the semantic parser suggests parses for which corresponding answers are retrieved. These answers are then compared to the gold answer and the resulting weak supervision signal guides the semantic parser towards finding correct parses. We can show that a bipolar ramp loss objective can improve upon the baseline by over 12 percentage points in F1 score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Second, we use ramp losses on a machine translation task where only weak supervision in the form of cross-lingual document-level links is available. We assume a translation system has been trained using MLE on out-of-domain data. We then investigate whether documentlevel links can be used as a weak supervision signal to adapt the translation system to the target domain. We formulate ramp loss objectives that incorporate bipolar supervision from relevant and irrelevant documents. We also present a metric that allows us to include bipolar supervision in an MRT objective. Experiments show that bipolar supervision is crucial for obtaining gains over the baseline. Even with this very weak supervision, we are able to achieve an improvement of over 0.4% BLEU over the baseline using a bipolar ramp loss.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Finally, we turn to a fully supervised machine translation task. In supervised learning, MLE training in a fully supervised scenario has also been associated with two issues. First, it can cause exposure bias (Ranzato et al., 2016) because during training the model receives its context from the gold structures of the training data, but at test time the context is drawn from the model distribution instead. Second, the MLE objective is agnostic to the final evaluation metric, causing a loss-evaluation mismatch (Wiseman and Rush, 2016) . Our experiments use a similar setup as Edunov et al. (2018) , who apply structured prediction losses to two fully supervised sequence-to-sequence tasks, but do not consider structured ramp loss objectives. Like our predecessors, we want to understand whether training a pre-trained machine translation model further with a metric-informed sequence-level objective will improve translation performance by alleviating the above-mentioned issues. By gauging the potential of applying bipolar ramp loss in a full supervision scenario, we achieve the best results for a bipolar ramp loss, improving the baseline by over 0.4% BLEU.",
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 231, |
|
"text": "(Ranzato et al., 2016)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 514, |
|
"end": 538, |
|
"text": "(Wiseman and Rush, 2016)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 580, |
|
"end": 600, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In sum, we show that bipolar ramp loss is superior to other sequence-level objectives for all investigated tasks, supporting our intuition that a bipolar approach is crucial where strong positive supervision is not available. In addition to adapting the ramp loss objective to weak supervision, our ramp loss objective can also be adapted to operate at the token level, which makes it particularly suitable for neural models as they produce their outputs token by token. A token-level objective also better emulates the behavior of the ramp loss for linear models, which only update the weights of features that differ between hope and fear. Finally, the token-level objective allows us to capture token-level errors in a setup where MLE training is not available. Using this objective, we obtain additional gains on top of the sequence-level ramp loss for weakly supervised tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Training neural models with metric-augmented objectives has been explored for various NLP tasks in supervised and weakly supervised scenarios. MRT for neural models has previously been used for machine translation (Shen et al., 2016) and semantic parsing Guu et al., 2017) . 1 Other objectives based on classical structured prediction losses have been used for both machine translation and summarization (Edunov et al., 2018) , as well as semantic parsing (Iyyer et al., 2017; Misra et al., 2018) . Objectives inspired by REINFORCE have, for example, been applied to machine translation (Ranzato et al., 2016; Norouzi et al., 2016) , semantic parsing Mou et al., 2017; Guu et al., 2017) , and reading comprehension (Choi et al., 2017; Yang et al., 2017) . 2 Misra et al. (2018) are the first to compare several objectives for neural semantic parsing. For semantic parsing, they find that objectives employing structured prediction losses perform best. Edunov et al. (2018) compare different classical structured prediction objectives including MRT on a fully supervised machine translation task. They find MRT to perform best. However, they only obtain larger gains by interpolating MRT with the MLE loss. Neither Misra et al. (2018) nor Edunov et al. (2018) investigate objectives that correspond to the bipolar ramp loss that is central in our work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 233, |
|
"text": "(Shen et al., 2016)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 272, |
|
"text": "Guu et al., 2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 425, |
|
"text": "(Edunov et al., 2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 456, |
|
"end": 476, |
|
"text": "(Iyyer et al., 2017;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 496, |
|
"text": "Misra et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 587, |
|
"end": 609, |
|
"text": "(Ranzato et al., 2016;", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 610, |
|
"end": 631, |
|
"text": "Norouzi et al., 2016)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 651, |
|
"end": 668, |
|
"text": "Mou et al., 2017;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 686, |
|
"text": "Guu et al., 2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 734, |
|
"text": "(Choi et al., 2017;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 735, |
|
"end": 753, |
|
"text": "Yang et al., 2017)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 756, |
|
"end": 777, |
|
"text": "2 Misra et al. (2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 952, |
|
"end": 972, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1214, |
|
"end": 1233, |
|
"text": "Misra et al. (2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1238, |
|
"end": 1258, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The ramp loss objective (Chapelle et al., 2009) has been applied to supervised phrase-based machine translation (Gimpel and Smith, 2012; Chiang, 2012) . We adapt these objectives to neural models and adapt them to incorporate bipolar weak supervision, while also introducing a novel token-level ramp loss objective.", |
|
"cite_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 47, |
|
"text": "(Chapelle et al., 2009)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 136, |
|
"text": "(Gimpel and Smith, 2012;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 137, |
|
"end": 150, |
|
"text": "Chiang, 2012)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our neural sequence-to-sequence models utilize an encoder-decoder setup (Cho et al., 2014; Sutskever et al., 2014) with an attention mechanism (Bahdanau et al., 2015) . Specifically, we employ the framework NEMATUS (Sennrich et al., 2017) . Given an input sequence x = x_1, x_2, ..., x_{|x|}, the probability that the model assigns to an output sequence",
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 90, |
|
"text": "(Cho et al., 2014;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 91, |
|
"end": 114, |
|
"text": "Sutskever et al., 2014)", |
|
"ref_id": "BIBREF45" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 166, |
|
"text": "(Bahdanau et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 238, |
|
"text": "(Sennrich et al., 2017)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Sequence-to-Sequence Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "y = y_1, y_2, ..., y_{|y|} is given by \u03c0_w(y|x) = \u220f_{j=1}^{|y|} \u03c0_w(y_j|y_{<j}, x).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Sequence-to-Sequence Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Using beam search, we can obtain a k-best list K(x), sorted from most likely to least likely, and we define the most likely output as \u0177 = arg max_{y \u2208 K(x)} \u03c0_w(y|x).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Neural Sequence-to-Sequence Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Prior to employing metric-augmented objectives, we assume that a model has been pre-trained with a maximum likelihood estimation (MLE) objective. Given inputs x and gold structures \u0233, the parameters of the neural network are updated using Stochastic Gradient Descent (SGD) with minibatches of size M, leading to the following objective:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "L_{MLE} = \u2212 (1/M) \u2211_{m=1}^{M} \u2211_{j=1}^{|\u0233_m|} log \u03c0_w(\u0233_{m,j}|\u0233_{m,<j}, x_m). (1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Maximum Likelihood Estimation (MLE).",

"sec_num": null

},

{

"text": "Minimum Risk Training (MRT). We compare our ramp loss objectives to MRT (Shen et al., 2016), which uses an external metric to assign rewards to model outputs. Given an input x, S outputs are sampled from the model distribution and updates are performed based on the following MRT objective:",
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 95, |
|
"text": "(Shen et al., 2016)", |
|
"ref_id": "BIBREF43" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L_{MRT} = \u2212 (1/M) \u2211_{m=1}^{M} (1/S) \u2211_{s=1}^{S} \u03c0_w(y_{m,s}|x_m) \u03b4(y_{m,s}),",
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u03b4(y_{m,s}) is the reward returned for y_{m,s} by the external metric, and \u03c0_w(y_{m,s}|x_m) is a distribution over outputs that is normalized over the S samples and can be controlled for sharpness by a temperature parameter. 3 Following Shen et al. (2016), we use a baseline term b(x_m) that acts as a control variate for variance reduction of the stochastic gradient (Williams, 1992; Greensmith et al., 2004) and allows negative updates for rewards smaller than the baseline. We compute this term by sampling S\u2032 outputs from the model distribution such that b(x) ",
|
"cite_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 384, |
|
"text": "(Williams, 1992;", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 409, |
|
"text": "Greensmith et al., 2004)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 562, |
|
"text": "b(x)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "= (1/S\u2032) \u2211_{s\u2032=1}^{S\u2032} \u03b4(y_{s\u2032}).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Ramp Loss Objectives. Our ramp loss objectives can be formulated as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "L_{RAMP} = (1/M) \u2211_{m=1}^{M} \u03c0_w(y^-_m|x_m) \u2212 (1/M) \u2211_{m=1}^{M} \u03c0_w(y^+_m|x_m), (3)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where y \u2212 is a fear output that is to be discouraged and y + is a hope output that is to be encouraged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Table 1: Configurations for y^+ and y^- for semantic parsing. We abbreviate P(x) = {y \u2208 K(x) : \u03b4(y) = 1}, the outputs in the k-best list K(x) that lead to the correct answer, and N(x) = {y \u2208 K(x) : \u03b4(y) = 0}, the outputs in K(x) that lead to a wrong answer.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 12, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "RAMP: y^+ = arg max_{y \u2208 P(x)} \u03c0_w(y|x), y^- = arg max_{y \u2208 N(x)} \u03c0_w(y|x). RAMP1: y^+ = \u0177, y^- = arg max_{y \u2208 N(x)} \u03c0_w(y|x). RAMP2: y^+ = arg max_{y \u2208 P(x)} \u03c0_w(y|x), y^- = \u0177.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Intuitively, y \u2212 should be an output which has high probability, but receives a bad reward from the external metric. Analogously, y + should be an output which has high probability and receives a high reward from the external metric. The concrete instantiations of y \u2212 and y + depend on the underlying task and are thus deferred to the respective sections below (see Tables 1, 4, and 7). The RAMP loss defined in equation (3) has been introduced as equation (8) in Gimpel and Smith (2012). This loss naturally incorporates a bipolarity principle by including both hope and fear into one objective. An alternative formulation of ramp loss can be given by favoring the current model prediction, that is, setting y + = \u0177, and searching for a fear output. This has been called ''cost-augmented decoding'' and been formalized in equation (6) in Gimpel and Smith (2012). This loss dates back to the ''margin-rescaled hinge loss'' of Taskar et al. (2004) and will be called RAMP1 in the following. The converse approach has been called ''cost-diminished decoding'' and been formalized in equation (7) in Gimpel and Smith (2012). Here the model prediction is penalized by setting y \u2212 = \u0177 and searching for a hope output. This objective has been called ''direct loss'' in Hazan et al. (2010), and will be called RAMP2 in the following.",
|
"cite_spans": [ |
|
{ |
|
"start": 459, |
|
"end": 482, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 831, |
|
"end": 854, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 919, |
|
"end": 939, |
|
"text": "Taskar et al. (2004)", |
|
"ref_id": "BIBREF47" |
|
}, |
|
{ |
|
"start": 1086, |
|
"end": 1109, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1252, |
|
"end": 1271, |
|
"text": "Hazan et al. (2010)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Finally, we introduce a ramp loss objective that can operate on the token level. To be able to adjust individual tokens, we move to log probabilities, so that the sequence probability decomposes as a sum over individual tokens and it is possible to ignore some tokens while encouraging or discouraging others. This leads to the RAMP-T objective, where \u03c4^+_{m,j} and \u03c4^-_{m,j} are set to 0, 1, or \u22121 depending on whether the corresponding token y^+_{m,j}/y^-_{m,j} should be left untouched, encouraged, or discouraged. Concretely, we define:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "L_{RAMP-T} = (1/M) \u2211_{m=1}^{M} \u2211_{j=1}^{|y^-_m|} \u03c4^-_{m,j} log \u03c0_w(y^-_{m,j}|y_{m,<j}, x_m) \u2212 (1/M) \u2211_{m=1}^{M} \u2211_{j=1}^{|y^+_m|} \u03c4^+_{m,j} log \u03c0_w(y^+_{m,j}|y_{m,<j}, x_m), (4)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03c4^+_{m,j} = {0 if y^+_{m,j} \u2208 y^-; 1 else} (5) and \u03c4^-_{m,j} = {0 if y^-_{m,j} \u2208 y^+; \u22121 else}.",
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "With this definition, tokens that appear in both y + and y \u2212 are left untouched, whereas tokens that appear only in the hope output are encouraged, and tokens that appear only in the fear output are discouraged (see Figure 1 for an example). This more fine-grained contrast allows the model to learn what distinguishes a good output from a bad one more effectively. 4", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 216, |
|
"end": 224, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Maximum Likelihood Estimation (MLE).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Ramp Loss Objectives. In semantic parsing for question answering, natural language questions are mapped to machine readable parses. Such a parse, y, can be executed against a database that returns an answer a. This answer a can be compared to the available gold answer\u0101 and the following metric can be defined:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b4(y) = {1 if a = \u0101; 0 else}.",
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For RAMP, y + is defined as the most probable output in the k-best list K(x) that leads to the correct answer, that is, where \u03b4(y) = 1. In contrast, y \u2212 is defined as the most probable output in K(x) that does not lead to the correct answer, namely, where \u03b4(y) = 0. The definitions of y + and y \u2212 for this objective and the related ramp loss objectives can be found in Table 1 . If y + or y \u2212 are found, the parse is cached as a hope or fear output, respectively, for the corresponding input x. If at a later point y + or y \u2212 cannot be found in the current k-best list, then previously cached outputs are accessed instead. Should no cached output exist, the corresponding sample is skipped.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 376, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Experimental Setup. Our experiments are conducted on the NLMAPS V2 corpus (Lawrence and Riezler, 2018) , which is a publicly available corpus 5 for geographical questions that can be answered with the OPENSTREETMAP database. 6 The corpus is a recent extension of its predecessor (Haas and Riezler, 2016) , which has been used in Ko\u010disk\u00fd et al. (2016) or Duong et al. (2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 102, |
|
"text": "(Lawrence and Riezler, 2018)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 226, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 303, |
|
"text": "(Haas and Riezler, 2016)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 350, |
|
"text": "Ko\u010disk\u00fd et al. (2016)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 373, |
|
"text": "Duong et al. (2018)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each question, the corpus provides both gold parses and gold answers that can be obtained by executing the parses against the database. We take a random subset of 2,000 question-parse pairs to train an initial model \u03c0_w with the MLE objective. Following Lawrence and Riezler (2018), we take a pre-order traversal of the tree-structured parses to obtain individual tokens. A further 1,843 and 2,000 instances of the corpus are retained as the development and test sets, respectively. For the remaining 22,766 questions, we assume that no gold parses exist and only gold answers are available. With the gold answers as a guide, the initial model \u03c0_w is further improved using the metric-augmented objectives of Section 3 and the metric defined in equation (7).",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The model has 1,024 hidden units (GRUs) and word embeddings of size 1,000. The optimal learning rate was chosen in preliminary experiments on the development set and is set to 0.1. Gradients are clipped to 1.0 if they exceed a value of 1.0 and the sentence length is capped at 200. In the case of the MRT objectives, we set S = S\u2032 = 10. For the RAMP objectives, the size of the k-best list K is 10. For objectives with minibatches, the size of a minibatch is M = 80 and validation on the development set is performed after every 100 updates. For objectives where updates are performed after each seen input, the validation is run after every 8,000 updates, leading to the same number of seen inputs compared to the objectives with minibatches.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For validation and at test time, the most likely parse is obtained after a beam search with a beam of size 12. The obtained parse is executed against the database to retrieve its corresponding answer, which is compared to the available gold answer. We define recall as the percentage of correct answers in the entire set and precision as the percentage of correct answers in the set of non-empty answers. The harmonic mean of recall and precision constitutes the F1 score. The stopping point is determined by the highest F1 score on the development set after 30 validations or 30 days of run time 7 and corresponding results are reported on the test set. To measure statistical significance between models we use an approximate randomization test (Noreen, 1989 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 747, |
|
"end": 760, |
|
"text": "(Noreen, 1989", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
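The evaluation protocol described above can be sketched in a few lines. `answer_f1` is a hypothetical helper (not the paper's code), with an empty string standing in for a parse that returns no answer:

```python
def answer_f1(predicted_answers, gold_answers):
    """Answer-level precision, recall, and F1 as described above:
    recall = correct answers / all instances,
    precision = correct answers / instances with a non-empty answer,
    F1 = harmonic mean of precision and recall.
    An empty string represents a parse that returned no answer."""
    assert len(predicted_answers) == len(gold_answers)
    correct = sum(1 for p, g in zip(predicted_answers, gold_answers)
                  if p == g and p != "")
    non_empty = sum(1 for p in predicted_answers if p != "")
    recall = correct / len(gold_answers) if gold_answers else 0.0
    precision = correct / non_empty if non_empty else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1
```

Note that precision is computed only over non-empty answers, so a model that abstains often can have high precision but low recall; the F1 score balances the two.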
|
{ |
|
"text": "Experimental Results. Results using the various ramp loss objectives as well as MRT are shown in Table 2 . MRT outperforms the MLE baseline by about 6 percentage points in F1 score. RAMP1 performs worse than MRT, but can still significantly outperform the baseline by 3.05 points in F1 score. RAMP2 performs better than RAMP1, but outperforms MRT only nominally.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 104, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In contrast to this, by carefully selecting both a hope and fear parse, RAMP achieves a significant further 5.43 points in F1 score over MRT. By incorporating token-level feedback, our novel objective RAMP-T outperforms all other models significantly and beats the baseline by over 12 points in F1 score. Compared with RAMP, RAMP-T can take advantage of the token-level feedback that allows a model to determine which tokens in the hope output are instrumental to obtain a positive reward but are missing in the fear output. Analogously, it is possible to identify which tokens in the fear output lead to an incorrect parse, rather than also punishing the tokens in the fear output which are actually correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
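The token-level credit assignment described above can be illustrated with a small sketch: tokens of the hope output that are missing from the fear output are promoted, tokens of the fear output that are absent from the hope output are demoted, and shared tokens are left untouched. This illustrates the idea only; the actual RAMP-T objective is defined over token log-probabilities, and the helper name is ours:

```python
def token_polarity(hope_tokens, fear_tokens):
    """Assign +1 to hope tokens absent from the fear output, -1 to fear
    tokens absent from the hope output, and 0 to tokens shared by both.
    Illustrative sketch of token-level credit assignment, not the
    paper's exact RAMP-T loss."""
    hope_set, fear_set = set(hope_tokens), set(fear_tokens)
    hope_signs = [+1 if t not in fear_set else 0 for t in hope_tokens]
    fear_signs = [-1 if t not in hope_set else 0 for t in fear_tokens]
    return hope_signs, fear_signs
```

Tokens with sign 0 receive no gradient signal, so correct material shared by hope and fear is neither rewarded nor punished twice.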
|
{ |
|
"text": "MRT is not naturally a bipolar objective. It can only discourage wrong parses if the baseline is larger than 0. Investigating the value of the baseline for 10,000 instances shows that in 37% of the cases the baseline is 0 (i.e., none of the sampled parses leads to the correct answer). As a result, 37% of the time, wrong parses are ignored rather than discouraged. To explore the importance of always discouraging wrong parses, we introduce the objective MRT NEG: it modifies the feedback for parses with a wrong answer to be \u22121 rather than 0, which resembles the fear output that is discouraged in the RAMP objective. With this change, the MRT objective always behaves in a bipolar manner, irrespective of the baseline's value. As a consequence, MRT NEG can significantly outperform MRT by 2.33 points in F1 score (see Table 3 ). This showcases the importance of utilizing bipolar supervision and it constitutes an important finding compared to previous approaches Misra et al., 2018) , where the feedback is defined to lie in the range of [0, 1]. However, MRT NEG still falls short of RAMP by 3.1 points in F1 score. This could be because of the different batch sizes, as MRT uses a batch size of 1, whereas RAMP employs a batch size of 80. To ensure that the difference between the objectives does not stem from this difference, we run an experiment with RAMP where the batch size is also set to 1 (i.e., RAMP M=1). Crucially, it still significantly outperforms MRT. At the same time, it does, however, have a lower F1 score than RAMP (see Table 3 ). This showcases the importance of using a larger minibatch size, so that an average over several inputs is computed before updating. In fact, its F1 score is on par with the MRT NEG objective, which uses the same minibatch size and incorporates bipolar supervision just as RAMP does. 
However, RAMP M=1 should still be preferred because the RAMP Table 3 : Answer F1 scores on the NLMAPS V2 test set for RAMP and the MRT objective as well as two further objectives, which help crystallize the difference between the two former objectives, averaged over two independent runs. M is the minibatch size. All models are statistically significant from each other at p < 0.01, except the pair (3, 4).", |
|
"cite_spans": [ |
|
{ |
|
"start": 967, |
|
"end": 986, |
|
"text": "Misra et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 821, |
|
"end": 828, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1544, |
|
"end": 1551, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1899, |
|
"end": 1906, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "objectives are more efficient than MRT objectives.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the case of MRT, for every training instance S + S = 20 queries need to be executed against the database to obtain an answer and corresponding reward. On the other hand, RAMP has to execute at most the 10 queries of the k-best list K, but often less if both a correct and an incorrect query are found earlier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
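The contrast between MRT and MRT NEG can be made concrete with a minimal sketch of expected-reward sample weighting over renormalized sample probabilities. This omits the baseline term of the full MRT objective, and the function name is ours, not the paper's:

```python
import math

def mrt_sample_weights(log_probs, rewards, neg=False):
    """Gradient weights q(y) * r(y) for an expected-reward sketch of MRT,
    where q renormalizes model probabilities over the sampled parses.

    rewards: 1 for parses executing to the correct answer, 0 otherwise.
    With neg=True (the MRT NEG variant described above), wrong parses
    receive reward -1 instead of 0, so their weight becomes negative and
    they are actively discouraged even when no sampled parse is correct.
    """
    if neg:
        rewards = [r if r > 0 else -1.0 for r in rewards]
    m = max(log_probs)
    unnorm = [math.exp(lp - m) for lp in log_probs]  # stable softmax
    z = sum(unnorm)
    return [u / z * r for u, r in zip(unnorm, rewards)]
```

With rewards in {0, 1} and no correct parse in the sample, every weight is zero and the wrong parses are ignored; mapping their reward to −1 instead yields negative weights, i.e., bipolar behavior.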
|
{ |
|
"text": "To summarize, RAMP can attribute its success to two factors: First, it discourages parses that receive a wrong answer rather than ignoring them as MRT often does. Second, a larger minibatch size leads to improvements because updates are based on an average over several inputs. Further performance gains can be obtained by using the token-level objective RAMP-T. Finally, RAMP objectives are more efficient because fewer outputs have to be judged.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Parsing", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Ramp Loss Objectives. We consider machine translation (MT) in a weakly supervised domain adaptation setting, where in-domain references are unavailable. In this setting, we obtain weak feedback by matching translation model outputs against cross-lingually linked documents. For each input sentence x, we can obtain a set of relevant documents D + (x) \u2208 D where D is a collection of target language documents. Crosslingual link structures can be found in many multilingual document collections, such as crosslingual citations in patent documents or product categories in e-commerce data. Our example is links between Wikipedia documents. Instead of a reference translation, we use a relevant document d + sampled from D + (x) to guide our search for y + and y \u2212 . As a relevant document provides much weaker supervision than a reference translation, we construct a more informative supervision signal by integrating negative supervision from an irrelevant document d \u2212 sampled from a collection of irrelevant contrast documents. For each input x, the bipolar supervision signal then consists of a pair of sampled documents", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(d + , d \u2212 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Unlike semantic parsing for question answering, our task uses a continuous reward \u03b4(y) \u2208 [0, 1]. In fully supervised MT a sentence-level approximation of the BLEU score can serve as the reward. But computing the BLEU score between a translation and a document does not make sense. We therefore propose two different alternative metrics. The first, \u03b4 1 (y, d), computes how well a translation matches a relevant document. The second, \u03b4 2 (y, d + , d \u2212 ) computes how well a translation differentiates between a relevant and an irrelevant document. \u03b4 1 (y, d) is defined as the average n-gram precision between a hypothesis and a document, multiplied by a brevity penalty. As we do not have a reference length, we include a brevity penalty term that compares the output length to the input length. This ratio can be modified by a factor r that represents the average length difference between source and target language and which can be computed over the training data:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b4 1 (y, d) = 1 N N n=1 u n c(u n , y) \u2022 1 1 u n \u2208d u n c(u n , y) \u2022 BP ,", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where u n are the n-grams present in y, c() counts the occurrences of an n-gram in y, and N is the maximum order of n-grams used. The brevity penalty term is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "BP = min 1, r \u2022 |y| |x| . \u03b4 2 (y, d + , d \u2212 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "is defined as the difference between \u03b4 1 (y, d + ) and \u03b4 1 (y, d \u2212 ), subject to a linear transformation to allow values to lie between 0 and 1:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "\u03b4 2 (y, d + , d \u2212 ) = 0.5 \u2022 (\u03b4 1 (y, d + ) \u2212 \u03b4 1 (y, d \u2212 ) + 1) .", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
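Under the definitions in Equations (8) and (9), the two metrics can be sketched as follows. This is a simplified illustration, not the paper's code: `delta1` averages only over n-gram orders that actually occur in y, a simplification of the fixed 1/N factor:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def delta1(y, d, x_len, r=1.0, N=4):
    """Sketch of Equation (8): average n-gram precision of translation y
    against document d (a token list), multiplied by the length penalty
    BP = min(1, r * |y| / |x|)."""
    precisions = []
    for n in range(1, N + 1):
        counts = Counter(ngrams(y, n))
        total = sum(counts.values())
        if total == 0:
            continue
        doc_ngrams = set(ngrams(d, n))
        matched = sum(c for u, c in counts.items() if u in doc_ngrams)
        precisions.append(matched / total)
    if not precisions:
        return 0.0
    bp = min(1.0, r * len(y) / x_len)
    return sum(precisions) / len(precisions) * bp

def delta2(y, d_pos, d_neg, x_len, r=1.0, N=4):
    """Equation (9): difference of delta1 against the relevant and the
    irrelevant document, linearly rescaled into [0, 1]."""
    return 0.5 * (delta1(y, d_pos, x_len, r, N)
                  - delta1(y, d_neg, x_len, r, N) + 1.0)
```

A translation that matches d+ perfectly and d− not at all receives delta2 = 1; one matching both equally well receives 0.5, so domain-agnostic translations are weighted lower.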
|
{ |
|
"text": "Our intuition behind this metric is that it should measure how well a translation differentiates between the relevant and irrelevant document, leading to domain-specific translations being weighted higher than domain-agnostic ones. Table 4 shows our loss functions for the weakly supervised case. RAMP and RAMP2 define y + and y \u2212 in the same way as is done in the semantic parsing task, except that the metric \u03b4 1 (y, d + ) is used to match outputs against documents. Like Gimpel and Smith (2012) , we include a scaling factor \u03b1 to trade off the importance of the reward against the model score in determining y + and y \u2212 . Note that these objectives do not include negative supervision from d \u2212 . Using the metrics defined above, we formulate two objectives that include d \u2212 : RAMP \u2212 defines y + in the same way as RAMP, but uses a different definition of y \u2212 : Instead of using a fear output with respect to d + (i.e., a translation with high probability and low reward \u03b4 1 (y, d + )), we use a hope output with respect to d \u2212 (i.e., a translation with high probability and high reward \u03b4 1 (y, d \u2212 )). As this translation matches an irrelevant document well, it can be used as a negative output. The same definition of y \u2212 is also used in RAMP1 \u2212 . Note that this objective does not include positive supervision from d + . Finally, RAMP \u03b4 2 incorporates d + and d \u2212 in a different way. This objective defines y + as a hope and y \u2212 as a fear, but uses the joined metric \u03b4 2 (y, d + , d \u2212 ) with respect to the document pair (", |
|
"cite_spans": [ |
|
{ |
|
"start": 474, |
|
"end": 497, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 239, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "d + , d \u2212 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
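The hope/fear selection used by the ramp losses can be sketched as a search over the k-best list. This is an illustrative helper, not the paper's code; it uses log-probabilities as the model score, a slight simplification of \u03c0 w (y|x) in Table 4:

```python
def select_hope_fear(kbest, reward, alpha=10.0):
    """Select y+ (hope) and y- (fear) from a k-best list, as in the RAMP
    row of Table 4, trading the model score against the reward with
    weight alpha. kbest holds (tokens, score) pairs; reward maps a
    candidate's tokens to a value in [0, 1], e.g., delta1 against d+."""
    hope = max(kbest, key=lambda c: c[1] - alpha * (1.0 - reward(c[0])))
    fear = max(kbest, key=lambda c: c[1] + alpha * (1.0 - reward(c[0])))
    return hope[0], fear[0]
```

The RAMP\u2212 variant would call this twice with different rewards: the hope search with delta1 against d+, and the fear search as a *hope* with respect to d\u2212 (i.e., maximizing score minus the cost under d\u2212).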
|
{ |
|
"text": "Experimental Setup. We test our objectives on a weakly supervised English-German Wikipedia translation task first proposed in Jehl and Riezler (2016) . In-domain training data are 10,000 English sentences with relevant German documents sampled from the WikiCLIR corpus (Schamoni et al., 2014). 8 The task includes a small in-domain development and test set (dev: 1,712 sentences, test: 1,526 sentences), each consisting of four Wikipedia articles on diverse subjects. Irrelevant documents d \u2212 are sampled from the German side of the News Commentary 9 data set, which contains document boundary information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 149, |
|
"text": "Jehl and Riezler (2016)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Byte-pair encoding (Sennrich et al., 2016) with 30,000 merge operations is applied to all source and target data. Sentences longer than 80 words Loss (Koehn, 2005) , News Commentary v10, and the MultiUN v1 corpus (Eisele and Chen, 2010) . The baseline (MLE) is trained using the MLE objective and ADADELTA (Zeiler, 2012) for 20 epochs. We train on batches of 64 and use dropout for regularization, with a dropout rate of 0.2 for embedding and hidden layers and 0.1 for source and target layers. Gradients are clipped if their norm exceeds 1.0. The metric-augmented objectives are trained using SGD. All hyperparameters are chosen on the development set. For the ramp loss objectives, we use a learning rate of 0.005, \u03b1 = 10, and a k-best size of 16. We compare ramp loss to MRT using both \u03b4 1 (y, d + ) and \u03b4 2 (y, d + , d \u2212 ) as the external cost function, denoted as MRT \u03b4 1 and MRT \u03b4 2 , respectively. MRT is trained using a learning rate of 0.05, S = 16, and S = 10. For testing and validation, translations are obtained using beam search with a beam size of 16. Results are validated every 200 updates and training is run for 25 validations. The stopping point is determined by the BLEU score (Papineni et al., 2001 ) on the development set. We report scores computed with Moses' 10 multi-bleu.perl on tokenized, truecased output. Results are averaged over 2 runs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 42, |
|
"text": "(Sennrich et al., 2016)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 150, |
|
"end": 163, |
|
"text": "(Koehn, 2005)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 236, |
|
"text": "(Eisele and Chen, 2010)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1198, |
|
"end": 1220, |
|
"text": "(Papineni et al., 2001", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "y + y \u2212 RAMP arg max y \u03c0 w (y|x) \u2212 \u03b1(1 \u2212 \u03b4 1 (y, d + )) arg max y \u03c0 w (y|x) + \u03b1(1 \u2212 \u03b4 1 (y, d + )) RAMP \u2212 arg max y \u03c0 w (y|x) \u2212 \u03b1(1 \u2212 \u03b4 1 (y, d + )) arg max y \u03c0 w (y|x) \u2212 \u03b1(1 \u2212 \u03b4 1 (y, d \u2212 )) RAMP1 \u2212\u0177 arg max y \u03c0 w (y|x) \u2212 \u03b1(1 \u2212 \u03b4 1 (y, d \u2212 )) RAMP2 arg max y \u03c0 w (y|x) \u2212 \u03b1(1 \u2212 \u03b4 1 (y, d + ))\u0177 RAMP \u03b4 2 arg max y \u03c0 w (y|x) \u2212 \u03b1(1 \u2212 \u03b4 2 (y, d + , d \u2212 )) arg max y \u03c0 w (y|x) + \u03b1(1 \u2212 \u03b4 2 (y, d + , d \u2212 ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
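Given a hope output y+ and a fear output y\u2212 selected as in the configurations above, a common neural adaptation of the bipolar ramp loss promotes y+ and demotes y\u2212. This is a sketch under the assumption that the loss is the difference of sequence log-probabilities; the paper's exact objective (its Section 3) is not reproduced in this excerpt:

```python
def sequence_ramp_loss(hope_token_logps, fear_token_logps):
    """Bipolar sequence-level ramp loss sketch:
    loss = log p(y-|x) - log p(y+|x),
    with each sequence log-probability the sum of its token log-probs.
    Minimizing it raises the hope's probability and lowers the fear's."""
    return sum(fear_token_logps) - sum(hope_token_logps)
```

When hope and fear coincide, the loss is zero and no update is made, which is the usual behavior of ramp losses once the model already prefers the hope output.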
|
{ |
|
"text": "Experimental Results. Results for the different objectives can be found in (y, d + ) . Compared with the RAMP objectives, the decrease for MRT \u03b4 1 is smaller. On the other hand, MRT \u03b4 2 , which incorporates bipolar supervision, produces a nominal improvement over the MLE baseline. This objective is outperformed by RAMP \u2212 and RAMP \u03b4 2 . Both objectives produce a small, but significant, improvement of 0.3% BLEU over the MLE baseline. This result shows that bipolar supervision is crucial for success in this weak supervision scenario. It also shows that unlike MRT, for the bipolar ramp loss it does not matter whether \u03b4 1 or \u03b4 2 is used, as they both capture the same idea. The superiority of these objectives over MRT shows again the success of intelligently selecting positive and negative outputs. Another small, but significant, improvement is produced by the token-level variant RAMP \u2212 -T, leading to the best overall result.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 84, |
|
"text": "(y, d + )", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To summarize, we find that for this task, which uses very weak supervision from document-level links, small improvements can be obtained. To achieve these improvements, it is imperative to use objectives that include bipolar supervision from d + and d \u2212 . This finding holds for both ramp loss and MRT. The best overall result is obtained using ramp loss in the token-level variant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Analysis of Translation Results. As the improvements in the translation experiments are very small, we conduct a small-scale analysis to better determine the nature of the gains. Our analysis is inspired by Bentivogli et al. (2016) . We compare the weakly supervised MLE baseline to the best experiment in this setting, which uses the bipolar token-level ramp loss RAMP \u2212 -T.", |
|
"cite_spans": [ |
|
{ |
|
"start": 207, |
|
"end": 231, |
|
"text": "Bentivogli et al. (2016)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We first analyze the performance by sentence length. We separate the translations into source length brackets and score each bracket separately. The brackets represent quartiles of the source length distribution, ensuring an approximately equal amount of sentences in each bracket. Results are shown in Figure 2 . For all systems, we observe a drop in performance up to an input length of 33. Surprisingly, BLEU scores increase again for the top bracket (source length > 33). For this bracket, we also see the biggest gap between MLE and RAMP \u2212 -T of 0.52 and 0.67% BLEU for the two runs. This increase is mitigated by much weaker increases in the bottom brackets. A possible explanation for the weaker performance of MLE in the top bracket is the observation that hypotheses produced by the MLE system are longer than for RAMP \u2212 -T. For the top bracket, hypothesis lengths exceed reference lengths for all systems. However, for MLE this over-generation is more severe at 106% of the reference length, compared to RAMP \u2212 -T at 102%, potentially causing a higher loss in precision.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 303, |
|
"end": 311, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As our test set consists of parallel sentences extracted from four Wikipedia articles, we can examine the performance for each article separately. Figure 3 shows the results. We observe large differences in performance according to article ID. These are probably caused by some articles being more similar to the out-of-domain training data than others. Comparing RAMP \u2212 -T and MLE, we see that RAMP \u2212 -T outperforms MLE for each article by a small margin. Figure 4 shows the size of the improvements by article. We observe that margins are bigger on articles with better baseline performance. This suggests that there are challenges arising from domain mismatch that are not addressed by our method.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 155, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 457, |
|
"end": 465, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Lastly, we present an examination of example outputs. Table 6 shows an example of a long sentence from Article 2, which describes the German town of Sch\u00fcttorf. This article is originally in German, meaning that our model is back-translating from English into German. The reference contains some awkward or even ungrammatical phrases such as ''was developing itself'', a literal translation from German. The example also illustrates that translating Wikipedia involves handling frequent proper names (there are 11 proper names in the example). Both models struggle with translating proper names, but RAMP \u2212 -T produces the correct phrase ''Gathmann & Gerdemann'', while MLE fails to do so. The RAMP \u2212 -T translation is also fully grammatical, whereas MLE incorrectly translates the main verb phrase ''was developing itself'' into a relative clause, and contains an agreement error in the translation of the noun phrase ''one of the original textile companies''. Although making fewer errors in grammar and proper name translation, RAMP \u2212 -T contains two deletion errors and MLE only contains one. This could be caused by the active optimization of sentence length in the ramp loss model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 54, |
|
"end": 61, |
|
"text": "Table 6", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Weakly Supervised Machine Translation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our work focuses on weakly supervised tasks, but we also conduct experiments using a fully supervised MT task. These experiments are motivated on the one hand by adapting the findings of Gimpel and Smith (2012) to the neural MT paradigm, and on the other hand by expanding the work by Edunov et al. (2018) on applying classical structured prediction losses to neural MT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 210, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 305, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Ramp Loss Objectives. For fully supervised MT we assume access to one or more reference translations\u0233 for each input x. The reward BLEU +1 (y,\u0233) is a per-sentence approximation of the BLEU score. 11 Table 7 shows the different definitions of y + and y \u2212 , which give rise to different ramp losses. RAMP, RAMP1, and RAMP2 are defined analogously to the other tasks. We again include a hyperparameter \u03b1 > 0 interpolating cost function and model score when searching for y + and y \u2212 . Gimpel and Smith (2012) also include the perceptron loss in their analysis. PERC1 is a re-formulation of the Collins perceptron (Collins, 2002) where the reference is used as y + and\u0177 is used as y \u2212 . A comparison with PERC1 is not possible for the weakly supervised tasks in the previous sections, as gold structures are not available for these tasks. With neural MT and subword methods we are able to compute this loss for any reference without running into the problem of reachability that was faced by phrase-based MT (Liang et al., 2006) . However,", |
|
"cite_spans": [ |
|
{ |
|
"start": 482, |
|
"end": 505, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 610, |
|
"end": 625, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1004, |
|
"end": 1024, |
|
"text": "(Liang et al., 2006)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 206, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Loss y + y \u2212 RAMP arg max y \u03c0 w (y|x) \u2212 \u03b1(1 \u2212 BLEU +1 (y,\u0233)) arg max y \u03c0 w (y|x) + \u03b1(1 \u2212 BLEU +1 (y,\u0233)) RAMP1\u0177 arg max y \u03c0 w (y|x) + \u03b1(1 \u2212 BLEU +1 (y,\u0233)) RAMP2 arg max y \u03c0 w (y|x) \u2212 \u03b1(1 \u2212 BLEU +1 (y,\u0233))\u0177 PERC1\u0233\u0177 PERC2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "arg max y BLEU +1 (y,\u0233)\u0177 Table 7 : Configurations for y + and y \u2212 for fully supervised MT.\u0177 is the highest-probability model output,\u0233 is a gold standard reference. \u03c0 w (y|x) is the probability of y according to the model. The arg max y is taken over the k-best list K(x). BLEU +1 is smoothed per-sentence BLEU and \u03b1 is a scaling factor.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 32, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
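A sketch of a smoothed sentence-level BLEU+1 reward as assumed in Table 7, with add-1 smoothing of clipped n-gram counts for orders n > 1; the paper's exact smoothing may differ:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Counts of all contiguous n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n])
                   for i in range(len(tokens) - n + 1))

def bleu_plus1(hyp, ref, N=4):
    """Smoothed per-sentence BLEU sketch: clipped n-gram precisions with
    add-1 smoothing for n > 1, combined as a geometric mean and
    multiplied by the standard brevity penalty."""
    log_prec = 0.0
    for n in range(1, N + 1):
        h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
        match = sum(min(c, r[g]) for g, c in h.items())
        total = sum(h.values())
        if n > 1:  # add-1 smoothing keeps higher orders non-zero
            match, total = match + 1, total + 1
        if match == 0 or total == 0:  # can only happen for n = 1
            return 0.0
        log_prec += math.log(match / total)
    bp = min(1.0, math.exp(1.0 - len(ref) / len(hyp)))
    return bp * math.exp(log_prec / N)
```

A surrogate y+ for PERC2 would then be the candidate in the k-best list maximizing `bleu_plus1` against the reference.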
|
{ |
|
"text": "using sequence-level training towards a reference can lead to degenerate solutions where the model gives low probability to all its predictions (Shen et al., 2016) . PERC2 addresses this problem by replacing\u0233 by a surrogate translation that achieves the highest BLEU +1 score in K(x). This approach is also used by Edunov et al. (2018) for the loss functions which require an oracle. PERC1 corresponds to equation (9), PERC2 to equation (10) of Gimpel and Smith (2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 163, |
|
"text": "(Shen et al., 2016)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 315, |
|
"end": 335, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 468, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Experimental Setup. We conduct experiments on the IWSLT 2014 German-English task, which is based on Cettolo et al. (2012) in the same way as Edunov et al. (2018) . The training set contains 160K sentence pairs. We set the maximum sentence length to 50 and use BPE with 14,000 merge operations. Edunov et al. (2018) sample 7K sentences from the training set as heldout data. We do the same, but only use one tenth of the data as heldout set to be able to validate often. Our baseline system (MLE) is a BiLSTM encoder-decoder with attention, which is trained using the MLE objective. Word embedding and hidden layer dimensions are set to 256. We use batches of 64 sentences for baseline training and batches of 40 inputs for training RAMP and PERC variants. MRT makes an update after each input using all sampled outputs and resulting in a batch size of 1. All experiments use dropout for regularization, with dropout probability set to 0.2 for embedding and hidden layers and to 0.1 for source and target layers. During MLE-training, the model is validated every 2500 updates and training is stopped if the MLE loss on the heldout set worsens for 10 consecutive validations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 121, |
|
"text": "Cettolo et al. (2012)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 161, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 314, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "For metric-augmented training, we use SGD for optimization with learning rates optimized on the development set. Ramp losses and PERC2 use a k-best list of size 16. For ramp loss training, we set \u03b1 = 10. RAMP and PERC variants both use a learning rate of 0.001. A new k-best list is generated for each input using the current model parameters. We compare ramp loss to MRT as described above. For MRT, we use SGD with a learning rate of 0.01 and set S = 16 and S = 10. As Edunov et al. (2018) observe beam search to work better than sampling for MRT, we also run an experiment in this configuration, but find no difference between results. As beam search runs significantly slower, we only report sampling experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 471, |
|
"end": 491, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The model is validated on the development set after every 200 updates for experiments with batch size 40 and after 8,000 updates for MRT experiments with batch size 1. The stopping point is determined by the BLEU score on the heldout set after 25 validations. As we are training on the same data as the MLE baseline, we also apply dropout during ramp loss training to prevent overfitting. BLEU scores are computed with Moses' multi-bleu.perl on tokenized, truecased output. Each experiment is run 3 times and results are averaged over the runs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Experimental Results. As shown in Table 8 , all experiments except for PERC1 yield improvements over MLE, confirming that sequencelevel losses that update towards the reference can lead to degenerate solutions. For MRT, our findings show similar performance to the initial experiments reported by Edunov et al. (2018) , who gain 0.24 BLEU points on the same test set. 12 PERC2 and RAMP2, improve over the 12 See their Table 2. Using interpolation with the MLE objective, Edunov et al. (2018) achieve +0.7 BLEU points. As we are only interested in the effect of sequence-level objectives, we do not add MLE interpolation. The best model by Edunov et al. (2018) achieved a BLEU score of 32.91%. It is possible that these scores are not directly comparable to ours due to different pre-and post-processing. They also use a multi-layer CNN architecture (Gehring et al., 2017) , which has been shown to outperform a simple RNN architecture such as ours. The main difference between RAMP and RAMP1, compared to PERC2 and RAMP2, is the fact that the latter objectives use\u0177 as y \u2212 , whereas the former use a fear translation with high probability and low BLEU +1 . We surmise that for this fully supervised task, selecting a y \u2212 which has some known negative characteristics is more important for success than finding a good y + . RAMP, which fulfills both criteria, still outperforms RAMP2. This result re-confirms the superiority of bipolar objectives compared to nonbipolar ones. Although still improving over MLE, token-level ramp loss RAMP-T is outperformed by RAMP by a small margin. This result suggests that when using a metric-augmented objective on top of an MLE-trained model in a full supervision scenario without domain shift, there is little room for improvement from token-level supervision, while gains can still be obtained from additional sequence-level information captured by the external metric, such as information about the sequence length. 
To summarize, our findings on a fully supervised task show the same small margin for improvement as Edunov et al. (2018) , without any further tuning of performance (e.g., by interpolation with the MLE objective). Bipolar RAMP is found to outperform the other losses. This observation is also consistent with the results by Gimpel and Smith (2012) for phrase-based MT. We conclude that for fully supervised MT, deliberately selecting a hope and fear translation is beneficial.", |
|
"cite_spans": [ |
|
{ |
|
"start": 297, |
|
"end": 317, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 491, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 639, |
|
"end": 659, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 849, |
|
"end": 871, |
|
"text": "(Gehring et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 2056, |
|
"end": 2076, |
|
"text": "Edunov et al. (2018)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 2280, |
|
"end": 2303, |
|
"text": "Gimpel and Smith (2012)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 41, |
|
"text": "Table 8", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Fully Supervised Machine Translation", |
|
"sec_num": "6" |
|
}, |
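The hope/fear distinction discussed above can be made concrete with a small sketch: each sampled candidate is scored by combining its model log-probability with its metric score (e.g., BLEU+1), and the hope is the candidate with high probability and high metric, the fear the one with high probability and low metric. The function name and the unweighted sum are illustrative assumptions, not the paper's exact implementation:

```python
def select_hope_fear(candidates):
    """Select hope (y+) and fear (y-) indices from scored candidates.

    candidates: list of (log_prob, metric_score) pairs, one per sampled
    output. The hope maximizes log_prob + metric (probable AND good);
    the fear maximizes log_prob - metric (probable AND bad), as required
    for a bipolar ramp loss update.
    """
    hope = max(range(len(candidates)),
               key=lambda i: candidates[i][0] + candidates[i][1])
    fear = max(range(len(candidates)),
               key=lambda i: candidates[i][0] - candidates[i][1])
    return hope, fear
```

The ramp loss update then increases the probability of the hope and decreases that of the fear, which is what distinguishes the bipolar RAMP objective from RAMP1 (fear only) and RAMP2 (hope only).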
|
{ |
|
"text": "We presented a study of weakly supervised learning objectives for three neural sequence-tosequence learning tasks. In our first task of semantic parsing, question-answer pairs provide a weak supervision signal to find parses that execute to the correct answer. We show that ramp loss can outperform MRT if it incorporates bipolar supervision where parses that receive negative feedback are actively discouraged. The best overall objective is constituted by the token-level ramp loss. Next, we turn to weak supervision for machine translation in form of cross-lingual document-level links. We present two ramp loss objectives that combine bipolar weak supervision from a linked document d + and an irrelevant document d \u2212 . Again, the bipolar ramp loss objectives outperform MRT, and the best overall result is obtained using tokenlevel ramp loss. Finally, to tie our work to previous work on supervised machine translation, we conduct experiments in a fully supervised scenario where gold references are available and a metricaugmented loss is desired to reduce the exposure bias and the loss-evaluation mismatch. Again, the bipolar ramp loss objective performs best, but we find that the overall margin for improvement is small without any additional engineering. We conclude that ramp loss objectives show promise for neural sequence-to-sequence learning, especially when it comes to weakly supervised tasks where the MLE objective cannot be applied. In contrast to ramp losses that either operate only in the undesirable region of the search space (''cost-augmented decoding'' as in RAMP1) or only in the desirable region of the search space (''cost-diminished decoding'' as in RAMP2), bipolar RAMP operates in both regions of the search space when extracting supervision signals from weak feedback. We showed that MRT can be turned into a bipolar objective by defining a metric that assigns negative values to bad outputs. This improves the performance of MRT objectives. 
However, the ramp loss objective is still superior as it is easy to implement and efficient to compute. Furthermore, on weakly supervised tasks our novel token-level ramp loss objective RAMP-T can obtain further improvements over its sequence-level counterpart because it can more directly assess which tokens in a sequence are crucial to its success or failure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Note that refer to their objective as an instantiation of REINFORCE, however they build an average over several outputs for one input and thus the objective more accurately falls under the heading of MRT.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We do not use REINFORCE because its updates are based on only one sampled model output, which can lead to high variance. Because it is possible for us to obtain feedback for more than one model output, we employ the more robust MRT that calculates an average over several outputs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We follow the implementation of MRT in NEMATUS with its default settings, including de-duplication of samples and setting the temperature parameter to \u03b1 = 0.005. In case of fully supervised MT where the question arises whether to include the reference in the sample, we choose not to include it in order to be comparable withEdunov et al. (2018) who also do not include it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An implementation of the RAMP objectives can be found at https://github.com/carhaas/nematus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.cl.uni-heidelberg.de/statnlp group/nlmaps/.6 https://www.openstreetmap.org.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The 30-day mark was only hit by RAMP2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "WikiCLIR annotates both a stronger mate relation when there is a direct cross-lingual link between documents and a weaker link relation when a there is a bidirectional link between a German mate document and another German document. The experiments reported here use the mate relation. 9 http://casmacat.eu/corpus/news-commentary.html.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use the BLEU score with add-1 smoothing for n > 1, as proposed byChen and Cherry (2014).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
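Add-1 smoothing replaces the raw n-gram precision with (matches + 1) / (total + 1) for n > 1, so a hypothesis without higher-order n-gram matches still receives a non-zero sentence-level score. A minimal sketch under that definition (not the exact implementation used in the experiments):

```python
import math
from collections import Counter

def sentence_bleu_add1(hypothesis, reference, max_n=4):
    """Sentence-level BLEU with add-1 smoothing for n > 1.

    hypothesis, reference: token lists. Returns a score in [0, 1].
    """
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hypothesis[i:i + n])
                             for i in range(len(hypothesis) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        matches = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = sum(hyp_ngrams.values())
        if n == 1:
            if total == 0 or matches == 0:
                return 0.0  # unigram precision is not smoothed
            log_precisions.append(math.log(matches / total))
        else:
            # add-1 smoothing for higher-order n-grams
            log_precisions.append(math.log((matches + 1) / (total + 1)))
    # brevity penalty against the reference length
    bp = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return bp * math.exp(sum(log_precisions) / max_n)
```

The smoothing matters here because per-sentence feedback drives the hope/fear selection, and unsmoothed BLEU collapses to zero whenever any n-gram order has no match.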
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The research reported in this paper was supported in part by DFG grants RI-2221/4-1 and RI 2221/ 2-1. We would like to thank the reviewers for their helpful comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Neural machine translation by jointly learning to align and translate", |
|
"authors": [ |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "International Conference on Learning Representations (ICLR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Inter- national Conference on Learning Representa- tions (ICLR). San Diego, CA.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Neural versus phrase-based machine translation quality: A case study", |
|
"authors": [ |
|
{ |
|
"first": "Luisa", |
|
"middle": [], |
|
"last": "Bentivogli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arianna", |
|
"middle": [], |
|
"last": "Bisazza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mauro", |
|
"middle": [], |
|
"last": "Cettolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luisa Bentivogli, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus phrase-based machine translation quality: A case study. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing (EMNLP). Austin, TX.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Semantic parsing on freebase from question-answer pairs", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Chou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Frostig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Pro- ceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP). Seattle, WA.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "WIT 3 : Web Inventory of Transcribed and Translated Talks", |
|
"authors": [ |
|
{ |
|
"first": "Mauro", |
|
"middle": [], |
|
"last": "Cettolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Girardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT 3 : Web Inventory of Transcribed and Translated Talks. In Proceed- ings of the 16th Conference of the European Association for Machine Translation (EAMT). Trento, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Tighter bounds for structured estimation", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Chapelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chuong", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Do", |
|
"suffix": "" |
|
}, |
|
{ |
 |
"first": "Choon", |
 |
"middle": [ |
 |
"H" |
 |
], |
 |
"last": "Teo", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Quoc", |
 |
"middle": [ |
 |
"V" |
 |
], |
 |
"last": "Le", |
 |
"suffix": "" |
 |
}, |
 |
{ |
 |
"first": "Alex", |
 |
"middle": [ |
 |
"J" |
 |
], |
 |
"last": "Smola", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Chapelle, Chuong B. Do, Choon H. Teo, Quoc V. Le, and Alex J. Smola. 2009. Tighter bounds for structured estimation. In Advances in Neural Information Processing Systems (NIPS). Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A systematic comparison of smoothing techniques for sentence-level BLEU", |
|
"authors": [ |
|
{ |
|
"first": "Boxing", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Cherry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 9th Workshop on Statistical Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boxing Chen and Colin Cherry. 2014. A sys- tematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the 9th Workshop on Statistical Machine Trans- lation. Baltimore, MD.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Hope and fear for discriminative training of statistical translation models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Chiang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "13", |
|
"issue": "1", |
|
"pages": "1159--1187", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Chiang. 2012. Hope and fear for discrim- inative training of statistical translation models. Journal of Machine Learning Research, 13(1):1159-1187.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bart", |
|
"middle": [], |
|
"last": "Van Merri\u00ebnboer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dzmitry", |
|
"middle": [], |
|
"last": "Bahdanau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Trans- lation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Coarse-to-dine question answering for long documents", |
|
"authors": [ |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Hewlett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Lacoste", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-dine question answering for long documents. In Proceedings of the 55th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Better hypothesis testing for statistical machine translation: Controlling for optimizer instability", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Lavie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 2011 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL). Portland, OR.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Driving Semantic Parsing from the World's Response", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Clarke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Goldwasser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
 |
"first": "Dan", |
 |
"middle": [], |
 |
"last": "Roth", |
 |
"suffix": "" |
 |
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 14th Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving Semantic Parsing from the World's Response. In Proceedings of the 14th Conference on Computational Natural Language Learning. Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP). Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Active learning for deep semantic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Long", |
|
"middle": [], |
|
"last": "Duong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hadi", |
|
"middle": [], |
|
"last": "Afshar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dominique", |
|
"middle": [], |
|
"last": "Estival", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Glen", |
|
"middle": [], |
|
"last": "Pink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip Cohen, and Mark Johnson. 2018. Active learning for deep semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), Melbourne, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Classical structured prediction losses for sequence to sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Sergey", |
|
"middle": [], |
|
"last": "Edunov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for se- quence to sequence learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL). New Orleans, LA.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "MultiUN: A multilingual corpus from united nation documents", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Eisele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Eisele and Yu Chen. 2010. MultiUN: A multilingual corpus from united nation documents. In Proceedings of the Seventh con- ference on International Language Resources and Evaluation (LREC). Valetta, Malta.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Convolutional sequence to sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "Jonas", |
|
"middle": [], |
|
"last": "Gehring", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Auli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Denis", |
|
"middle": [], |
|
"last": "Yarats", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Dauphin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 34th International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Con- ference on Machine Learning (ICML). Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Structured ramp loss minimization for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Gimpel and Noah A. Smith. 2012. Struc- tured ramp loss minimization for machine translation. In Proceedings of the 2012 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL).", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Variance reduction techniques for gradient estimation in reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Greensmith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Bartlett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Baxter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "1471--1530", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. 2004. Variance reduction techniques for gradient estimation in reinforcement learn- ing. Journal of Machine Learning Research, 5:1471-1530.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "From language to programs: Bridging reinforcement learning and maximum marginal likelihood", |
|
"authors": [ |
|
{ |
|
"first": "Kelvin", |
|
"middle": [], |
|
"last": "Guu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Panupong", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kelvin Guu, Panupong Pasupat, Evan Liu, and Percy Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A corpus and semantic parser for multilingual natural language querying of openStreetMap", |
|
"authors": [ |
|
{ |
|
"first": "Carolin", |
|
"middle": [], |
|
"last": "Haas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carolin Haas and Stefan Riezler. 2016. A cor- pus and semantic parser for multilingual nat- ural language querying of openStreetMap. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (HLT-NAACL). San Diego, CA.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Direct loss minimization for structured prediction", |
|
"authors": [ |
|
{ |
|
"first": "Tamir", |
|
"middle": [], |
|
"last": "Hazan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Keshet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Mcallester", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tamir Hazan, Joseph Keshet, and David A. McAllester. 2010. Direct loss minimization for structured prediction. In Advances in Neu- ral Information Processing Systems (NIPS).", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Search-based neural structured learning for sequential question answering", |
|
"authors": [ |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yih", |
|
"middle": [], |
|
"last": "Wen-Tau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Learning to translate from graded and negative relevance information", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Jehl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 26th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Jehl and Stefan Riezler. 2016. Learning to translate from graded and negative relevance information. In Proceedings of the 26th International Conference on Computational Linguistics (COLING). Osaka, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Europarl: A parallel corpus for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Machine Translation Summit", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceed- ings of the Machine Translation Summit, volume 5. Phuket, Thailand.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Semantic parsing with semi-supervised sequential autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Tom\u00e1\u0161", |
|
"middle": [], |
|
"last": "Ko\u010disk\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Melis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [ |
|
"Moritz" |
|
], |
|
"last": "Hermann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tom\u00e1\u0161 Ko\u010disk\u00fd, G\u00e1bor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised se- quential autoencoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Improving a neural semantic parser by counterfactual learning from human bandit feedback", |
|
"authors": [ |
|
{ |
|
"first": "Carolin", |
|
"middle": [], |
|
"last": "Lawrence", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carolin Lawrence and Stefan Riezler. 2018. Im- proving a neural semantic parser by counter- factual learning from human bandit feedback. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL). Melbourne, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Neural symbolic machines: learning semantic parsers on freebase with weak supervision", |
|
"authors": [ |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Forbus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ni", |
|
"middle": [], |
|
"last": "Lao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen Liang, Jonathan Berant, Quoc V. Le, Kenneth D. Forbus, and Ni Lao. 2017. Neural symbolic machines: learning semantic parsers on freebase with weak supervision. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "An end-to-end discriminative approach to machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proceedings of the 21st International Con- ference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (ACL). Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Policy shaping and generalized update equations for semantic parsing from denotations", |
|
"authors": [ |
|
{ |
|
"first": "Dipendra", |
|
"middle": [], |
|
"last": "Misra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen-Tau", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dipendra Misra, Ming-Wei Chang, Xiaodong He, and Wen-tau Yih. 2018. Policy shaping and generalized update equations for semantic parsing from denotations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Coupling distributed and symbolic execution for natural language queries", |
|
"authors": [ |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Mou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhengdong", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 34th International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lili Mou, Zhengdong Lu, Hang Li, and Zhi Jin. 2017. Coupling distributed and symbolic execution for natural language queries. In Proceedings of the 34th International Con- ference on Machine Learning (ICML). Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Computer Intensive Methods for Testing Hypotheses: An Introduction", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Noreen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypotheses: An Intro- duction. Wiley, New York.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Reward augmented maximum likelihood for neural structured prediction", |
|
"authors": [ |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Norouzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Navdeep", |
|
"middle": [], |
|
"last": "Jaitly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dale", |
|
"middle": [], |
|
"last": "Schuurmans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. 2016. Reward augmented maximum likelihood for neural struc- tured prediction. In Advances in Neural Infor- mation Processing Systems (NIPS). Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "BLEU: A method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001, BLEU: A method for automatic evaluation of machine trans- lation. Technical Report IBM Research Divi- sion Technical Report, RC22176 (W0190-022), Yorktown Heights, NY.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Compositional semantic parsing on semi-structured tables", |
|
"authors": [ |
|
{ |
|
"first": "Panupong", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Panupong Pasupat and Percy Liang. 2015. Com- positional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP). Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "SQuAD: 100,000+ Questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for machine comprehen- sion of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Austin, TX.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Learning translational and knowledge-based similarities from relevance rankings for cross-language retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Shigehiko", |
|
"middle": [], |
|
"last": "Schamoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hieber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Artem", |
|
"middle": [], |
|
"last": "Sokolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In International Conference on Learning Representations (ICLR). San Juan, Puerto Rico. Shigehiko Schamoni, Felix Hieber, Artem Sokolov, and Stefan Riezler. 2014. Learning translational and knowledge-based similarities from relevance rankings for cross-language retrieval. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL). Baltimore, MD.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Nematus: A toolkit for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Hitschler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcin", |
|
"middle": [], |
|
"last": "Junczys-Dowmunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [], |
|
"last": "L\u00e4ubli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [ |
|
"Valerio" |
|
], |
|
"last": "Miceli Barone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jozef", |
|
"middle": [], |
|
"last": "Mokry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Nadejde", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexandra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L\u00e4ubli, Antonio Valerio Miceli Barone, Jozef Mokry, and Maria Nadejde. 2017. Nematus: A toolkit for neural machine translation. In Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL). Valencia, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Minimum Risk Training for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Shiqi", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongjun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shiqi Shen, Yong Cheng, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Minimum Risk Training for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Minimum risk annealing for training log-linear models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions. Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Sequence to sequence learning with neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oriol", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V." |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Infor- mation Processing Systems (NIPS). Montreal, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Learning structured prediction models: A large margin approach", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vassil", |
|
"middle": [], |
|
"last": "Chatalbashev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 22nd International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd International Con- ference on Machine Learning (ICML), Bonn, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Max-margin Markov networks", |
|
"authors": [ |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Taskar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daphne", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Advances in Neural Information Processing Systems (NIPS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ben Taskar, Carlos Guestrin, and Daphne Koller. 2004. Max-margin Markov networks. In Advances in Neural Information Processing Systems (NIPS). Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Ronald", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Williams", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Machine Learning", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "229--256", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 20:229-256.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Sequence-to-sequence learning as beam-search optimization", |
|
"authors": [ |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Wiseman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Austin, TX.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Semi-supervised QA with generative domain-adaptive nets", |
|
"authors": [ |
|
{ |
|
"first": "Zhilin", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junjie", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilin Yang, Junjie Hu, Ruslan Salakhutdinov, and William Cohen. 2017. Semi-supervised QA with generative domain-adaptive nets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL). Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "ADADELTA: An adaptive learning rate method", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [ |
|
"D." |
|
], |
|
"last": "Zeiler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. ArXiv e-prints, cs.LG/1212.5701v1.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "Settings for token-level rewards \u03c4 + and \u03c4 \u2212 for hope output y + = ''a small house'' and fear output y \u2212 = ''the house''.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "BLEU scores by sentence length for the MLE Baseline and the RAMP \u2212 -T runs.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "BLEU scores by Wikipedia article for the MLE Baseline and the RAMP \u2212 -T runs.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"text": "Improvements in BLEU scores by Wikipedia article for the RAMP \u2212 -T runs.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Answer F1 scores on the NLMAPS V2 test set for various objectives, averaged over two independent runs. M is the minibatch size. All models are statistically significant from each other at p < 0.01, except the pair (2, 4).", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>are removed from the training set. Our neural MT</td></tr><tr><td>model uses 500-dimensional word embeddings</td></tr><tr><td>and hidden layer dimension of 1,024. Encoder and</td></tr><tr><td>decoder use GRU units. An out-of-domain model</td></tr><tr><td>is trained on 2.1 million sentence pairs from</td></tr><tr><td>Europarl v7</td></tr></table>", |
|
"text": "Configurations for y + and y \u2212 for weakly supervised MT adaptation.\u0177 is the highest-probability model output. \u03c0 w (y|x) is the probability of y under the model. The arg max y is taken over the k-best list K(x). \u03b1 is a scaling factor regulating the influence of the metric compared to the model probability. \u03b4 1 and \u03b4 2 are metrics defined with respect to relevant and irrelevant documents d + and d \u2212 (see Eq. 8 and 9).", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td>M</td><td>% BLEU</td><td>\u0394</td></tr><tr><td>1</td><td>MLE</td><td>64</td><td>15.59</td><td/></tr><tr><td>2</td><td>RAMP</td><td>40</td><td>15.03 \u00b1 0.01</td><td>\u2212 0.56</td></tr><tr><td>3</td><td>RAMP1 \u2212</td><td>40</td><td>15.12 \u00b1 0.02</td><td>\u2212 0.47</td></tr><tr><td>4</td><td>RAMP2</td><td>40</td><td>15.19 \u00b1 0.01</td><td>\u2212 0.40</td></tr><tr><td>5</td><td>MRT \u03b4 1</td><td>1</td><td>15.37 \u00b1 0.04</td><td>\u2212 0.22</td></tr><tr><td>6 7</td><td>MRT \u03b4 2 RAMP \u2212</td><td>1 40</td><td>15.70 \u00b1 0.04 15.85 \u00b1 0.02</td><td>+ 0.11 + 0.26</td></tr><tr><td>8 9</td><td>RAMP \u03b4 2 RAMP \u2212 -T</td><td colspan=\"2\">40 40 16.03 * \u00b1 0.02 15.86 \u00b1 0.04</td><td>+ 0.27 + 0.44</td></tr><tr><td>10</td><td>RAMP \u03b4 2 -T</td><td>40</td><td>15.84 \u00b1 0.02</td><td>+ 0.25</td></tr></table>", |
|
"text": "The ramp losses RAMP, RAMP1 \u2212 , and RAMP2, which do not incorporate bipolar supervision from d + and d \u2212 (lines 2, 3, and 4) actually deteriorate 10 https://github.com/moses-smt/mosesdecoder.", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "BLEU scores for weakly supervised MT experiments. Boldfaced results are significantly better than the baseline at p < 0.05 according to multeval(Clark et al., 2011). * marks a significant difference over RAMP \u2212 . in performance. This shows that supervision from only d + or only d \u2212 is insufficient. The deteriorating effect is strongest for RAMP, which uses d + to select both y + and y \u2212 . We explain this by the fact that d + is an imperfect label. Trying to push the model to perfectly reproduce d + will not lead to a good translation. The same observation holds true for MRT \u03b4 1 . This objective only includes the reward \u03b4 1", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>Schaloker Junge, Levert Rost und Wilhelm Edel\u00fcbernommen.</td></tr><tr><td>Reference gegen Ende des 19. Jahrhunderts entwickelte sich in Sch\u00fcttorf eine starke Textilindustrie mit</td></tr><tr><td>mehreren gro\u00dfen lokalen Unternehmen (Schlikker & S\u00f6hne, Gathmann & Gerdemann, G.</td></tr><tr><td>Sch\u00fcmer & Co. und ten Wolde, sp\u00e4ter Carl Remy, die heutige RoFa ist keine urspr\u00fcngliche</td></tr><tr><td>Textilfirma, sondern wurde von H. Lammering gegr\u00fcndet und sp\u00e4ter von Gerhard Schlikker jun.,</td></tr><tr><td>Levert Rost und Wilhelm Edel\u00fcbernommen.)</td></tr></table>", |
|
"text": "SourceTowards the end of the 19th century, a strong textile industry was developing itself in Sch\u00fcttorf with several large local businesses(Schlikker & S\u00f6hne, Gathmann & Gerdemann, G. Sch\u00fcmer & Co. and ten Wolde, later Carl Remy; today's RoFa is not one of the original textile companies, but was founded by H. Lammering and later taken over by Gerhard Schlikker jun., Levert Rost and Wilhelm Edel; MLE Ende des 19. Jahrhunderts, eine starke Textilindustrie, die sich in Ettorf mit mehreren gro\u00dfen lokalen Unternehmen (Schlikker & S\u00f6hne, Gathmann & Ger\u00e9ann, G. Schal & Co. und zehn Wolde, sp\u00e4ter Carl Remy) entwickelt hat; die heutige RoFa ist nicht einer der urspr\u00fcnglichen Textilunternehmen, sondern wurde von H. Lammering [gegr\u00fcndet] und sp\u00e4ter von Gerhard Schaloker Junge, Levert Rost und Wilhelm Edel\u00fcbernommen. RAMP \u2212 -T Ende des 19. Jahrhunderts entwickelte sich [in Sch\u00fcttorf] eine starke Textilindustrie mit mehreren gro\u00dfen lokalen Unternehmen (Schlikker & S\u00f6hne, Gathmann & Gerdemann, G. Schal & Co. und zehn Wolde, sp\u00e4ter Carl Remy; die heutige RoFa ist nicht eines der urspr\u00fcnglichen Textilunternehmen, sondern wurde von H. Lammering [gegr\u00fcndet] und sp\u00e4ter von Gerhard", |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "MT example from Article 2 in the test set. All translation errors are underlined. Incorrect proper names are also set in cursive. Omissions are inserted in brackets and set in cursive [like this]. Improvements by RAMP \u2212 -T over MLE are marked in boldface.", |
|
"num": null |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>marks a sig-</td></tr><tr><td>nificant difference to MRT and PERC2, and * *</td></tr><tr><td>marks a difference to RAMP1.</td></tr><tr><td>MLE baseline and PERC1, but perform on a</td></tr><tr><td>par with MRT and each other. Both RAMP and</td></tr><tr><td>RAMP1 are able to outperform MRT, PERC2,</td></tr><tr><td>and RAMP2, with the bipolar objective RAMP</td></tr><tr><td>also outperforming RAMP1 by a narrow margin.</td></tr></table>", |
|
"text": "BLEU scores for fully supervised MT experiments. Boldfaced results are significantly better than MLE at p < 0.01 according to multeval(Clark et al., 2011).", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |