|
{ |
|
"paper_id": "K18-1033", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:09:52.017824Z" |
|
}, |
|
"title": "Learning to Actively Learn Neural Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Monash University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Wray", |
|
"middle": [], |
|
"last": "Buntine", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Monash University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Monash University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Traditional active learning (AL) methods for machine translation (MT) rely on heuristics. However, these heuristics are limited when the characteristics of the MT problem change due to e.g. the language pair or the amount of the initial bitext. In this paper, we present a framework to learn sentence selection strategies for neural MT. We train the AL query strategy using a high-resource language-pair based on AL simulations, and then transfer it to the lowresource language-pair of interest. The learned query strategy capitalizes on the shared characteristics between the language pairs to make an effective use of the AL budget. Our experiments on three language-pairs confirms that our method is more effective than strong heuristic-based methods in various conditions, including cold-start and warm-start as well as small and extremely small data conditions.", |
|
"pdf_parse": { |
|
"paper_id": "K18-1033", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Traditional active learning (AL) methods for machine translation (MT) rely on heuristics. However, these heuristics are limited when the characteristics of the MT problem change due to e.g. the language pair or the amount of the initial bitext. In this paper, we present a framework to learn sentence selection strategies for neural MT. We train the AL query strategy using a high-resource language-pair based on AL simulations, and then transfer it to the lowresource language-pair of interest. The learned query strategy capitalizes on the shared characteristics between the language pairs to make an effective use of the AL budget. Our experiments on three language-pairs confirms that our method is more effective than strong heuristic-based methods in various conditions, including cold-start and warm-start as well as small and extremely small data conditions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Parallel training bitext plays a key role in the quality neural machine translation (NMT). Learning high-quality NMT models in bilingually lowresource scenarios is one of the key challenges, as NMT's quality degrades severely in such setting (Koehn and Knowles, 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 242, |
|
"end": 267, |
|
"text": "(Koehn and Knowles, 2017)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recently, the importance of learning NMT models in scarce parallel bitext scenarios has gained attention. Unsupervised approaches try to learn NMT models without the need for parallel bitext (Artetxe et al., 2017; Lample et al., 2017a) . Dual learning/backtranslation tries to start off from a small amount of bilingual text, and leverage monolingual text in the source and target language (Sennrich et al., 2015a; . Zero/few shot approach attempts to transfer NMT learned from rich bilingual settings to low-resource settings (Johnson et al., 2016; Gu et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 191, |
|
"end": 213, |
|
"text": "(Artetxe et al., 2017;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 235, |
|
"text": "Lample et al., 2017a)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 414, |
|
"text": "(Sennrich et al., 2015a;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 549, |
|
"text": "(Johnson et al., 2016;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 550, |
|
"end": 566, |
|
"text": "Gu et al., 2018)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we approach this problem from the active learning (AL) perspective. Assuming the availability of an annotation budget and a pool of monolingual source text as well as a small training bitext, the goal is to select the most useful source sentences and query their translation from an oracle up to the annotation budget. The queried sentences need to be selected carefully to get the value for the budget, i.e. get the highest improvements in the translation quality of the retrained model. The AL approach is orthogonal to the aforementioned approaches to bilingually lowresource NMT, and can be potentially combined with them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We present a framework to learn the sentence selection policy most suitable and effective for the NMT task at hand. This is in contrast to the majority of work in AL-MT where hard-coded heuristics are used for query selection Bloodgood and Callison-Burch, 2010) . More concretely, we learn the query policy based on a high-resource language-pair sharing similar characteristics with the low-resource language-pair of interest. After trained, the policy is applied to the language-pair of interest capitalising on the learned signals for effective query selection. We make use of imitation learning (IL) to train the query policy. Previous work has shown that the IL approach leads to more effective policy learning ), compared to reinforcement learning (RL) (Fang et al., 2017) . Our proposed method effectively trains AL policies for batch queries needed for NMT, as opposed to the previous work on single query selection.", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 261, |
|
"text": "Bloodgood and Callison-Burch, 2010)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 758, |
|
"end": 777, |
|
"text": "(Fang et al., 2017)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We conduct experiments on three language pairs Finnish-English, German-English, and Czech-English. Simulating low resource scenarios, we consider various settings, including cold-start and warm-start as well as small and extremely small data conditions. The experiments show the effectiveness and superiority of our policy query compared to strong baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Active learning is an iterative process: Firstly, a model is built using some initially available data. Then, the most worthwhile data points are selected from the unlabelled set for annotation by the oracle. The underlying model is then re-trained using the expanded labeled data. This process is then repeated until the budget is exhausted. The main challenge is how to identify and select the most beneficial unlabelled data points during the AL iterations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to Actively Learn MT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The AL strategy can be learned by attempting to actively learn on tasks sampled from a distribution over the tasks (Bachman et al., 2017) . We simulate the AL scenario on instances of a lowresource MT problem created using the bitext of the resource-rich language pair, where the translation of some part of the bitext is kept hidden. This allows to have an automatic oracle to reveal the translations of the queried sentences, resulting in an efficient way to quickly evaluate an AL strategy. Once the AL strategy is learned on simulations, it is then applied to real AL scenarios. The more related are the low-resource language-pair in the real scenario to those used to train the AL strategy, the more effective the AL strategy would be.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 137, |
|
"text": "(Bachman et al., 2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to Actively Learn MT", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We are interested to train a translation model m \u03c6 \u03c6 \u03c6 which maps an input sentence from a source language x x x \u2208 X to its translation y y y \u2208 Y x x x in a target language, where Y x x x is the set of candidate translations for the input x x x and \u03c6 \u03c6 \u03c6 is the parameter vector of the translation model. Let D = {(x x x, y y y)} be a support set of parallel corpus, which is randomly partitioned into parallel bitext D lab , monolingual text D unl , and evaluation D evl datasets. Repeated random partitioning creates multiple instances of the AL problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning to Actively Learn MT", |
|
"sec_num": "2" |
|
}, |
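As a concrete illustration of the repeated random partitioning described above, the following Python sketch splits a parallel corpus into D^lab, D^unl, and D^evl and keeps an automatic oracle for the hidden translations. The function name, sizes, and data layout are hypothetical placeholders, not the authors' code.

```python
import random

def random_partition(corpus, n_lab, n_evl, seed=0):
    """Split a list of (source, target) pairs into labelled bitext D_lab,
    an 'unlabelled' pool D_unl (translations hidden), and an evaluation set D_evl."""
    rng = random.Random(seed)
    shuffled = corpus[:]
    rng.shuffle(shuffled)
    d_lab = shuffled[:n_lab]
    d_evl = shuffled[n_lab:n_lab + n_evl]
    pool = shuffled[n_lab + n_evl:]
    # Keep only the source side in the pool; the oracle can later "reveal"
    # the reference translation of a queried source sentence.
    d_unl = [src for src, _ in pool]
    oracle = {src: tgt for src, tgt in pool}
    return d_lab, d_unl, d_evl, oracle
```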
|
{ |
|
"text": "A crucial difference of our setting to the previous work (Fang et al., 2017; is that the AL agent receives the reward from the oracle only after taking a sequence of actions, i.e. selection of an AL batch which may correspond to multiple training minibatches for the underlying NMT model. This fulfils the requirements for effective training of NMT, as minibatch updates are more effective than those of single sentence pairs. Furthermore, it is presumably more efficient and practical to query the translation of an untranslated batch from a human translator, rather than one sentence in each AL round.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 76, |
|
"text": "(Fang et al., 2017;", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "At each time step t of an AL problem, the algorithm interacts with the oracle and queries the labels of a batch selected from the pool D unl t to form b b b t . As the result of this sequence of actions to select sentences in b b b t , the AL algorithm receives a reward BLEU(m \u03c6 \u03c6 \u03c6 , D evl ) which is the BLEU score on D evl based on the retrained NMT model using the batch m b b bt \u03c6 \u03c6 \u03c6 . Formally, this results in a hierarchical Markov decision process (HMDP) for batch sentence selection in AL. A state s s s t := D lab t , D unl t , b b b t , \u03c6 \u03c6 \u03c6 t of the HMDP in the time step t consist of the bitext D lab t , the monotext D unl t , the current text batch b b b t , and the parameters of the currently trained NMT model \u03c6 \u03c6 \u03c6 t . The high-level MDP consists of a goal set G := {retrain, halt HI }, where setting a goal g t \u2208 G corresponds to either halting the AL process, or giving the execution to the low-level MDP to collect a new batch of bitext b b b t , re-training the underlying NMT model to get the update parameters \u03c6 \u03c6 \u03c6 t+1 , receiving the reward", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "R HI (s s s t , a t , s s s t+1 ) := BLEU(m \u03c6 \u03c6 \u03c6 t+1 , D evl ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "and updating the new state as s s", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "s t+1 = D lab t \u222a b b b t , D unl t , \u2205, \u03c6 \u03c6 \u03c6 t+1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The halt HI goal is set in case the full AL annotation budget is exhausted, otherwise the re-train goal is set in the next time step.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The low-level MDP consists of primitive actions a t \u2208 D unl t \u222a {halt LO } corresponding to either selecting of the monolingual sentences in D unl t , or halting the low-level policy and giving the execution back to the high-level MDP. The halt action is performed in case the maximum amount of source text is chosen for the current AL round, when the oracle is asked for the translation of the source sentences in the monolingual batch, which is then replaced by the resulting bitext. The sentence selection action, on the other hand, forms the next state by adding the chosen monolingual sentence to the batch and removing it from the pool of monolingual sentences. The underlying NMT model is not trained as a result of taking an action in the low-level policy, and the reward function is constant zero.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A trajectory in our HMDP consists of \u03c3 : trajectories", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "= (s s s 1 , g 1 , \u03c4 1 , r 1 , s s s 2 , ..., s s s H , g H , r H , s s s H+1 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u03c4 := (s s s 1 , a 1 , s s s 2 , a 2 , ..., s s s T , a T , s s s T +1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Clearly, the intermediate goals set by the toplevel MDP into the \u03c3 are retrain, and only the last goal g H is halt HI , where H is determined by checking whether the total AL budget B HI is exhausted. Likewise, the intermediate actions in \u03c4 h are sentence selection, and only the last action a T is halt LO , where T is determined by checking whether the round-AL budget B LO is exhausted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
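To make the two-level budget bookkeeping concrete, here is a minimal Python sketch of the hierarchical loop: the outer loop plays the high-level goals (retrain, then halt_HI once B_HI is spent) and the inner loop plays the low-level sentence-selection actions until B_LO is spent. All function names (select_sentence, retrain, evaluate_bleu) and the token-based budget accounting are illustrative assumptions, not the authors' implementation.

```python
def active_learning_loop(d_lab, d_unl, d_evl, oracle, budget_hi, budget_lo,
                         select_sentence, retrain, evaluate_bleu):
    """Two-level AL loop mirroring the HMDP described above."""
    model = retrain(None, d_lab)
    spent_total, rewards = 0, []
    while spent_total < budget_hi and d_unl:          # high-level MDP: retrain vs. halt_HI
        batch, spent_round = [], 0
        while spent_round < budget_lo and d_unl:      # low-level MDP: pick sentences, then halt_LO
            x = select_sentence(model, d_lab, d_unl, batch)
            d_unl.remove(x)
            batch.append(x)
            spent_round += len(x.split())             # token-based budget as a proxy
        d_lab = d_lab + [(x, oracle[x]) for x in batch]  # oracle reveals translations
        model = retrain(model, d_lab)
        spent_total += spent_round
        rewards.append(evaluate_bleu(model, d_evl))   # per-round reward (anytime behaviour)
    return model, rewards
```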
|
{ |
|
"text": "We aim to find the optimal AL policy prescribing which datapoint needs to be queried in a given state to get the most benefit. The optimal policy is found by maximising the expected long-term reward, where the expectation is over the choice of the synthesised AL problems and other sources of randomness, i.e. partioing of D into D lab , D unl , and D evl . Following Bachman et al. (2017) , we maximise the sum of the rewards after each AL round to encourage the anytime behaviour, i.e. the model should perform well after each batch query.", |
|
"cite_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 389, |
|
"text": "Bachman et al. (2017)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hierarchical MDP Formulation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The question remains of how to train the policy network to maximize the reward, i.e. the generalisation performance of the underlying NMT model. As the policy for the high-level MDP is fixed, we only need to learn the optimal policy for the lowlevel MDP. We formulate learning the AL policy as an imitation learning problem. More concretely, the policy is trained using an algorithmic expert, which can generate a reasonable AL trajectories (batches) for each AL state in the highlevel MDP. The algorithmic expert's trajectories, i.e. sequences of AL states paired with the expert's actions in the low-level MDP, are then used to train the policy network. As such, the policy network is a classifier, conditioned on a context summarising both global and local histories, to choose the best sentence (action) among the candidates. After the AL policy is trained based on AL simulations, it is then transferred to the real AL scenario.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deep Imitation learning for AL-NMT", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For simplicity of presentation, the training algorithms are presented using a fixed number of AL iterations for the high-level and low-level MDPs. This corresponds to AL with the sentence-based budget. However, extending them for AL with token-based budget is straightforward, and we experiment with both versions in \u00a75.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deep Imitation learning for AL-NMT", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Policy Network's Architecture The policy scoring network is a fully-connected network with two hidden layers (see Figure 1 ). The input involves the representation for three elements: (i) global context which includes all previous AL batches, (ii) local context which summarises the previous sentences selected for the current AL batch, and (iii) the candidate sentence paired with its translation generated by the currently trained NMT model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 122, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Deep Imitation learning for AL-NMT", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each source sentence x x x paired with its translation y y y, we denote the representation by rep(x x x, y y y). We construct it by simply concatenating the representations of the source and target sentences, each of which is built by summing the embeddings of its words. We found this simple method to work well, compared to more complicated methods, e.g. taking the last hidden state of the decoder in the underlying NMT model. The global context (c c c global ) and local contexts (c c c local ) are constructed by summing the representation of the previously selected batches and sentence-pairs, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deep Imitation learning for AL-NMT", |
|
"sec_num": "4" |
|
}, |
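The following is a small PyTorch sketch of the scoring network as described: a two-hidden-layer MLP over the concatenation of the global context, the local context, and the candidate sentence-pair representation rep(x, y). The paper does not specify the framework or hidden sizes, so the class name, hidden dimension, and activation choices are assumptions.

```python
import torch
import torch.nn as nn

def rep(src_emb, tgt_emb):
    """rep(x, y): concatenate the sums of source and (machine-translated) target
    word embeddings; src_emb/tgt_emb are (sentence_len, emb_dim) tensors,
    so the result has dimension 2 * emb_dim."""
    return torch.cat([src_emb.sum(dim=0), tgt_emb.sum(dim=0)], dim=-1)

class PolicyScorer(nn.Module):
    """Two-hidden-layer MLP that scores a candidate sentence pair given the
    global and local AL contexts (all three vectors share dimension rep_dim)."""
    def __init__(self, rep_dim, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * rep_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, c_global, c_local, rep_xy):
        # Preference score for one candidate in the current state.
        return self.net(torch.cat([c_global, c_local, rep_xy], dim=-1)).squeeze(-1)
```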
|
{ |
|
"text": "The IL-based training method is presented in Algorithm 1. The policy network is initialised randomly, and trained based T simulated AL problems (lines 3-20), by portioning the available large bilingual corpus into three sets: (i) D lab as the growing training bitex, (ii) D unl as the pool of untranslated sentences where we pretend the translations are not given, and (iii) D evl as the evaluation set used by our algorithmic expert.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For each simulated AL problem, Algorithm 1 executes T HI iterations (lines 7-19) to collect AL batches for training the underlying NMT model and the policy network. An AL batch is obtained either from the policy network (line 15) or from the algorithmic expert (lines 10-13), depending on tossing a coin (line 9). The latter also includes adding the selected batch, the candidate batches, and the relevant state information to the replay Algorithm 1 Learning AL-NMT Policy Input: Parallel corpus D, Iwidth the width of the constructed search lattices, the coin parameter \u03b1, the number of sampled AL batches K Output: policy \u03c0 1: M \u2190 \u2205 Replay Memory 2: Initialise \u03c0 with a random policy 3: for T training iterations do 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "D lab , D evl , D unl \u2190 randomPartition(D) 5: \u03c6 \u03c6 \u03c6 \u2190 trainModel(D lab ) 6:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "c c cglobal \u2190 0 0 0 7:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "for t \u2190 1 to THI do MDPHI 8: S \u2190 searchLattice(D unl , Iwidth) 9:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "if coinToss(\u03b1) = Head then 10: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "B B B \u2190 {samplePath(S, \u03c6 \u03c6 \u03c6, c c cglobal, \u03c0, \u03b2)} K 1 11: B B B \u2190 B B B + samplePath(S, \u03c6 \u03c6 \u03c6,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "D lab \u2190 D lab + b b b 17: D unl \u2190 D unl \u2212 {x x x s.t. (x x x, y y y) \u2208 b b b} 18: \u03c6 \u03c6 \u03c6 \u2190 retrainModel(\u03c6 \u03c6 \u03c6, D lab ) 19: c c cglobal \u2190 c c cglobal \u2295 rep(b b b) 20: \u03c0 \u2190 updatePolicy(\u03c0, M, \u03c6 \u03c6 \u03c6) 21: return \u03c0 Algorithm 2 samplePath (selecting an AL batch) Input: Search lattice S, global context c c cglobal, policy \u03c0, perturbation probability \u03b2 Output: Selected AL batch b b b 1: b b b \u2190 \u2205 2: c c clocal \u2190 0 0 0 3: for t \u2190 1 to TLO do MDPLO 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "if coinToss(\u03b2) = Head then 5:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "x x xt \u2190 \u03c00(S[t]) perturbation policy 6: else 7:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "x memory M , based on which the policy will be retrained. The selected batch is then used to retrain the underlying NMT model, update the training bilingual corpus and pool of monotext, and update the global context vector (lines 16-19).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The mixture of the policy network and algorithmic expert in batch collection on simulated AL problems is inspired by Dataset Aggregation DAGGER (Ross and Bagnell, 2014) . This makes sure that the collected states-actions pairs in the replay memory include situations encountered beyond executing only the algorithmic expert. This informs the trained AL policy how to act reasonably in new situations encountered in the test time, where only the network policy is in charge and the expert does not exist.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 168, |
|
"text": "(Ross and Bagnell, 2014)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
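A minimal sketch of this DAgger-style coin toss (lines 9-15 of Algorithm 1) is shown below: with probability α the expert chooses the batch and its choice is stored for imitation, otherwise the current policy rolls out its own batch so the replay memory also covers states the policy will visit at test time. The interfaces expert_batch and policy_batch are hypothetical placeholders.

```python
import random

def collect_batch(alpha, expert_batch, policy_batch, replay_memory, state):
    """DAgger-style batch collection: mix the algorithmic expert and the
    current policy when gathering training data for imitation learning."""
    if random.random() < alpha:                     # coinToss(alpha) = Head
        candidates, best = expert_batch(state)      # expert ranks sampled batches by dev-set BLEU
        replay_memory.append((state, candidates, best))
        return best
    return policy_batch(state)                      # otherwise follow the learned policy
```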
|
{ |
|
"text": "\u21e1 \u21e1 S[1] S[T LO ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": ".... Algorithmic Expert At a given AL state, the algorithmic expert selects a reasonable batch from the pool, D unl via:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "arg max b b b\u2208B B B BLEU(m b b b \u03c6 \u03c6 \u03c6 , D evl )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where m b b b \u03c6 \u03c6 \u03c6 denotes the underlying NMT model \u03c6 further retrained by incorporating the batch b b b, and B B B denotes the possible batches from D unl . However, the number of possible batches is exponential in the size D unl , hence the above optimisation procedure would be very slow even for a moderately-sized pool.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We construct a search lattice S from which the candidate batches in B B B are sampled (see Figure 2 ). The search lattice is constructed by sampling a fixed number of candidate sentences I width from D unl for each position in a batch, whose size is T LO . A candidate AL batch is then be selected using Algorithm 2. It executes a mixture of the current AL policy \u03c0 and a perturbation policy \u03c0 0 (e.g. random sentence selection or any other heuristic) in the lower-level MDP to sample a batch. After several such batches are sampled to form B B B, the best one is selected according to eqn 1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 100, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
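A rough sketch of building the search lattice and of Algorithm 2's samplePath is given below, assuming a random perturbation policy π_0 and a policy_score callable (e.g. the PolicyScorer above) that scores a candidate given the batch built so far. Function names and the random choice of π_0 are illustrative assumptions. The expert of Eqn. (1) would then retrain the NMT model with each sampled batch and keep the one with the highest BLEU on D^evl.

```python
import random

def search_lattice(d_unl, i_width, t_lo, rng=random):
    """Sample i_width candidate source sentences for each of the t_lo
    positions in a batch (the search lattice S)."""
    return [rng.sample(d_unl, i_width) for _ in range(t_lo)]

def sample_path(lattice, policy_score, beta, rng=random):
    """Walk the lattice: at each slot pick a perturbation (random) sentence
    with probability beta, otherwise the sentence preferred by the policy."""
    batch = []
    for slot in lattice:
        if rng.random() < beta:                      # perturbation policy pi_0
            choice = rng.choice(slot)
        else:                                        # current policy pi
            choice = max(slot, key=lambda x: policy_score(batch, x))
        batch.append(choice)
    return batch
```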
|
{ |
|
"text": "We have carefully designed the search space to be able to incorporate the current policy's recommended batch and sampled deviations from it in B B B. This is inspired by the LOLS (Locally Optimal Learning to Search) algorithm (Chang et al., 2015) , to invest efforts in the neighbourhood of the current policy and improve it. Moreover, having to deal with only I width number of sentences at each selection stage makes the batch formation algorithm based on the policy fast and efficient.", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 246, |
|
"text": "(Chang et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Re-training the Policy Network To train our policy network, we turn sentence preference scores to probabilities over the candidate batches,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "IL-based Training Algorithm", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Input: Bitext D lab , monotext D unl , pre-trained policy \u03c0 Output: NMT model \u03c6 \u03c6 \u03c6 1: Initialise \u03c6 \u03c6 \u03c6 with D lab 2: c c cglobal \u2190 0 0 0 3: for t \u2190 1 to THI do 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "S \u2190 searchLattice(D unl , Iwidth) 5: b b b \u2190 samplePath(S, \u03c6 \u03c6 \u03c6, c c cglobal, \u03c0, 0) 6: D lab \u2190 D lab + b b b 7: D unl \u2190 D unl \u2212 {x x x s.t. (x x x, y y y) \u2208 b b b} 8: \u03c6 \u03c6 \u03c6 \u2190 retrainModel(\u03c6 \u03c6 \u03c6, D lab ) 9:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "c c cglobal \u2190 c c cglobal \u2295 rep(b b b) 10: return \u03c6 \u03c6 \u03c6 and optimise the parameters to maximise the loglikelihood objective .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "More specifically, let (c c c global , B B B, b b b) be a training tuple in the replay memory. We define the probability of the correct action/batch as .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The preference score for a batch is the sum of its sentences' preference scores,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "score(b b b, c c c global ) := |b b b| t=1 \u03c0(c c c global , c c c local<t , rep(x x x t , y y y t ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where c c c local<t denotes the local context up to the sentence t in the batch.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
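The sketch below puts the batch score and the log-likelihood training together in PyTorch: the batch score is the sum of per-sentence scores with a running local context, and the expert's batch is treated as the correct class under a softmax over the candidate batches' scores. The softmax normalisation is our reading of "turn preference scores into probabilities"; the scorer is the hypothetical PolicyScorer sketched earlier.

```python
import torch
import torch.nn.functional as F

def batch_score(scorer, c_global, batch_reps):
    """score(b, c_global): sum of per-sentence preference scores, where the
    local context is the running sum of the representations chosen so far."""
    c_local = torch.zeros_like(batch_reps[0])
    total = 0.0
    for rep_xy in batch_reps:
        total = total + scorer(c_global, c_local, rep_xy)
        c_local = c_local + rep_xy
    return total

def policy_loss(scorer, c_global, candidate_batches, expert_index):
    """Negative log-likelihood of the expert's batch under a softmax over
    the scores of all candidate batches (a hedged reading of the objective)."""
    scores = torch.stack([batch_score(scorer, c_global, b) for b in candidate_batches])
    return F.cross_entropy(scores.unsqueeze(0), torch.tensor([expert_index]))
```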
|
{ |
|
"text": "To form the log-likelihood, we use recent tuples and randomly sample several older ones from the replay memory. We then use stochastic gradient descent (SGD) to maximise the training objective, where the gradient of the network parameters are calculated using the backpropagation algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Transfering the Policy We now apply the policy learnt on the source language pair to AL in the target task (see Algorithm 3). To enable transferring the policy to a new language pair, we make use of pre-trained multilingual word embeddings. In our experiments, we either use the pre-trained word embeddings from Ammar et al. (2016) or build it based on the available bitext and monotext in the source and target language (c.f. \u00a75.2). To retrain our NMT model, we make parameter updates based on the mini-batches from the AL batch as well as sampled mini-batches from the previous iterations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 312, |
|
"end": 331, |
|
"text": "Ammar et al. (2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm 3 Policy Transfer", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Datasets Our experiments use the following language pairs in the news domain based on WMT2018: English-Czech (EN-CS), English-German (EN-DE), English-Finnish (EN-FI). For AL evaluation, we randomly sample 500K sentence pairs from the parallel corpora in WMT2018 for each of the three language pairs, and take 100K as the initially available bitext and the rest of 400K as the pool of untranslated sentences, pretending the translation is not available. During the AL iterations, the translation is revealed for the queried source sentences in order to retrain the underlying NMT model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For pre-processing the text, we normalise the punctuations and tokenise using moses 1 scripts. The trained models are evaluated using BLEU on tokenised and cased sensitive test data from the newstest 2017.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "NMT Model Our baseline model consists of a 2-layer bi-directional LSTM encoder with an embeddings size of 512 and a hidden size of 512. The 1-layer LSTM decoder with 512 hidden units uses an attention network with 128 hidden units. We use a multiplicative-style attention attention architecture (Luong et al., 2015) . The model is optimized using Adam (Kingma and Ba, 2014) with a learning rate of 0.0001, where the dropout rate is set to 0.3. We set the mini-batch size to 200 and the maximum sentence length to 50. We train the base NMT models for 5 epochs on the initially available bitext, as the perplexity on the dev set do not improve beyond more training epochs. After getting new translated text in each AL iteration, we further sample \u00d75 more bilingual sentences from the previously available bitext, and make one pass over this data to re-train the underlying NMT model. For decoding, we use beam-search with the beam size of 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 315, |
|
"text": "(Luong et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We compare our policybased sentence selection for NMT-AL with the following heuristics:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Strategies", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Random We randomly select monolingual sentences up to the AL budget.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Strategies", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Length-based We use shortest/longest monolingual sentences up to the AL budget.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Strategies", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Total Token Entropy (TTE) We sort monolingual sentences based on their TTE which has been shown to be a strong AL heuristic (Settles and Craven, 2008) sequence-prediction tasks. Given a monolingual sentence x x x, we compute the TTE as", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 152, |
|
"text": "(Settles and Craven, 2008)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Strategies", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "|\u0177 y y| i=1 Entropy[P i (.|\u0177 y y <i , x x x, \u03c6 \u03c6 \u03c6)]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Strategies", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where\u0177 y y is the decoded translation based on the current underlying NMT model \u03c6 \u03c6 \u03c6, and P i (.|\u0177 y y <i , x x x, \u03c6 \u03c6 \u03c6) is the distribution over the vocabulary words for the position i of the translation given the source sentence and the previously generated words. We also experimented with the normalised version of this measure, i.e. dividing TTE by |\u0177 y y|, and found that their difference is negligible. So we only report TTE results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Selection Strategies", |
|
"sec_num": null |
|
}, |
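For reference, a minimal sketch of computing TTE from the decoder's per-position output distributions over its own decoded translation ŷ is shown below; how those distributions are obtained from a particular NMT model is left abstract, and the normalised variant (reported above as making a negligible difference) is included for completeness.

```python
import math

def total_token_entropy(step_distributions):
    """TTE(x): sum over decoded positions of the entropy of the model's
    distribution over the vocabulary at that position.
    `step_distributions` is a list with one probability vector (an iterable
    of probabilities) per generated token of the decoded translation."""
    tte = 0.0
    for dist in step_distributions:
        tte += -sum(p * math.log(p) for p in dist if p > 0.0)
    return tte

def normalised_tte(step_distributions):
    """TTE divided by the length |y-hat| of the decoded translation."""
    return total_token_entropy(step_distributions) / max(1, len(step_distributions))
```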
|
{ |
|
"text": "Setting We train the AL policy on a languagepair treating it as high-resource, and apply it to another language-pair treated as low-resource. To transfer the policies across languages, we make use of pre-trained multilingual word embeddings learned from monolingual text and bilingual dictionaries (Ammar et al., 2016) . Furthermore, we use these cross-lingual word embeddings to initialise the embedding table of the NMT in the lowresource language-pair. The source and target vocabularies for the NMT model in the low-resource scenario are constructed using the initially available 100K bitext, and are expanded during the AL iterations as more translated text becomes available.", |
|
"cite_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 318, |
|
"text": "(Ammar et al., 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating from English", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Results Table 1 shows the results. The experiments are performed with two limits on token annotation budget: 135k and 677k corresponding to select roughly 10K and 50K sentences in to- tal in AL 2 , respectively. The number of AL iterations is 50, hence the token annotation budget for each round is 2.7K and 13.5K. As we can see our policy-based AL method is very effective, and outperforms the strong AL baselines in all cases except, when transferring the policy trained on EN \u2192 FI to EN \u2192 CS where it is on-par with the best baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 15, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Translating from English", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Sentence vs Token Budget In our results in Table 1, we have taken the number of tokens in the selected sentences as a proxy for the annotation cost. Another option to measure the annotation cost is the number of selected sentences, which admittedly is not the best proxy. Nonetheless, one 100K initial bitext 10K initial bitext AL method cold-start warm-start cold-start warm-start Base NMT 10.6/11.8 13.9/14.7 2.3/2.5 5.4/5.8 Random 12.9/13.3 15.1/16.2 5.5/5.6 9.3/9.6 Shortest 13.0/13.5 15.9/16.4 5.9/6.1 9.1/9.3 Longest 12.5/12.9 15.3/15.8 5.7/5.9 9.8/10.2 TTE 12.8/13.2 15.8/16.1 5.9/6.2 9.8/10.1 \u03c0 CS\u2192EN 13.9/14.2 16.8/17.3 6.3/6.5 10.5/10.9 \u03c0 FI\u2192EN 13.5/14.0 16.5/16.9 6.1/6.4 10.2/10.3 \u03c0 EN\u2192CS 13.3/13.6 16.4/16.5 5.1/5.7 10.3/10.5 \u03c0 EN\u2192FI 13.2/13.5 15.9/16.3 5.1/5.6 9.8/10.2 Ensemble \u03c0 CS,FI\u2192EN 14.1/14.3 16.8/17.5 6.3/6.5 10.5/10.9 \u03c0 EN\u2192CS,FI 13.6/13.8 16.5/16.9 5.8/5.9 10.3/10.5 Full Model (500K) 20.5/20.6 22.3/22.5 -- Table 3 : BLEU scores on tests sets using different selection strategies. The token level annotation budget is 677K.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 932, |
|
"end": 939, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Translating from English", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "may be interested to see how different AL methods compare against each other based on this cost measure. Table 2 show the results based on the sentencebased annotation cost. We train a policy on EN \u2192 CS, and apply it to EN \u2192 DE and EN \u2192 FI translation tasks. In addition to the token-based AL policy from Table 1, we train another policy based on the sentence budget. The token-based policy is competitive in EN \u2192 DE, where the longest sentence heuristic achieves the best performance, presumably due to the enormous training signal obtained by translation of long sentences. The token-based policy is on par with longest sentence heuristic in EN \u2192 FI for both 10K and 100K AL budgets to outperform the other methods.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 112, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Translating from English", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Setting We investigate the performance of the AL methods on DE \u2192 EN based on the policies trained on the other language pairs. In addition to 100K training data condition, we assess the effectiveness of the AL methods in an extremely lowresource condition consisting of only 10K bilingual sentences as the initial bitext.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating into English", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In addition to the source word embedding table that we initialised in the previous section's experiments using the cross lingual word embeddings, we are able further to initialise all of the other NMT parameters for DE \u2192 EN translation. This includes the target word embedding table and the decoder softmax, as the target language is the same (EN) in the language-pairs used for both policy training and policy testing. We refer to this setting as warm-start, as opposed to cold-start in which we only initialised the source embedding table with the cross-lingual embeddings. For the warm-start experiments, we transfer the NMT trained on 500K CS-EN bitext, based on which the policy is trained. We use byte-pair encoding (BPE) (Sennrich et al., 2015b) with 30K operations to bpe the EN side. For the source side, we use words in order to use the cross-lingual word embeddings. All parameters of the transferred NMT are frozen, except the ones corresponding to the bidirectional RNN encoder and the source word embedding table.", |
|
"cite_spans": [ |
|
{ |
|
"start": 728, |
|
"end": 752, |
|
"text": "(Sennrich et al., 2015b)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating into English", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To make this experimental condition as realistic as possible, we learn the cross-lingual word embedding for DE using large amounts of monolingual text and the initially available bitext, assuming a multilingual word embedding already exists for the languages used in the policy training phase. More concretely, we sample 5M DE text from WMT2018 data 3 , and train monolingual word embeddings as part of a skip-gram language model using fastText. 4 We then create a bilingual EN-DE word dictionary based on the initially available bitext (either 100K or 10K) using word alignments generated by fast align. 5 The bilingual dictionary is used to project the monolingual DE word embedding space into that of EN, hence aligning the spaces through the following orthogonal projection:", |
|
"cite_spans": [ |
|
{ |
|
"start": 446, |
|
"end": 447, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 605, |
|
"end": 606, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating into English", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "arg max Q Q Q m i=1 e e e[y i ] T \u2022 Q Q Q \u2022 e e e[x i ] s.t. Q Q Q T \u2022 Q Q Q = I I I where {(y i , x i )} m i=1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating into English", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "is the bilingual dictionary consisting of pairs of DE-EN words 6 , e e e[y i ] and e e e[x i ] are the embeddings of the DE and EN words, and Q Q Q is the orthogonal transformation matrix aligning the two embedding spaces. We solve the above optimisation problem using SVD as in Smith et al. (2017) . The cross-lingual word embedding for a DE word y is then e e e[y] T \u2022 Q Q Q. We build two such cross-lingual embeddings based on the two bilingual dictionaries constructed from the 10K and 100K bitext, in order to use in their corresponding experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 64, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 279, |
|
"end": 298, |
|
"text": "Smith et al. (2017)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translating into English", |
|
"sec_num": "5.2" |
|
}, |
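The constrained objective above has the standard orthogonal Procrustes closed-form solution via SVD, which is the route used by Smith et al. (2017). A NumPy sketch under that assumption follows; the arrays de_vecs and en_vecs (the y_i and x_i embeddings of the dictionary pairs) and the function names are placeholders.

```python
import numpy as np

def learn_orthogonal_map(de_vecs, en_vecs):
    """Solve  max_Q  sum_i e[y_i]^T Q e[x_i]  s.t. Q^T Q = I  in closed form.
    de_vecs: (m, d) embeddings of the DE dictionary words (the y_i);
    en_vecs: (m, d) embeddings of the aligned EN words (the x_i)."""
    # The objective equals trace(Q @ M) with M = sum_i x_i y_i^T = X^T Y,
    # which an orthogonal Q maximises at Q = V U^T for the SVD M = U S V^T.
    m = en_vecs.T @ de_vecs
    u, _, vt = np.linalg.svd(m)
    return vt.T @ u.T

def project_de_word(de_vec, q):
    """Cross-lingual embedding of a DE word: e[y]^T Q."""
    return de_vec @ q
```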
|
{ |
|
"text": "Results Table 3 presents the results, on two conditions of 100K and 10K initial bilingual sentences. For each of these data conditions, we experiments with both cold-start and warm-start settings using the pre-trained multilingual word embeddings from Ammar et al. (2016) or those we have trained with the available bitext plus additional monotext. Firstly, the warm start strategy to transfer the NMT system from CS \u2192 EN to DE \u2192 EN has been very effective, particularly on extremely low bilingual condition of 10K sentence pairs. It is worth noting that our multilingual word embeddings are very effective, even-though they are trained using small bitext. Secondly, our policy-based AL methods are more effective than the baseline methods and lead to up to +1 BLEU score improvements.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 271, |
|
"text": "Ammar et al. (2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 15, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Translating into English", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We further take the ensemble of multiple trained policies to build a new AL query strategy. In the ensemble, we rank sentences based on each of the policies. Then we produce a final ranking by combining these rankings. Specifically, we sum the ranking of each sentence according to each policy to get a rank score, and re-rank the sentences according to their rank score. Table 3 shows that ensembling is helpful, but does not produce significant improvements compared to the best policy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 372, |
|
"end": 379, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Translating into English", |
|
"sec_num": "5.2" |
|
}, |
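A minimal sketch of this rank-sum ensembling is shown below: rank the pool under each policy, sum the per-policy ranks for each sentence, and re-rank by the summed score (lower is better). The dictionary-of-scores interface is an illustrative assumption.

```python
def ensemble_rank(sentences, policy_scores):
    """policy_scores: list of dicts mapping sentence -> preference score,
    one dict per trained policy. Returns sentences sorted by the sum of
    their per-policy ranks (rank 0 = most preferred by that policy)."""
    rank_sum = {s: 0 for s in sentences}
    for scores in policy_scores:
        ordered = sorted(sentences, key=lambda s: scores[s], reverse=True)
        for rank, s in enumerate(ordered):
            rank_sum[s] += rank
    return sorted(sentences, key=lambda s: rank_sum[s])
```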
|
{ |
|
"text": "Distribution of word frequency TTE is a competitive heuristic-based strategy, as shown in the above experiments. We compare the word frequency distributions of the selected source text returned by Random, TTE against our AL policy. The policy we use here is \u03c0 CS\u2192EN and applied on the task of DE\u2192EN, which is conducted in the warm-start scenario with 100K initial bitext and 677K token budget. Fig. 3 is the log-log plot of the fraction of vocabulary words (y axis) having a particular frequency (x axis). Our AL policy is less likely to select high-frequency words than other two methods when it is given a fixed token budget.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 394, |
|
"end": 400, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Weighted combination of heuristics In order to get the intuition of which of the heuristics our AL policy resorts to, we again use policy \u03c0 CS\u2192EN and apply on the task of DE\u2192EN, which is conducted in the warm-start scenario with 100K initial bitext and 677K token budget. Meanwhile, we get the preference scores for the sentences from the monolingual set. Then, we fit a linear regression model based on the sentences and their scores, in which the response variable is the preference score and the predictor variables are extracted features or heuristics based on the sentences. The extracted features are (length, T T E, f 0 , f 1 , f 2 , f 3+ ), where f i is the fraction of words in the sentence that appear i times in the bitext. Table 4 shows the the coeffients of these heuristics, their standard errors (SE) and t values. We can see that our AL policy considers length and TTE in parallel as they have a close range of coefficients, the policy also prefers low frequency than high frequency words. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 735, |
|
"end": 742, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "5.3" |
|
}, |
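For illustration, a minimal NumPy sketch of this analysis regression is given below: an ordinary least-squares fit of the policy's preference scores onto the heuristic features (length, TTE, f_0, f_1, f_2, f_3+). Feature extraction is left abstract, and the helper name is hypothetical.

```python
import numpy as np

def fit_heuristic_regression(features, scores):
    """features: (n, 6) array with columns (length, TTE, f0, f1, f2, f3plus);
    scores: (n,) policy preference scores. Returns OLS coefficients with an
    intercept prepended."""
    x = np.hstack([np.ones((features.shape[0], 1)), features])  # add intercept column
    coef, *_ = np.linalg.lstsq(x, scores, rcond=None)
    return coef  # [intercept, length, TTE, f0, f1, f2, f3plus]
```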
|
{ |
|
"text": "For statistical MT (SMT), active learning is well explored, e.g. see ; , where several heuristics for query sentence selection have been proposed, including the entropy over the potential translations (uncertainty sampling), query by committee, and a similarity-based sentence selection method. However, active learning is largely under-explored for NMT. The goal of this paper is to provide an approach to learn an active learning strategy for NMT based on a Hierarchical Markov Decision Process (HMDP) formulation of the pool-based AL (Bachman et al., 2017; .", |
|
"cite_spans": [ |
|
{ |
|
"start": 537, |
|
"end": 559, |
|
"text": "(Bachman et al., 2017;", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Expoliting monolingual data for nmt Monolingual data play a key role in neural machine translation systems, previous work have considered training a seperate language model on the target side (Jean et al., 2014; Gulcehre et al., 2015; Domhan and Hieber, 2017) . Rather than using explicit language model, Cheng et al. (2016) introduced an auto-encoder-based approach, in which the source-to-target and target-to-source translation models act as encoder and decoder respectively. Moreover, back translation approaches (Sennrich et al., 2015a; Zhang et al., 2018; Hoang et al., 2018) show efficient use of monolingual data to improve neural machine translation. Dual learning extends back translation by using a deep RL approach. More recently, unsupervised approaches (Lample et al., 2017b; Artetxe et al., 2017) and phrase-based NMT (Lample et al., 2018) learn how to translate when having access to only a large amount of monolingual corpora, these models also extend the use of back translation and cross-lingual word embeddings are provided as the latent semantic space for sentences from monolingual corpora in different languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 211, |
|
"text": "(Jean et al., 2014;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 212, |
|
"end": 234, |
|
"text": "Gulcehre et al., 2015;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 259, |
|
"text": "Domhan and Hieber, 2017)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 324, |
|
"text": "Cheng et al. (2016)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 541, |
|
"text": "(Sennrich et al., 2015a;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 542, |
|
"end": 561, |
|
"text": "Zhang et al., 2018;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 581, |
|
"text": "Hoang et al., 2018)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 767, |
|
"end": 789, |
|
"text": "(Lample et al., 2017b;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 790, |
|
"end": 811, |
|
"text": "Artetxe et al., 2017)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Meta-AL learning Several meta-AL approaches have been proposed to learn the AL selection strategy automaticclay from data. These methods rely on deep reinforcement learning framework (Yue et al., 2012; Wirth et al., 2017) or bandit algorithms (Nguyen et al., 2017) . Bachman et al. (2017) introduced a policy gradient based method which jointly learns data representation, selection heuristic as well as the model prediction function. Fang et al. (2017) designed an active learning algorithm based on a deep Q-network, in which the action corresponds to binary annotation decisions applied to a stream of data. Woodward and Finn (2017) extended one shot learning to active learning and combined reinforcement learning with a deep recurrent model to make labeling decisions. As far as we know, we are the first one to develop the Meta-AL method to make use of monolingual data for neural machine translation, the method we proposed in this paper can be applied at mini-batch level and conducted in cross lingual settings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 201, |
|
"text": "(Yue et al., 2012;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 221, |
|
"text": "Wirth et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 264, |
|
"text": "(Nguyen et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 288, |
|
"text": "Bachman et al. (2017)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 453, |
|
"text": "Fang et al. (2017)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 635, |
|
"text": "Woodward and Finn (2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have introduced an effective approach for learning active learning policies for NMT, where the learner needs to make batch queries. We have provides a hierarchical MDP formulation of the problem, and proposed a policy network structure capturing the context in both MDP levels. Our policy training method uses imitation learning and a search lattice to carefully collect AL trajectories for further improvement of the current policy. We have provided experimental results on three language pairs, where the policies are transferred across languages using multilingual word embeddings. Our experiments confirms that our method is more effective than strong heuristic-based methods in various conditions, including cold-start and warm-start as well as small and extremely small data conditions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "http://www.statmt.org/moses", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These token limit budgets are calculated using random selection of 10K and 50K sentences multiple times, and taking the average of the tokens across the sampled sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We make sure that it does not include the DE sentences in the 400K pool used in the AL experiments.4 https://github.com/facebookresearch/fastText 5 https://github.com/clab/fast align", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "One can incorporate human curated bilingual lexicons to the automatically curated dictionaries as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank the feedback from anonymous reviewers. This work was supported by computational resources from the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) at Monash University, and partly by an NVIDIA GPU grant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Massively multilingual word embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Waleed", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Mulcaire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yulia", |
|
"middle": [], |
|
"last": "Tsvetkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah A", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1602.01925" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Unsupervised neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Mikel", |
|
"middle": [], |
|
"last": "Artetxe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gorka", |
|
"middle": [], |
|
"last": "Labaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1710.11041" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural ma- chine translation. arXiv preprint arXiv:1710.11041.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning algorithms for active learning", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Bachman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Trischler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 34th International Conference on Machine Learning", |
|
"volume": "70", |
|
"issue": "", |
|
"pages": "301--310", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Bachman, Alessandro Sordoni, and Adam Trischler. 2017. Learning algorithms for active learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 301-310, International Convention Centre, Sydney, Australia. PMLR.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Bucking the trend: Large-scale cost-focused active learning for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Bloodgood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "854--864", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Bloodgood and Chris Callison-Burch. 2010. Bucking the trend: Large-scale cost-focused active learning for statistical machine translation. In Pro- ceedings of the 48th Annual Meeting of the Associa- tion for Computational Linguistics, pages 854-864.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Learning to search better than your teacher", |
|
"authors": [ |
|
{ |
|
"first": "Kai-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akshay", |
|
"middle": [], |
|
"last": "Krishnamurthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alekh", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Langford", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 32Nd International Conference on International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2058--2066", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agar- wal, Hal Daum\u00e9, III, and John Langford. 2015. Learning to search better than your teacher. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning, pages 2058-2066.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Semisupervised learning for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yong", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhongjun", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1606.04596" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi- supervised learning for neural machine translation. arXiv preprint arXiv:1606.04596.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Using targetside monolingual data for neural machine translation through multi-task learning", |
|
"authors": [ |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Domhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hieber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1500--1505", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tobias Domhan and Felix Hieber. 2017. Using target- side monolingual data for neural machine translation through multi-task learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1500-1505.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning how to active learn: A deep reinforcement learning approach", |
|
"authors": [ |
|
{ |
|
"first": "Meng", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1708.02383" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. arXiv preprint arXiv:1708.02383.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Universal neural machine translation for extremely low resource languages", |
|
"authors": [ |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hany", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [ |
|
"O", |
|
"K" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1802.05368" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor OK Li. 2018. Universal neural machine translation for extremely low resource languages. arXiv preprint arXiv:1802.05368.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "On using monolingual corpora in neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Caglar", |
|
"middle": [], |
|
"last": "Gulcehre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kelvin", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Loic", |
|
"middle": [], |
|
"last": "Barrault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huei-Chi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fethi", |
|
"middle": [], |
|
"last": "Bougares", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Holger", |
|
"middle": [], |
|
"last": "Schwenk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1503.03535" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On us- ing monolingual corpora in neural machine transla- tion. arXiv preprint arXiv:1503.03535.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Active learning for statistical phrase-based machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Roy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "415--423", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Proceedings of Human Lan- guage Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 415-423. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Active learning for multilingual statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "181--189", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gholamreza Haffari and Anoop Sarkar. 2009. Active learning for multilingual statistical machine transla- tion. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL, pages 181-189.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Dual learning for machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingce", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liwei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nenghai", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tieyan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Ying", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "820--828", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual learn- ing for machine translation. In Advances in Neural Information Processing Systems, pages 820-828.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Iterative backtranslation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Duy", |
|
"middle": [], |
|
"last": "Vu Cong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "18--24", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "On using very large target vocabulary for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "S\u00e9bastien", |
|
"middle": [], |
|
"last": "Jean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyunghyun", |
|
"middle": [], |
|
"last": "Cho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roland", |
|
"middle": [], |
|
"last": "Memisevic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.2007" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On using very large tar- get vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", |
|
"authors": [ |
|
{ |
|
"first": "Melvin", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Schuster", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maxim", |
|
"middle": [], |
|
"last": "Krikun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonghui", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikhil", |
|
"middle": [], |
|
"last": "Thorat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernanda", |
|
"middle": [], |
|
"last": "Vigas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Wattenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Macduff", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vigas, Martin Wattenberg, G.s Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's multilingual neural machine translation system: En- abling zero-shot translation.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Adam: A method for stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Diederik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jimmy", |
|
"middle": [], |
|
"last": "Kingma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1412.6980" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Six challenges for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Knowles", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the First Workshop on Neural Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--39", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Pro- ceedings of the First Workshop on Neural Machine Translation, pages 28-39. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Unsupervised machine translation using monolingual corpora only", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.00043" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017a. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Unsupervised machine translation using monolingual corpora only", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.00043" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017b. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Phrase-based & neural unsupervised machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.07755" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine trans- lation. arXiv preprint arXiv:1804.07755.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Learning how to actively learn: A deep imitation learning approach", |
|
"authors": [ |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wray", |
|
"middle": [], |
|
"last": "Buntine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gholamreza", |
|
"middle": [], |
|
"last": "Haffari", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning how to actively learn: A deep im- itation learning approach. In Proceedings of the An- nual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Effective approaches to attentionbased neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Minh-Thang", |
|
"middle": [], |
|
"last": "Luong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Pham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.04025" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Reinforcement learning for bandit neural machine translation with simulated human feedback", |
|
"authors": [ |
|
{ |
|
"first": "Khanh", |
|
"middle": [], |
|
"last": "Nguyen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1707.07402" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Khanh Nguyen, Hal Daum\u00e9 III, and Jordan Boyd- Graber. 2017. Reinforcement learning for bandit neural machine translation with simulated human feedback. arXiv preprint arXiv:1707.07402.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Reinforcement and imitation learning via interactive noregret learning", |
|
"authors": [ |
|
{ |
|
"first": "Stephane", |
|
"middle": [], |
|
"last": "Ross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Bagnell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1406.5979" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephane Ross and J Andrew Bagnell. 2014. Rein- forcement and imitation learning via interactive no- regret learning. arXiv preprint arXiv:1406.5979.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Improving neural machine translation models with monolingual data", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1511.06709" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015a. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.07909" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015b. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "An analysis of active learning strategies for sequence labeling tasks", |
|
"authors": [ |
|
{ |
|
"first": "Burr", |
|
"middle": [], |
|
"last": "Settles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Craven", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the conference on empirical methods in natural language processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1070--1079", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Burr Settles and Mark Craven. 2008. An analysis of ac- tive learning strategies for sequence labeling tasks. In Proceedings of the conference on empirical meth- ods in natural language processing, pages 1070- 1079. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Samuel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "David", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Turban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nils", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Hamblin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hammerla", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1702.03859" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "A survey of preferencebased reinforcement learning methods", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Wirth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Riad", |
|
"middle": [], |
|
"last": "Akrour", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerhard", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "F\u00fcrnkranz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "18", |
|
"issue": "136", |
|
"pages": "1--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian Wirth, Riad Akrour, Gerhard Neumann, and Johannes F\u00fcrnkranz. 2017. A survey of preference- based reinforcement learning methods. Journal of Machine Learning Research, 18(136):1-46.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "The k-armed dueling bandits problem", |
|
"authors": [ |
|
{ |
|
"first": "Yisong", |
|
"middle": [], |
|
"last": "Yue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Broder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Kleinberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "J. Comput. Syst. Sci", |
|
"volume": "78", |
|
"issue": "5", |
|
"pages": "1538--1556", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yisong Yue, Josef Broder, Robert Kleinberg, and Thorsten Joachims. 2012. The k-armed dueling ban- dits problem. J. Comput. Syst. Sci., 78(5):1538- 1556.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Joint training for neural machine translation models with monolingual data", |
|
"authors": [ |
|
{ |
|
"first": "Zhirui", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shujie", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mu", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enhong", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.00353" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and En- hong Chen. 2018. Joint training for neural machine translation models with monolingual data. arXiv preprint arXiv:1803.00353.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "which is the concatenation of interleaved high-level trajectory \u03c4 HI := (s s s 1 , g 1 , r 1 , s s s 2 , .., s s s H+1 ) and low-level score The policy network.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "c c cglobal, \u03c0, 0) 12: b b b \u2190 arg max b b b \u2208B B B BLEU(m b b b \u03c6 \u03c6 \u03c6 , D evl ) expert 13: M \u2190 M \u222a {(c c cglobal, \u03c6 \u03c6 \u03c6, B B B, b b b)} 14: else 15: b b b \u2190 samplePath(S, \u03c6 \u03c6 \u03c6, c c cglobal, \u03c0, 0) policy 16:", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "x xt \u2190 arg max x x x\u2208S[t] \u03c0(c c cglobal, c c clocal, x x x) 8: y y yt \u2190 oracle(x x xt) getting the gold translation 9: c c clocal \u2190 c c clocal \u2295 rep(x x xt, y y yt) 10: b b b \u2190 b b b + (x x xt, y y yt) 11: return b b b", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"text": "The search lattice and the selection of a batch based on the perturbed policy. Each circle denotes a sentence from the pool of monotext. The number of sentences at each time step, denoted by S[t], is I width . The black sentences are selected in this AL batch.", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"text": "P r(b b b|B B B, c c c global ) := score(b b b, c c c global ) b b b \u2208B B B score(b b b , c c c global )", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"text": "On the task of DE\u2192EN, the plot shows the log fraction of words vs the log frequency from the selected data returned by different strategies, in which we have a 677K token budget and do warm start with 100K initial bitext. The AL policy here is \u03c0 CS\u2192EN .", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"text": "BLEU scores on tests sets with different selection strategies, the budget is at token level with annotation for 135.45k tokens and 677.25k tokens respectively.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "BLEU scores on tests sets for different language pairs with different selection strategies, the budget is at sentence level with annotation for 10k sentences and 50k sentences respectively.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "The table gives an estimation of the resorted heuristics.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |