{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:33:48.662534Z"
},
"title": "An Effective Optimization Method for Neural Machine Translation: The Case of English-Persian Bilingually Low-Resource Scenario",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Davis",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Raul",
"middle": [],
"last": "Aranovich",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Davis",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a useful optimization method for low-resource Neural Machine Translation (NMT) by investigating the effectiveness of multiple neural network optimization algorithms. Our results confirm that applying the proposed optimization method on English-Persian translation can exceed translation quality compared to the English-Persian Statistical Machine Translation (SMT) paradigm.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a useful optimization method for low-resource Neural Machine Translation (NMT) by investigating the effectiveness of multiple neural network optimization algorithms. Our results confirm that applying the proposed optimization method on English-Persian translation can exceed translation quality compared to the English-Persian Statistical Machine Translation (SMT) paradigm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Employing neural networks in Machine Translation (MT) significantly reduces the time-consuming and laborious operation steps such as word alignment, phrase extraction, feature selection, etc. Although the quality of Neural MT (NMT) models heavily rely on the quantity as well as the quality of the training dataset, considering the low-resource condition, the impact of NMT is still not as much as the Statistical Machine Translation (SMT). NMT has recently achieved great success, which surpasses SMT in many high-resource language pairs, and it has become the MT approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we compare the impact of multiple neural network optimization algorithms under with respect to the low-resource condition, and then, we proposes an effective optimization method for our case-study language pair. The motivation for choosing English and Persian as the case-study is the linguistic differences between these languages, which are from different language families and have significant differences in their properties, may pose a challenge for MT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Following Ahmadnia and Dorr (2019) , lowresource languages are those that have fewer technologies and datasets relative to some measure of their international importance. In simple words, the languages for which bilingual training data is extremely sparse, requiring recourse to techniques that are complementary to standard MT approaches. The biggest issue with low-resource languages is the extreme difficulty of obtaining sufficient resources. Natural Language Processing (NLP) methods that have been created for analysis of lowresource languages are likely to encounter similar issues to those faced by documentary and descriptive linguists whose primary endeavor is the study of minority languages (Ahmadnia et al., 2017) . Lessons learned from such studies are highly informative to NLP researchers who seek to overcome analogous challenges in the computational processing of these types of languages.",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Ahmadnia and Dorr (2019)",
"ref_id": "BIBREF0"
},
{
"start": 703,
"end": 726,
"text": "(Ahmadnia et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our results show that the proposed optimization algorithm for English-Persian NMT works well and improves translation results compared to the English-Persian SMT paradigm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows; Section 2 describes the methodology. The experimental results and analysis are covered by Section 3. Section 4 investigates the previous related work. Conclusions and future work are provided in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NMT originates from sequence-to-sequence learning. So, in this paper, we take the attention-based (Attentional) NMT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2"
},
{
"text": "Attentional NMT (Bahdanau et al., 2015 ) models are divided into three parts;",
"cite_spans": [
{
"start": 16,
"end": 38,
"text": "(Bahdanau et al., 2015",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "\u2022 Encoder that encodes the source sentences into vector sequences as source language representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "\u2022 Decoder that acquires the source context information through attention mechanism and generates target word sequences in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "\u2022 Attention mechanism that connects encoders and decoders to make the whole model interrelated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "In NMT module, a source sentence x = x 1 , x 2 , ..., x J is encoded into an internal representation h = h 1 , h 2 , ..., h J , and then h is decoded into a target sentence y = y 1 , y 2 , ..., y I . For example, to translate an English sentence the dog likes to eat an apple into Persian, each word is transformed into a 1-hot encoding vector (with a single 1 associated with the index of that word, and all other indexed values 0). Each word in the dataset has a distinct 1-hot encoding vector that serves as a numerical representation that serves as input to the model. The first step toward creating these vectors is to assign an index to each unique word in English (as the input language). This process is then repeated for Persian (as the output language). The assignment of an index to each unique word creates a vocabulary for each language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
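{
"text": "As an illustration of the indexing and 1-hot encoding described above, the following library-free Python sketch builds a toy vocabulary and encodes a word; the sentence, helper names, and vocabulary are illustrative assumptions, not the paper's actual implementation.\n\ndef build_vocab(sentences):\n    # Assign an index to each unique word, creating the vocabulary.\n    vocab = {}\n    for sentence in sentences:\n        for word in sentence.split():\n            if word not in vocab:\n                vocab[word] = len(vocab)\n    return vocab\n\ndef one_hot(word, vocab):\n    # A vector of zeros with a single 1 at the word's index.\n    vec = [0] * len(vocab)\n    vec[vocab[word]] = 1\n    return vec\n\nenglish = ['the dog likes to eat an apple']\nen_vocab = build_vocab(english)   # {'the': 0, 'dog': 1, ..., 'apple': 6}\nprint(one_hot('dog', en_vocab))   # [0, 1, 0, 0, 0, 0, 0]\n\nThe same procedure would be repeated for the Persian side to obtain the output-language vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},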
{
"text": "The encoder portion of the NMT model takes a sentence in English and creates a representational vector from this sentence. This vector represents the meaning of the sentence and is subsequently passed to a decoder which outputs the translation of the sentence in Persian. NMT models the conditional probability of the target sentence as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y|x) = I i=1 P (y i |y <i , x)",
"eq_num": "(1)"
}
],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "where y i is the target word emitted by the decoder at step i and y < i = (y 1 , y 2 , ..., y i\u22121 ). The conditional output probability of a target word y i defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "P (y i |y <i , x) = sof tmax (f (d i , y i\u22121 , c i )) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "where f is a non-linear function, d_i = g(d_{i-1}, y_{i-1}, c_i), and g is also a non-linear function. c_i is a context vector computed as the weighted sum of the hidden vectors h_j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i = J j=1 \u03b1 t,j h j ,",
"eq_num": "(3)"
}
],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "where h_j is the annotation of source word x_j, and \\alpha_{i,j} is computed by what is known as the attentional model, which focuses on sub-parts of the sentence during translation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1 ij = exp (score (d i , h j )) J j \u2032 =1 exp (score (d i , h j \u2032 ))",
"eq_num": "(4)"
}
],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "The score function above can be defined in some different ways as discussed by Luong et al. (2015) .",
"cite_spans": [
{
"start": 79,
"end": 98,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
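{
"text": "As a concrete illustration of Equations (3) and (4), the following NumPy sketch computes attention scores with a dot product, one of the score variants discussed by Luong et al. (2015), normalizes them with a softmax, and forms the context vector as the weighted sum of the encoder annotations; the array shapes and variable names are our own assumptions.\n\nimport numpy as np\n\ndef attention(d_i, H):\n    # d_i: decoder state at step i, shape (dim,)\n    # H:   encoder annotations h_1..h_J, shape (J, dim)\n    scores = H @ d_i                         # score(d_i, h_j) as a dot product\n    weights = np.exp(scores - scores.max())  # numerically stable softmax, Eq. (4)\n    weights = weights / weights.sum()\n    context = weights @ H                    # weighted sum of annotations, Eq. (3)\n    return weights, context\n\nd_i = np.random.randn(8)\nH = np.random.randn(5, 8)                    # J = 5 source positions\nalpha, c_i = attention(d_i, H)\nprint(alpha.sum())                           # approximately 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},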
{
"text": "The attention mechanism supports memorization of long source sentences in NMT. Rather than building a single context vector out of the encoder's last hidden state, an attention model creates shortcuts between the context vector and the entire source input. The weights of these shortcut connections are customizable for each output element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "The context vector has access to the entire input sequence-for retention of the full context of the sentence-and controls the alignment between the source and target. Stated simply: the attention mechanism converts two sentences into a matrix where the words of one sentence form the columns, and the words of another sentence form the rows. From this, matches are obtained, thus identifying the relevant and yielding a positive impact on MT. Apart from improving the performance on MT, attention-based networks allow models to learn alignments between different modalities (different data types) for e.g., between speech frames and text or between visual features of a picture and its text description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attention-based NMT",
"sec_num": "2.1"
},
{
"text": "Following Ahmadnia and Dorr (2020) , given a training dataset with N bilingual sentences, an attentional NMT training loss function is defined as the conditional log-likelihood:",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "Ahmadnia and Dorr (2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Loss = N n=1 I i=1 \u2212logP (y n i |y n <i , x n )",
"eq_num": "(5)"
}
],
"section": "Optimization Method",
"sec_num": "2.2"
},
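{
"text": "A minimal sketch of the inner sum of Equation (5) for a single sentence pair, assuming the model has already produced a probability distribution over the target vocabulary at each decoding step; the toy shapes and values below are our own, purely for illustration.\n\nimport numpy as np\n\ndef nll_loss(probs, targets):\n    # probs:   model probabilities P(y_i | y_<i, x), shape (I, vocab_size)\n    # targets: gold target word indices y_1..y_I, shape (I,)\n    picked = probs[np.arange(len(targets)), targets]\n    return -np.log(picked).sum()              # inner sum of Eq. (5)\n\nprobs = np.full((3, 4), 0.25)                 # uniform toy distribution, I = 3\nprint(nll_loss(probs, np.array([0, 1, 2])))   # 3 * -log(0.25), about 4.16\n\nSumming this quantity over the N sentence pairs of the training set gives the full training loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},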
{
"text": "The performance of NMT systems is determined by the method of model optimization. Three typical model optimization methods are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},
{
"text": "\u2022 Adam that combines the best properties of the \"AdaGrad\" and \"RMSProp\" algorithms to provide an optimization algorithm that can handle sparse gradients on noisy problems. Its main advantage is that after offset correction, the learning rate of each iteration has a certain range, which makes the parameters more stable. Also, different parameters have different adaptive learning rates, which are suitable for large-scale data sets and high-dimensional parameter space (Kingma and Ba, 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},
{
"text": "\u2022 Adadelta which is an extension of \"Adagrad\" that reduces its aggressive, monotonically decreasing learning rate. Instead of accumulat-ing all past squared gradients, Adadelta restricts the window of accumulated past gradients to some fixed size. Its advantage is that the learning rate is adaptive, and the experimental results are reasonable. However, the disadvantage is that the convergence speed is slower (Zeiler, 2012).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},
{
"text": "\u2022 Stochastic Gradient Descent (SGD) as an iterative method for optimizing an objective function with suitable smoothness properties can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient by an estimate thereof. Its advantage is that it is simple to implement and the experimental results are more stable and reliable under the appropriate learning rate scheduling scheme. While its disadvantage is that it is difficult to select the appropriate learning rate (Robbins and Monro, 1951) .",
"cite_spans": [
{
"start": 522,
"end": 547,
"text": "(Robbins and Monro, 1951)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},
{
"text": "The SGD method calculates the gradient in each iteration of training corpus, and then updates the model parameters which is the most basic optimization method of neural network model. In this paper, this method refers to the mini-batch gradient descent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},
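{
"text": "For reference, the three optimizers compared above can be instantiated as follows in PyTorch; the linear layer standing in for the NMT model parameters and the learning rates shown are placeholders of ours, not the values used in the experiments.\n\nimport torch\n\nmodel = torch.nn.Linear(512, 1024)   # stand-in for the NMT model parameters\n\nadam = torch.optim.Adam(model.parameters(), lr=1e-3)\nadadelta = torch.optim.Adadelta(model.parameters(), lr=1.0)\nsgd = torch.optim.SGD(model.parameters(), lr=0.5)\n\n# One mini-batch step with SGD: forward pass, backpropagation, update, reset.\nloss = model(torch.randn(80, 512)).pow(2).mean()\nloss.backward()\nsgd.step()\nsgd.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},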
{
"text": "In the following section, we compare the different model optimization methods and make a comparative analysis of their application impact on English-Persian attentional NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimization Method",
"sec_num": "2.2"
},
{
"text": "For the experiments, we utilized TEP 1 (Pilevar et al., 2011) English-Persian parallel corpus that contains about 594K sentences. We allocated \u2248550K sentences to training step, \u224810K sentences to validation step, and \u224830K sentences to testing step. We employed Byte Pair Encoding (BPE) (Sennrich et al., 2016b) as an effective way to overcome the unknown word problem in standard NMT. In the experiments, we limited the vocabulary size to the most frequent 10K tokens and replacing the rest with a special token <UNK>. We accelerate training by discarding all sentences with more than 30 elements (either BPE units or actual tokens). The vector dimension of bilingual words is 512, the size of hidden layer is 1024, the beam size is 10, the size of mini-batch is 80, and the dropout of output layer is set to 0.1. In order to reduce the problem of unlisted words, the size of Persian and English dictionaries is set to 20K to cover about 95% words. In order to reduce fitting, we set epoch, as the maximum number of training rounds, to 60. BLEU (Papineni et al., 2001 ) is our standard evaluation metric.",
"cite_spans": [
{
"start": 1044,
"end": 1066,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
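{
"text": "The vocabulary truncation, <UNK> replacement, and length filtering described above can be sketched in plain Python as follows; this word-level illustration uses our own helper names and merely stands in for the actual BPE-based pipeline of Sennrich et al. (2016b).\n\nfrom collections import Counter\n\ndef preprocess(sentences, vocab_size=10000, max_len=30):\n    # Keep the most frequent tokens, map the rest to <UNK>,\n    # and discard sentences with more than max_len elements.\n    counts = Counter(tok for s in sentences for tok in s.split())\n    keep = {tok for tok, _ in counts.most_common(vocab_size)}\n    out = []\n    for s in sentences:\n        toks = s.split()\n        if len(toks) > max_len:\n            continue\n        out.append(' '.join(t if t in keep else '<UNK>' for t in toks))\n    return out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},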
{
"text": "We employed the following experimental systems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "\u2022 Moses: We adapted the baseline system on top of Moses (Koehn et al., 2007) as a standard phrase-based SMT.",
"cite_spans": [
{
"start": 56,
"end": 76,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "\u2022 RNNSearch: Which is compared with the method using by the paper under the same experimental setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "\u2022 Mantis: We employed Mantis (Cohn et al., 2016) on top of DyNet as the attentional NMT open source system. Its cycle unit is Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) that in which, the default parameter configuration is used.",
"cite_spans": [
{
"start": 29,
"end": 48,
"text": "(Cohn et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 156,
"end": 190,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "\u2022 Mantis+Adadelta/SGD/Adam: which is used as the optimization method of model parameters for English-Persian NMT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "Mantis+SGD+DC represents learning rate Decay when the iteration exceeds 40 rounds, and the decay rate is 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
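{
"text": "Under one plausible reading of this setup, the Mantis+SGD+DC schedule can be sketched as below; the base learning rate of 0.5 is our own placeholder, and only the decay behaviour after round 40 is taken from the description above.\n\ndef learning_rate(epoch, base_lr=0.5, decay=0.1, decay_after=40):\n    # Keep the base rate for the first 40 rounds, then multiply it by the decay rate.\n    return base_lr if epoch <= decay_after else base_lr * decay\n\nprint(learning_rate(40))   # 0.5\nprint(learning_rate(41))   # 0.05",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},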
{
"text": "As seen in Table 1 , the translation impacts of Model 3, Model 4 and Model 8 are lower than SMT (Model 1). It shows that English-Persian attention-based NMT is ineffective considering the low-resource conditions. Also, the results conform to the characteristics of the general low-resource NMT systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "The results of Model 4, Model 5, Model 6, Model 8, and Model 10 demonstrate that increasing learning rate can extremely improve the English-Persian translation quality where when the optimization method of SGD and Adam were employed. However, when the learning rate is too high, the performance of translation system will be reduced. Therefore, it can be seen in the case of low-resource conditions, the actual NMT system is sensitive in various model optimization methods and corresponding learning rates. So, selecting the appropriate model optimization method and learning rate has a great influence on the final translation results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "Furthermore, Model 7 has achieved the highest translation impact, which surpasses the SMT system using Moses and the NMT system using RNNSearch. It can be found that English-Persian NMT still achieves better translation results by adopting higher learning rates and learning rate scheduling strategies with fewer corpus when choosing appropriate model optimization methods. Figure 1 shows the convergence curves of different system models.",
"cite_spans": [],
"ref_spans": [
{
"start": 374,
"end": 382,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "ect can be achieved. Also, from the learning curve, it can be found that the system using Adadelta optimization method converges slowly, and the corresponding translation effect is the worst. Adam optimization method converges quickly. The SGD optimization method uses a large learning rate and achieves the effect of Adam optimization method in about 26 rounds. When the execution learning rate decreases, the translation performance can further be improved, and ultimately the best translation impact can be achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "To construct pseudo bilingual corpus, various useful methods have already been proposed; 1) Back-translation (Sennrich et al., 2016a) , 2) Dual learning (He et al., 2016) , and 3) Round-tripping (Ahmadnia et al., 2018; Ahmadnia and Dorr, 2019) . Also, integrating additional language models to use monolingual corpus (Zoph et al., 2016) , using transfer learning to transfer the model of high-resource language pairs to low-resource ones, etc. The core idea of the mentioned approaches is to integrate more external resources so that the NMT model can sufficiently acquire translation knowledge and augment translation quality. Although the above methods have practically achieved remarkable results, the disadvantage is that the application effect is limited by the quality of external (generated) sentences.",
"cite_spans": [
{
"start": 109,
"end": 133,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF14"
},
{
"start": 153,
"end": 170,
"text": "(He et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 195,
"end": 218,
"text": "(Ahmadnia et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 219,
"end": 243,
"text": "Ahmadnia and Dorr, 2019)",
"ref_id": "BIBREF0"
},
{
"start": 317,
"end": 336,
"text": "(Zoph et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In contrast the existing work, this paper compares various optimization methods of NMT models, and proposes a translation model optimization method which is useful in low-resource condition to enhance the effectiveness of English-Persian NMT. Our method does not employ any additional (generated) resources and has certain generality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In this paper, an effective optimization method for bilingually low-resource NMT models was applied to English-Persian translation. The investigated optimization method significantly enhances the impact of the English-Persian NMT system, and surpasses the SMT system and the previous similar work, which achieves the best translation results. Noting worth that our optimization method not only does not depend on external resources but also it has language independence. As a future work, we want to investigate other methods of NMT to enhance effects in low-resource conditions, and the application of this method to other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "http://opus.nlpl.eu/TEP.php",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to acknowledge the financial support received from the Department of Linguistics at UC Davis (USA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Augmenting neural machine translation through roundtrip training approach",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 2019,
"venue": "Open Computer Science",
"volume": "9",
"issue": "1",
"pages": "268--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benyamin Ahmadnia and Bonnie J. Dorr. 2019. Aug- menting neural machine translation through round- trip training approach. Open Computer Science, 9(1):268-278.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Impact of a new word embedding cost function on farsi-spanish low-resource neural machine translation",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Dorr",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Thirty-Third International FLAIRS Conference",
"volume": "",
"issue": "",
"pages": "222--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benyamin Ahmadnia and Bonnie J. Dorr. 2020. Im- pact of a new word embedding cost function on farsi-spanish low-resource neural machine transla- tion. In Proceedings of the Thirty-Third Interna- tional FLAIRS Conference, pages 222-227.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical machine translation for bilingually low-resource scenarios: A roundtripping approach",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Serrano",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE 5th International Congress on Information Science and Technology",
"volume": "",
"issue": "",
"pages": "261--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benyamin Ahmadnia, Gholamreza Haffari, and Javier Serrano. 2018. Statistical machine translation for bilingually low-resource scenarios: A round- tripping approach. In Proceedings of the IEEE 5th International Congress on Information Science and Technology, pages 261-265.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Persian-Spanish low-resource statistical machine translation through English as pivot language",
"authors": [
{
"first": "Benyamin",
"middle": [],
"last": "Ahmadnia",
"suffix": ""
},
{
"first": "Javier",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "24--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benyamin Ahmadnia, Javier Serrano, and Gholamreza Haffari. 2017. Persian-Spanish low-resource statisti- cal machine translation through English as pivot lan- guage. In Proceedings of Recent Advances in Natu- ral Language Processing, pages 24-30.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Incorporating structural alignment biases into an attentional neural translation model",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Cong Duy Vu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Vymolova",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "876--885",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vy- molova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment bi- ases into an attentional neural translation model. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 876-885.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dual learning for machine translation",
"authors": [
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learn- ing for machine translation. In Proceedings of the 30th Conference on Neural Information Processing Systems.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion, pages 177-180.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Addressing the rare word problem in neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "11--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Ilya Sutskever, Quoc Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Pro- ceedings of the 53rd Annual Meeting of the Associ- ation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing, pages 11-19.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computa- tional Linguistics, pages 311-318.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Tep: Tehran englishpersian parallel corpus",
"authors": [
{
"first": "Mohammad Taher",
"middle": [],
"last": "Pilevar",
"suffix": ""
},
{
"first": "Heshaam",
"middle": [],
"last": "Faili",
"suffix": ""
},
{
"first": "Abdol Hamid",
"middle": [],
"last": "Pilevar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 12th International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "68--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilevar, Heshaam Faili, and Ab- dol Hamid Pilevar. 2011. Tep: Tehran english- persian parallel corpus. In Proceedings of 12th In- ternational Conference on Intelligent Text Process- ing and Computational Linguistics, pages 68-79.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A stochastic approximation method",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Robbins",
"suffix": ""
},
{
"first": "Sutton",
"middle": [],
"last": "Monro",
"suffix": ""
}
],
"year": 1951,
"venue": "Annals of Mathematical Statistics",
"volume": "22",
"issue": "",
"pages": "400--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Robbins and Sutton Monro. 1951. A stochas- tic approximation method. Annals of Mathematical Statistics, 22:400-407.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics, pages 86-96.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics, pages 1715-1725.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Adadelta: An adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. Adadelta: An adaptive learn- ing rate method. CoRR, abs/1212.5701.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Contrast of the convergence rates of various translation systems."
}
}
}
}