{
"paper_id": "N18-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:48:55.903825Z"
},
"title": "Universal Neural Machine Translation for Extremely Low Resource Languages",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Hong Kong \u2021 Microsoft Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Hong Kong \u2021 Microsoft Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Hong Kong \u2021 Microsoft Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Victor",
"middle": [
"O K"
],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Hong Kong \u2021 Microsoft Research",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a new universal machine translation approach focusing on languages with a limited amount of parallel data. Our proposed approach utilizes a transfer-learning approach to share lexical and sentence level representations across multiple source languages into one target language. The lexical part is shared through a Universal Lexical Representation to support multilingual word-level sharing. The sentencelevel sharing is represented by a model of experts from all source languages that share the source encoders with all other languages. This enables the low-resource language to utilize the lexical and sentence representations of the higher resource languages. Our approach is able to achieve 23 BLEU on Romanian-English WMT2016 using a tiny parallel corpus of 6k sentences, compared to the 18 BLEU of strong baseline system which uses multilingual training and back-translation. Furthermore, we show that the proposed approach can achieve almost 20 BLEU on the same dataset through fine-tuning a pre-trained multilingual system in a zero-shot setting.",
"pdf_parse": {
"paper_id": "N18-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a new universal machine translation approach focusing on languages with a limited amount of parallel data. Our proposed approach utilizes a transfer-learning approach to share lexical and sentence level representations across multiple source languages into one target language. The lexical part is shared through a Universal Lexical Representation to support multilingual word-level sharing. The sentencelevel sharing is represented by a model of experts from all source languages that share the source encoders with all other languages. This enables the low-resource language to utilize the lexical and sentence representations of the higher resource languages. Our approach is able to achieve 23 BLEU on Romanian-English WMT2016 using a tiny parallel corpus of 6k sentences, compared to the 18 BLEU of strong baseline system which uses multilingual training and back-translation. Furthermore, we show that the proposed approach can achieve almost 20 BLEU on the same dataset through fine-tuning a pre-trained multilingual system in a zero-shot setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural Machine Translation (NMT) (Bahdanau et al., 2015) has achieved remarkable translation quality in various on-line large-scale systems (Wu et al., 2016; Devlin, 2017) as well as achieving state-of-the-art results on Chinese-English translation (Hassan et al., 2018) . With such large systems, NMT showed that it can scale up to immense amounts of parallel data in the order of tens of millions of sentences. However, such data is not widely available for all language pairs and domains.",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 140,
"end": 157,
"text": "(Wu et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 158,
"end": 171,
"text": "Devlin, 2017)",
"ref_id": "BIBREF6"
},
{
"start": 249,
"end": 270,
"text": "(Hassan et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel universal multilingual NMT approach focusing mainly on low resource languages to overcome the limitations of NMT and leverage the capabilities of multi-lingual NMT in such scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach utilizes multi-lingual neural translation system to share lexical and sentence level representations across multiple source languages into one target language. In this setup, some of the source languages may be of extremely limited or even zero data. The lexical sharing is represented by a universal word-level representation where various words from all source languages share the same underlaying representation. The sharing module utilizes monolingual embeddings along with seed parallel data from all languages to build the universal representation. The sentence-level sharing is represented by a model of language experts which enables low-resource languages to utilize the sentence representation of the higher resource languages. This allows the system to translate from any language even with tiny amount of parallel resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate the proposed approach on 3 different languages with tiny or even zero parallel data. We show that for the simulated \"zero-resource\" settings, our model can consistently outperform a strong multi-lingual NMT baseline with a tiny amount of parallel sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Neural Machine Translation (NMT) (Bahdanau et al., 2015; Sutskever et al., 2014) is based on Sequence-to-Sequence encoder-decoder model along with an attention mechanism to enable better handling of longer sentences (Bahdanau et al., 2015) . Attentional sequence-to-sequence models are modeling the log conditional probability of the Figure 1 : BLEU scores reported on the test set for Ro-En. The amount of training data effects the translation performance dramatically using a single NMT model. translation Y given an input sequence X. In general, the NMT system \u03b8 consists of two components: an encoder \u03b8 e which transforms the input sequence into an array of continuous representations, and a decoder \u03b8 d that dynamically reads the encoder's output with an attention mechanism and predicts the distribution of each target word. Generally, \u03b8 is trained to maximize the likelihood on a training set consisting of N parallel sentences:",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 57,
"end": 80,
"text": "Sutskever et al., 2014)",
"ref_id": "BIBREF22"
},
{
"start": 216,
"end": 239,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L (\u03b8) = 1 N N n=1 log p Y (n) |X (n) ; \u03b8 = 1 N N n=1 T t=1 log p y (n) t |y (n) 1:t\u22121 , f att t (h (n) 1:Ts )",
"eq_num": "(1)"
}
],
"section": "Motivation",
"sec_num": "2"
},
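To make Eq. (1) concrete, the following sketch (illustrative only, not the authors' released code; plain NumPy with made-up array shapes) computes the average per-sentence log-likelihood from already-computed decoder output distributions.

import numpy as np

def sentence_log_likelihood(probs, target_ids):
    # probs: [T, V] decoder output distributions p(y_t | y_<t, x);
    # target_ids: length-T list of gold token ids. Returns sum_t log p(y_t).
    return float(sum(np.log(probs[t, y]) for t, y in enumerate(target_ids)))

def corpus_objective(batch):
    # batch: list of (probs, target_ids) pairs for N parallel sentences.
    # Implements L(theta) of Eq. (1) as an average over the N sentences.
    return sum(sentence_log_likelihood(p, y) for p, y in batch) / len(batch)

# toy usage: one 3-token sentence over a 4-word vocabulary
probs = np.full((3, 4), 0.25)
print(corpus_objective([(probs, [0, 2, 3])]))  # 3 * log(0.25), roughly -4.16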
{
"text": "where at each step, f att t builds the attention mechanism over the encoder's output h 1:Ts . More precisely, let the vocabulary size of source words as V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "h 1:Ts = f ext e x 1 , ..., e x Ts , e x = E I (x) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "where E I \u2208 R V \u00d7d is a look-up table of source embeddings, assigning each individual word a unique embedding vector; f ext is a sentencelevel feature extractor and is usually implemented by a multi-layer bidirectional RNN (Bahdanau et al., 2015; Wu et al., 2016) , recent efforts also achieved the state-of-the-art using non-recurrence f ext , e.g. ConvS2S (Gehring et al., 2017) and Transformer (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 223,
"end": 246,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 247,
"end": 263,
"text": "Wu et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 358,
"end": 380,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 397,
"end": 419,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
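As a sketch of Eq. (2) only (the look-up table is random and f_ext below is a trivial stand-in for the bidirectional RNN or Transformer encoders cited above), the snippet maps source token ids to embeddings and then to contextual states.

import numpy as np

V, d = 1000, 8                       # assumed vocabulary size and embedding width
E_I = np.random.randn(V, d)          # look-up table of source embeddings, E_I in R^{V x d}

def f_ext(embeddings):
    # Stand-in for a sentence-level feature extractor: average each vector
    # with its neighbours to give a minimal amount of context mixing.
    padded = np.pad(embeddings, ((1, 1), (0, 0)))
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

x = [17, 256, 3]                     # source word ids x_1 .. x_Ts
e_x = E_I[x]                         # e_x = E_I(x), right-hand side of Eq. (2)
h = f_ext(e_x)                       # h_{1:Ts}, later read by the attention
print(h.shape)                       # (3, 8)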
{
"text": "Extremely Low-Resource NMT Both \u03b8 e and \u03b8 d should be trained to converge using parallel training examples. However, the performance is highly correlated to the amount of training data. As shown in Figure. 1, the system cannot achieve reasonable translation quality when the number of the parallel examples is extremely small (N \u2248 13k sentences, or not available at all N = 0). Lee et al. (2017) and Johnson et al. (2017) have shown that NMT is quite efficient for multilingual machine translation. Assuming the translation from K source languages into one target language, a system is trained with maximum likelihood on the mixed parallel pairs {X (n,k) ",
"cite_spans": [
{
"start": 378,
"end": 395,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 400,
"end": 421,
"text": "Johnson et al. (2017)",
"ref_id": "BIBREF12"
},
{
"start": 649,
"end": 654,
"text": "(n,k)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Figure.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": ", Y (n,k) } n=1...N k k=1...K , that is L (\u03b8) = 1 N K k=1 N k n=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-lingual NMT",
"sec_num": null
},
{
"text": "log p Y (n,k) |X (n,k) ; \u03b8 3where",
"cite_spans": [
{
"start": 17,
"end": 22,
"text": "(n,k)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-lingual NMT",
"sec_num": null
},
{
"text": "N = K k=1 N k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-lingual NMT",
"sec_num": null
},
{
"text": "As the input layer, the system assumes a multilingual vocabulary which is usually the union of all source language vocabularies with a total size as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-lingual NMT",
"sec_num": null
},
{
"text": "V = K k=1 V k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-lingual NMT",
"sec_num": null
},
{
"text": "In practice, it is essential to shuffle the multilingual sentence pairs into mini-batches so that different languages can be trained equally. Multi-lingual NMT is quite appealing for low-resource languages; several papers highlighted the characteristic that make it a good fit for that such as Lee et al. (2017) , Johnson et al. 2017, Zoph et al. (2016) and Firat et al. (2016) . Multi-lingual NMT utilizes the training examples of multiple languages to regularize the models avoiding over-fitting to the limited data of the smaller languages. Moreover, the model transfers the translation knowledge from high-resource languages to low-resource ones. Finlay, the decoder part of the model is sufficiently trained since it shares multilingual examples from all languages.",
"cite_spans": [
{
"start": 294,
"end": 311,
"text": "Lee et al. (2017)",
"ref_id": "BIBREF16"
},
{
"start": 335,
"end": 353,
"text": "Zoph et al. (2016)",
"ref_id": "BIBREF25"
},
{
"start": 358,
"end": 377,
"text": "Firat et al. (2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-lingual NMT",
"sec_num": null
},
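The multi-lingual objective in Eq. (3) simply pools sentence pairs from all K source languages; the sketch below (hypothetical data layout, not the released training code) illustrates the shuffling into language-mixed mini-batches described above.

import random

def mixed_minibatches(corpora, batch_size=128, seed=0):
    # corpora: dict mapping a language code to a list of (src, tgt) pairs.
    # Pools all K corpora, shuffles, and yields mixed mini-batches so that
    # every batch can contain examples from several source languages.
    pool = [(lang, pair) for lang, pairs in corpora.items() for pair in pairs]
    random.Random(seed).shuffle(pool)
    for i in range(0, len(pool), batch_size):
        yield pool[i:i + batch_size]

corpora = {"es": [("hola", "hello")] * 300, "ro": [("salut", "hello")] * 6}
first = next(mixed_minibatches(corpora, batch_size=8))
print([lang for lang, _ in first])   # a mix dominated by the larger corpus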
{
"text": "Despite the success of training multi-lingual NMT systems; there are a couple of challenges to leverage them for zero-resource languages:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges",
"sec_num": "2.1"
},
{
"text": "Lexical-level Sharing Conventionally, a multilingual NMT model has a vocabulary that represents the union of the vocabularies of all source languages. Therefore, the multi-lingual words do not practically share the same embedding space since each word has its own representation. This does not pose a problem for languages with sufficiently large amount of data, yet it is a major limitation for extremely low resource languages since most of the vocabulary items will not have enough, if any, training examples to get a reliably trained models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges",
"sec_num": "2.1"
},
{
"text": "A possible solution is to share the surface form of all source languages through sharing sub-units such as subwords (Sennrich et al., 2016b) or characters (Kim et al., 2016; Luong and Manning, 2016; Lee et al., 2017) . However, for an arbitrary lowresource language we cannot assume significant overlap in the lexical surface forms compared to the high-resource languages. The low-resource language may not even share the same character set as any high-resource language. It is crucial to create a shared semantic representation across all languages that does not rely on surface form overlap.",
"cite_spans": [
{
"start": 116,
"end": 140,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF19"
},
{
"start": 155,
"end": 173,
"text": "(Kim et al., 2016;",
"ref_id": "BIBREF13"
},
{
"start": 174,
"end": 198,
"text": "Luong and Manning, 2016;",
"ref_id": "BIBREF17"
},
{
"start": 199,
"end": 216,
"text": "Lee et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges",
"sec_num": "2.1"
},
{
"text": "Sentence-level Sharing It is also crucial for lowresource languages to share source sentence representation with other similar languages. For example, if a language shares syntactic order with another language it should be feasible for the lowresource language to share such representation with another high recourse language. It is also important to utilize monolingual data to learn such representation since the low or zero resource language may have monolingual resources only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Challenges",
"sec_num": "2.1"
},
{
"text": "We propose a Universal NMT system that is focused on the scenario where minimal parallel sentences are available. As shown in Fig. 2 , we introduce two components to extend the conventional multi-lingual NMT system (Johnson et al., 2017) : Universal Lexical Representation (ULR) and Mixture of Language Experts (MoLE) to enable both word-level and sentence-level sharing, respectively.",
"cite_spans": [
{
"start": 215,
"end": 237,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 126,
"end": 132,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Universal Neural Machine Translation",
"sec_num": "3"
},
{
"text": "As we highlighted above, it is not straightforward to have a universal representation for all languages. One potential approach is to use a shared source vocabulary, but this is not adequate since it assumes significant surface-form overlap in order being able to generalize between high-resource and low-resource languages. Alternatively, we could train monolingual embeddings in a shared space and use these as the input to our MT system. However, since these embeddings are trained on a monolingual objective, they will not be optimal for an NMT objective. If we simply allow them to change during NMT training, then this will not generalize to the low-resource language where many of the words are unseen in the parallel data. Therefore, our goal is to create a shared embedding space which (a) is trained towards NMT rather than a monolingual objective, (b) is not based on lexical surface forms, and (c) will generalize from the highresource languages to the low-resource language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Universal Lexical Representation (ULR)",
"sec_num": "3.1"
},
{
"text": "We propose a novel representation for multilingual embedding where each word from any language is represented as a probabilistic mixture of universal-space word embeddings. In this way, semantically similar words from different languages will naturally have similar representations. Our method achieves this utilizing a discrete (but probabilistic) \"universal token space\", and then learning the embedding matrix for these universal tokens directly in our NMT training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Universal Lexical Representation (ULR)",
"sec_num": "3.1"
},
{
"text": "We first define a discrete universal token set of size M into which all source languages will be projected. In principle, this could correspond to any human or symbolic language, but all experiments here use English as the basis for the universal token space. As shown in Figure 2 , we have multiple embedding representations. E Q is language-specific embedding trained on monolingual data and E K is universal tokens embedding. The matrices E K and E Q are created beforehand and are not trainable during NMT training. E U is the embedding matrix for these universal tokens which is learned during our NMT training. It is worth noting that shaded parts in Figure2 are trainable during NMT training process.",
"cite_spans": [],
"ref_spans": [
{
"start": 272,
"end": 280,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
{
"text": "Therefore, each source word e x is represented as a mixture of universal tokens M of E U .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e x = M i=1 E U (u i ) \u2022 q(u i |x)",
"eq_num": "(4)"
}
],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
{
"text": "where E U is an NMT embedding matrix, which is learned during NMT training. The mapping q projects the multilingual words into the universal space based on their semantic similarity. That is, q(u|x) is a distribution based on the distance D s (u, x) between u and x as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
{
"text": "q(u i |x) = e D(u i ,x)/\u03c4 u j e D(u j ,x)/\u03c4 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
{
"text": "where \u03c4 is a temperature and D(u i , x) is a scalar score which represents the similarity between source word x and universal token u i :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D(u, x) = E K (u) \u2022 A \u2022 E Q (x) T",
"eq_num": "(6)"
}
],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
{
"text": "where E K (u) is the \"key\" embedding of word u, E Q (x) is the \"query\" embedding of source word x. The transformation matrix A, which is initialized to the identity matrix, is learned during NMT training and shared across all languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
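Putting Eqs. (4)-(6) together, here is a minimal NumPy sketch of the ULR lookup (matrix names mirror the paper, but the random initialization and the sizes are placeholders): a query embedding is scored against the universal keys through A, softmaxed with temperature tau, and used to mix the trainable universal embeddings E_U.

import numpy as np

M, d = 5, 4                          # number of universal tokens, embedding dim
E_K = np.random.randn(M, d)          # fixed "key" embeddings of universal tokens
E_U = np.random.randn(M, d)          # trainable NMT embeddings of universal tokens
A = np.eye(d)                        # transformation matrix, initialized to identity
tau = 0.05                           # temperature value used in the experiments

def ulr_embedding(e_q):
    # e_q: monolingual "query" embedding E_Q(x) of a source word x.
    # Returns (q(.|x), e_x) following Eqs. (4)-(6).
    scores = E_K @ A @ e_q           # D(u_i, x) for every universal token u_i
    scores = scores / tau
    q = np.exp(scores - scores.max())
    q = q / q.sum()                  # softmax with temperature, Eq. (5)
    e_x = q @ E_U                    # mixture of universal embeddings, Eq. (4)
    return q, e_x

q, e_x = ulr_embedding(np.random.randn(d))
print(q.round(3), e_x.shape)         # distribution over universal tokens, (4,)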
{
"text": "This is a key-value representation, where the queries are the monolingual language-specific embedding, the keys are the universal tokens embeddings and the values are a probabilistic distribution over the universal NMT embeddings. This can represent unlimited multi-lingual vocabulary that has never been observed in the parallel training data. It is worth noting that the trainable transformation matrix A is added to the query matching mechanism with the main purpose to tune the similarity scores towards the translation task. A is shared across all languages and optimized discriminatively during NMT training such that the system can fine-tune the similarity score q() to be optimal for NMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexicon Mapping to the Universal Token Space",
"sec_num": null
},
{
"text": "In general, we create one E Q matrix per source language, as well as a single E K matrix in our universal token language. For Equation 6 to make sense and generalize across language pairs, all of these embedding matrices must live in a similar semantic space. To do this, we first train off-the-shelf monolingual word embeddings in each language, and then learn one projection matrix per source language which maps the original monolingual embeddings into E K space. Typically, we need a list of source word -universal token pairs (seeds S k ) to train the projection matrix for language k. Since vectors are normalized, learning the optimal projection is equivalent to finding an orthogonal transformation O k that makes the projected word vectors as close as to its corresponded universal tokens:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "max O k (x,\u1ef9)\u2208S k E Q k (x) \u2022 O k \u2022 E K (\u1ef9) T s.t. O T k O k = I, k = 1, ..., K",
"eq_num": "(7)"
}
],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
{
"text": "which can be solved by SVD decomposition based on the seeds (Smith et al., 2017) . In this paper, we chose to use a short list of seeds from automatic word-alignment of parallel sentences to learn the projection. However, recent efforts (Artetxe et al., 2017; Conneau et al., 2018 ) also showed that it is possible to learn the transformation without any seeds, which makes it feasible for our proposed method to be utilized in purely zero parallel resource cases. It is worth noting that O k is a language-specific matrix which maps the monolingual embeddings of each source language into a similar semantic space as the universal token language.",
"cite_spans": [
{
"start": 60,
"end": 80,
"text": "(Smith et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 237,
"end": 259,
"text": "(Artetxe et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 260,
"end": 280,
"text": "Conneau et al., 2018",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
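Eq. (7) is an orthogonal Procrustes problem and, as noted above, has a closed-form solution via SVD (Smith et al., 2017). A minimal NumPy sketch with hypothetical seed matrices X (row-normalized source-language embeddings) and Y (the embeddings of their aligned universal tokens):

import numpy as np

def learn_projection(X, Y):
    # X: [S, d] monolingual embeddings of the seed source words,
    # Y: [S, d] embeddings of their aligned universal tokens.
    # Returns the orthogonal O_k maximizing sum_i X[i] @ O_k @ Y[i]^T.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt                    # orthogonal by construction

# toy check: recover a random rotation from clean seed pairs
d, S = 4, 50
R, _ = np.linalg.qr(np.random.randn(d, d))
X = np.random.randn(S, d)
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = X @ R
O = learn_projection(X, Y)
print(np.allclose(O, R, atol=1e-6))  # True: the learned projection matches R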
{
"text": "Interpolated Embeddings Certain lexical categories (e.g. function words) are poorly captured by Equation 4. Luckily, function words often have very high frequency, and can be estimated robustly from even a tiny amount of data. This motivates an interpolated e x where embeddings for very frequent words are optimized directly and not through the universal tokens:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
{
"text": "\u03b1(x)E I (x) + \u03b2(x) M i=1 E U (u i ) \u2022 q(u i |x) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
{
"text": "Where E I (x) is a language-specific embedding of word x which is optimized during NMT training. In general, we set \u03b1(x) to 1.0 for the top k most frequent words in each language, and 0.0 otherwise, where k is set to 500 in this work. It is worth noting that we do not use an absolute frequency cutoff because this would cause a mismatch between highresource and low-resource languages, which we want to avoid. We keep \u03b2(x) fixed to 1.0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
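The interpolation in Eq. (8) then reduces to a per-word switch; a small sketch under the stated setting (the word list and embeddings are made up, and e_ulr stands for the ULR mixture of Eq. (4)):

import numpy as np

def interpolated_embedding(x, e_ulr, E_I_lang, top_k_words):
    # x: surface form, e_ulr: its ULR mixture embedding (Eq. 4),
    # E_I_lang: dict of directly optimized language-specific embeddings,
    # top_k_words: the k (= 500 in this work) most frequent words of the language.
    alpha = 1.0 if x in top_k_words else 0.0   # alpha(x) as described above
    beta = 1.0                                 # beta(x) is kept fixed to 1.0
    direct = E_I_lang.get(x, np.zeros_like(e_ulr))
    return alpha * direct + beta * e_ulr

e_ulr = np.random.randn(4)
E_I_lang = {"si": np.random.randn(4)}
print(interpolated_embedding("si", e_ulr, E_I_lang, {"si", "de", "la"}).shape)  # (4,)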
{
"text": "An Example To give a concrete example, imagine that our target language is English (En), our high-resource auxiliary source languages are Spanish (Es) and French (Fr), and our low-resource source language is Romanian (Ro). En is also used for the universal token set. We assume to have 10M+ parallel Es-En and Fr-En, and a few thousand in Ro-En. We also have millions of monolingual sentences in each language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
{
"text": "We first train word2vec embeddings on monolingual corpora from each of the four languages. We next align the Es-En, Fr-En, and Ro-En parallel corpora and extract a seed dictionary of a few hundred words per language, e.g., gato \u2192 cat, chien \u2192 dog. We then learn three matrices O 1 , O 2 , O 3 to project the Es, Fr and Ro embed-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
{
"text": "dings (E Q 1 , E Q 2 , E Q 3 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
{
"text": ", into En (E K ) based on these seed dictionaries. At this point, Equation 5 should produce reasonable alignments between the source languages and En, e.g., q(horse|magar) = 0.5, q(donkey|magar) = 0.3, q(cow|magar) = 0.2, where magar is the Ro word for donkey.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Monolingual Embeddings",
"sec_num": null
},
{
"text": "As we paved the road for having a universal embedding representation; it is crucial to have a languagesensitive module for the encoder that would help in modeling various language structures which may vary between different languages. We propose a Mixture of Language Experts (MoLE) to model the sentence-level universal encoder. As shown in Fig. 2 , an additional module of mixture of experts is used after the last layer of the encoder. Similar to , we have a set of expert networks and a gating network to control the weight of each expert. More precisely, we have a set of expert networks as f 1 (h), ..., f K (h) where for each expert, a two-layer feed-forward network which reads the output hidden states h of the encoder is utilized. The output of the MoLE module h will be a weighted sum of these experts to replace the encoder's representation:",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 348,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Mixture of Language Experts (MoLE)",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h = K k=1 f k (h) \u2022 softmax(g(h)) k ,",
"eq_num": "(9)"
}
],
"section": "Mixture of Language Experts (MoLE)",
"sec_num": "3.2"
},
{
"text": "where an one-layer feed-forward network g(h) is used as a gate to compute scores for all the experts. In our case, we create one expert per auxiliary language. In other words, we train to only use expert f i when training on a parallel sentence from auxiliary language i. Assume the language 1...K \u2212 1 are the auxiliary languages. That is, we have a multi-task objective as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture of Language Experts (MoLE)",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L gate = K\u22121 k=1 N k n=1 log [softmax (g(h)) k ]",
"eq_num": "(10)"
}
],
"section": "Mixture of Language Experts (MoLE)",
"sec_num": "3.2"
},
{
"text": "We do not update the MoLE module for training on a sentence from the low-resource language. Intuitively, this allows us to represent each token in the low-resource language as a context-dependent mixture of the auxiliary language experts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mixture of Language Experts (MoLE)",
"sec_num": "3.2"
},
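A compact sketch of MoLE as described in Eqs. (9)-(10); the shapes, the random initialization, and the two-layer experts are placeholders rather than the actual released implementation.

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class MoLE:
    def __init__(self, d, num_experts, seed=0):
        rng = np.random.default_rng(seed)
        # one two-layer feed-forward expert per auxiliary language
        self.experts = [(rng.standard_normal((d, d)) * 0.1,
                         rng.standard_normal((d, d)) * 0.1)
                        for _ in range(num_experts)]
        self.gate = rng.standard_normal((d, num_experts)) * 0.1  # g(h)

    def __call__(self, h):
        # h: [T, d] encoder states. Returns the expert-mixed states of Eq. (9).
        gates = softmax(h @ self.gate)                     # [T, K]
        outputs = np.stack([np.tanh(h @ W1) @ W2           # f_k(h), [K, T, d]
                            for W1, W2 in self.experts])
        # During training on auxiliary language i, Eq. (10) additionally
        # maximizes log gates[:, i]; the gate loss is skipped for the
        # low-resource language, as stated above.
        return np.einsum("tk,ktd->td", gates, outputs)

mole = MoLE(d=8, num_experts=5)
print(mole(np.random.randn(3, 8)).shape)                   # (3, 8)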
{
"text": "We extensively study the effectiveness of the proposed methods by evaluating on three \"almost-zeroresource\" language pairs with variant auxiliary languages. The vanilla single-source NMT and the multi-lingual NMT models are used as baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Dataset We empirically evaluate the proposed Universal NMT system on 3 languages -Romanian (Ro) / Latvian (Lv) / Korean (Ko) -translating to English (En) in near zero-resource settings. To achieve this, single or multiple auxiliary languages from Czech (Cs), German (De), Greek (El), Spanish (Es), Finnish (Fi), French (Fr), Italian (It), Portuguese (Pt) and Russian (Ru) are jointly trained. The detailed statistics and sources of the available parallel resource can be found in Table 1 , where we further down-sample the corpora for the targeted languages to simulate zero-resource.",
"cite_spans": [],
"ref_spans": [
{
"start": 480,
"end": 487,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "It also requires additional large amount of monolingual data to obtain the word embeddings for each language, where we use the latest Wikipedia dumps 5 for all the languages. Typically, the monolingual corpora are much larger than the parallel corpora. For validation and testing, the standard validation and testing sets are utilized for each targeted language. Preprocessing All the data (parallel and monolingual) have been tokenized and segmented into subword symbols using byte-pair encoding (BPE) (Sennrich et al., 2016b) . We use sentences of length up to 50 subword symbols for all languages. For each language, a maximum number of 40, 000 BPE operations are learned and applied to restrict the size of the vocabulary. We concatenate the vocabularies of all source languages in the multilingual setting where special a \"language marker \" have been appended to each word so that there will be no embedding sharing on the surface form. Thus, we avoid sharing the representation of words that have similar surface forms though with different meaning in various languages.",
"cite_spans": [
{
"start": 503,
"end": 527,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
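The "language marker" mentioned above is just a per-token tag; a minimal sketch (the marker format shown is hypothetical, not necessarily the one used in the experiments):

def add_language_marker(subword_tokens, lang):
    # Append a language marker to every subword so that identical surface
    # forms from different source languages get separate vocabulary entries.
    return [f"{tok}__{lang}" for tok in subword_tokens]

print(add_language_marker(["situ@@", "atia", "este"], "ro"))
# ['situ@@__ro', 'atia__ro', 'este__ro']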
{
"text": "Architecture We implement an attention-based neural machine translation model which consists of a one-layer bidirectional RNN encoder and a two-layer attention-based RNN decoder. All RNNs have 512 LSTM units (Hochreiter and Schmidhuber, 1997) . Both the dimensions of the source and target embedding vectors are set to 512. The dimensionality of universal embeddings is also the same. For a fair comparison, the same architecture is also utilized for training both the vanilla and multilingual NMT systems. For multilingual experiments, 1 \u223c 5 auxiliary languages are used. When training with the universal tokens, the temperature \u03c4 (in Eq. 6) is fixed to 0.05 for all the experiments.",
"cite_spans": [
{
"start": 208,
"end": 242,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "Learning All the models are trained to maximize the log-likelihood using Adam (Kingma and Ba, 2014) optimizer for 1 million steps on the mixed dataset with a batch size of 128. The dropout rates for both the encoder and the decoder is set to 0.4. We have open-sourced an implementation of the proposed model. 6",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "We utilize back-translation (BT) (Sennrich et al., 2016a) to encourage the model to use more information of the zero-resource languages. More concretely, we build the synthetic parallel corpus by translating on monolingual data 7 with a trained translation system and use it to train a backward direction translation model. Once trained, the same operation can be used on the forward direction. Generally, BT is difficult to apply for zero resource setting since it requires a reasonably good translation system to generate good quality synthetic parallel data. Such a system may not be feasible with tiny or zero parallel data. However, it is possible to start with a trained multi-NMT model.",
"cite_spans": [
{
"start": 33,
"end": 57,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Translation",
"sec_num": "4.2"
},
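A schematic of the back-translation loop described above (reverse_translate is a stand-in callable for decoding with a trained reverse-direction multi-NMT model; nothing here is from the released code):

def back_translate(mono_sentences, reverse_translate):
    # mono_sentences: monolingual sentences in the language we translate into;
    # reverse_translate: any callable that decodes with a trained
    # reverse-direction model. Returns synthetic (source, target) pairs that
    # are then mixed into the forward model's training data.
    return [(reverse_translate(t), t) for t in mono_sentences]

# toy usage with a dummy reverse model
print(back_translate(["hello world"], lambda s: "<synthetic-ro> " + s))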
{
"text": "Training Monolingual Embeddings We train the monolingual embeddings using fastText 8 (Bojanowski et al., 2017) over the Wikipedia corpora of all the languages. The vectors are set to 300 dimensions, trained using the default setting of skip-gram . All the vectors are normalized to norm 1.",
"cite_spans": [
{
"start": 85,
"end": 110,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminary Experiments",
"sec_num": "4.3"
},
{
"text": "Pre-projection In this paper, the pre-projection requires initial word alignments (seeds) between words of each source language and the universal tokens. More precisely, for the experiments of Ro/Ko/Lv-En, we use the target language (En) as the universal tokens; fast_align 9 is used to automatically collect the aligned words between the source languages and English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminary Experiments",
"sec_num": "4.3"
},
{
"text": "We show our main results of multiple source languages to English with different auxiliary languages in Table 2 . To have a fair comparison, we use only 6k sentences corpus for both Ro and Lv with all the settings and 10k for Ko. It is obvious that applying both the universal tokens and mixture of experts modules improve the overall translation quality for all the language pairs and the improvements are additive.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "To examine the influence of auxiliary languages, we tested four sets of different combinations of auxiliary languages for Ro-En and two sets for Lv-En. It shows that Ro performs best when the auxiliary languages are all selected in the same family (Ro, Es, Fr, It and Pt are all from the Romance family of European languages) which makes sense as more knowledge can be shared across the same family. Similarly, for the experiment of Lv-En, improvements are also observed when adding Ru as additional auxiliary language as Lv and Ru share many similarities because of the geo-graphical influence even though they don't share the same alphabet. We also tested a set of Ko-En experiments to examine the generalization capability of our approach on non-European languages while using languages of Romance family as auxiliary languages. Although the BLEU score is relatively low, the proposed methods can consistently help translating less-related low-resource languages. It is more reasonable to have similar languages as auxiliary languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We perform thorough experiments to examine effectiveness of the proposed method; we do ablation study on Ro-En where all the models are trained Table 3 : BLEU scores evaluated on test set (6k), compared with ULR and MoLE. \"vanilla\" is the standard NMT system trained only on Ro-En training set based on the same Ro-En corpus with 6k sentences. As shown in Table 3 , it is obvious that 6k sentences of parallel corpora completely fails to train a vanilla NMT model. Using Multi-NMT with the assistance of 7.8M auxiliary language sentence pairs, Ro-En translation performance gets a substantial improvement which, however, is still limited to be usable. By contrast, the proposed ULR boosts the Multi-NMT significantly with +5.07 BLEU, which is further boosted to +7.98 BLEU when incorporating sentence-level information using both MoLE and BT. Furthermore, it is also shown that ULR works better when a trainable transformation matrix A is used (4th vs 5th row in the table). Note that, although still 5 \u223c 6 BLEU scores lower than the full data (\u00d7100 large) model.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 151,
"text": "Table 3",
"ref_id": null
},
{
"start": 356,
"end": 363,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.1"
},
{
"text": "We also measure the translation quality of simply training the vanilla system while replacing each token of the Ro sentence with its closet universal token in the projected embedding space, considering we are using the target languages (En) as the universal tokens. Although the performance is much worse than the baseline Multi-NMT, it still outperforms the vanilla model which implies the effectiveness of the embedding alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.1"
},
{
"text": "Monolingual Data In Table. 3, we also showed the performance when incorporating the monolingual Ro corpora to help the UniNMT training in both cases with and without ULR. The backtranslation improves in both cases, while the ULR still obtains the best score which indicates that the gains achieved are additive.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 26,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.1"
},
{
"text": "Corpus Size As shown in Fig. 3 , we also evaluated our methods with varied sizes -0k 10 , 6k, 60k and 600k -of the Ro-En corpus. The vanilla NMT and the multi-lingual NMT are used as baselines. It is clear in all cases that the performance gets better when the training corpus is larger. However, the multilingual with ULR works much better with a small amount of training examples. Note that, the usage of ULR universal tokens also enables us to directly work on a \"pure zero\" resource translation with a shared multilingual NMT model. Unknown Tokens One explanation on how ULR help the translation for almost zero resource languages is it greatly cancel out the effects of missing tokens that would cause out-of-vocabularies during testing. As in Fig. 4 , the translation performance heavily drops when it has more \"unknown\" which cannot be found in the given 6k training set, especially for the typical multilingual NMT. Instead, these \"unknown\" tokens will naturally have their embeddings based on ULR projected universal tokens even if we never saw them in the training set. When we apply back-translation over the monolingual data, the performance further improves which can almost catch up with the model trained with 60k data.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 30,
"text": "Fig. 3",
"ref_id": null
},
{
"start": 749,
"end": 755,
"text": "Fig. 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.1"
},
{
"text": "Examples Figure 5 shows some cherry-picked examples for Ro-En. Example (a) shows how the lexical selection get enriched when introducing ULR (Lex-6K) as well as when adding Back Translation (Lex-6K-BT). Example (b) shows the effect of using romance vs non-romance languages as the supporting languages for Ro. Example (c) shows the importance of having a trainable A as have been discussed; without trainable A the model confuses \"india\" and \"china\" as they may have close representation in the mono-lingual embeddings. Figure 6 shows the activations along with the same source sentence with various auxiliary languages. It is clear that MoLE is effectively switching between the experts when dealing with zero-resource language words. For this particular example of Ro, we can see that the system is utilizing various auxiliary languages based on their relatedness to the source language. We can approximately rank the relatedness based of the influence of each language. For instance, the influence can be approximately ranked as Es \u2248 P t > F r \u2248 It > Cs \u2248 El > De > F i, which is interestingly close to the grammatical relatedness of Ro to these languages. On the other hand, Cs has a strong influence although it does not fall in the same language family with Ro, we think this is due to the geo-graphical influence between the two languages since Cs and Ro share similar phrases and expressions. This shows that MoLE learns to utilize resources from similar languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 17,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 520,
"end": 528,
"text": "Figure 6",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.2"
},
{
"text": "All the described experiments above had the low resource languages jointly trained with all the auxiliary high-resource languages, where the training of the large amount of high-resource languages can be seen as a sort of regularization. It is also common to train a model on high-resource languages first, and then fine-tune the model on a small resource language similar to transfer learning approaches (Zoph et al., 2016) . However, it is not trivial to effectively fine-tune NMT models on extremely low resource data since the models easily over-fit due to overparameterization of the neural networks.",
"cite_spans": [
{
"start": 405,
"end": 424,
"text": "(Zoph et al., 2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning a Pre-trained Model",
"sec_num": "5.3"
},
{
"text": "In this experiment, we have explored the finetuning tasks using our approach. First, we train a Multi-NMT model (with ULR) on {Es, Fr, It, Pt}-En languages only to create a zero-shot setting for Ro-En translation. Then, we start fine-tuning the model with 6k parallel corpora of Ro-En, with and without ULR. As shown in Fig. 7 , both models improve a lot over the baseline. With the help of ULR, we can achieve a BLEU score of around 10.7 (also shown in Fig. 3) for Ro-En translation with \"zero-resource\" translation. The BLEU score can further improve to almost 20 BLEU after 3 epochs of training on 6k sentences using ULR. This is almost 6 BLEU higher than the best score of the (a) Source situatia este putin diferita atunci cand sunt analizate separat raspunsurile barbatilor si ale femeilor . Reference the situation is slightly different when responses are analysed separately for men and women . Mul-6k the situation is less different when it comes to issues of men and women . Mul-60k the situation is at least different when it is weighed up separately by men and women . Lex-6k the situation is somewhat different when we have a separate analysis of women 's and women 's responses . Lex-6k +BT the situation is slightly different when it is analysed separately from the responses of men and women .",
"cite_spans": [],
"ref_spans": [
{
"start": 320,
"end": 326,
"text": "Fig. 7",
"ref_id": "FIGREF5"
},
{
"start": 454,
"end": 461,
"text": "Fig. 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Fine-tuning a Pre-trained Model",
"sec_num": "5.3"
},
{
"text": "(b) Source ce nu stim este in cat timp se va intampla si cat va dura . Reference what we don ' t know is how long all of that will take and how long it will last . Lex Romancewhat we do not know is how long it will be and how long it will take . Lex (Non-Rom) what we know is as long as it will happen and how it will go (c) Source limita de greutate pentru acestea dateaza din anii ' 80 , cand air india a inceput sa foloseasca grafice cu greutatea si inaltimea ideale . Reference he weight limit for them dates from the ' 80s , when air india began using ideal weight and height graphics . Lex (A = I) the weight limit for these dates back from the 1960s , when the chinese air began to use physiars with weight and the right height . Lex the weight limit for these dates dates from the 1980s , when air india began to use the standard of its standard and height . baseline. It is worth noting that this fine-tuning is a very efficient process since it only takes less than 2 minutes to train for 3 epochs over such tiny amount of data. This is very appealing for practical applications where adapting a per-trained system on-line is a big advantage. As a future work, we will further investigate a better fine-tuning strategy such as meta-learning (Finn et al., 2017) using ULR.",
"cite_spans": [
{
"start": 250,
"end": 259,
"text": "(Non-Rom)",
"ref_id": null
},
{
"start": 1251,
"end": 1270,
"text": "(Finn et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning a Pre-trained Model",
"sec_num": "5.3"
},
{
"text": "Multi-lingual NMT has been extensively studied in a number of papers such as Lee et al. (2017 ), Johnson et al. (2017 , Zoph et al. (2016) and Firat et al. (2016) . As we discussed, these approaches have significant limitations with zero-resource cases. Johnson et al. (2017) is more closely related to our current approach, our work is extending it to overcome the limitations with very low-resource languages and enable sharing of lexical and sentence representation across multiple languages. Two recent related works are targeting the same problem of minimally supervised or totally unsupervised NMT. Artetxe et al. (2018) proposed a totally unsupervised approach depending on multi-lingual embedding similar to ours and duallearning and reconstruction techniques to train the model from mono-lingual data only. also proposed a quite similar approach while utilizing adversarial learning.",
"cite_spans": [
{
"start": 77,
"end": 93,
"text": "Lee et al. (2017",
"ref_id": "BIBREF16"
},
{
"start": 94,
"end": 117,
"text": "), Johnson et al. (2017",
"ref_id": "BIBREF12"
},
{
"start": 120,
"end": 138,
"text": "Zoph et al. (2016)",
"ref_id": "BIBREF25"
},
{
"start": 143,
"end": 162,
"text": "Firat et al. (2016)",
"ref_id": "BIBREF8"
},
{
"start": 254,
"end": 275,
"text": "Johnson et al. (2017)",
"ref_id": "BIBREF12"
},
{
"start": 605,
"end": 626,
"text": "Artetxe et al. (2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "In this paper, we propose a new universal machine translation approach that enables sharing resources between high resource languages and extremely low resource languages. Our approach is able to achieve 23 BLEU on Romanian-English WMT2016 using a tiny parallel corpus of 6k sentences, compared to the 18 BLEU of strong multilingual baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://www.statmt.org/wmt16/translation-task.html 2 https://sites.google.com/site/koreanparalleldata/ 3 http://www.statmt.org/europarl/ 4 http://opus.lingfil.uu.se/MultiUN.php (subset) 5 https://dumps.wikimedia.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/MultiPath/NA-NMT/tree/universal_translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used News Crawl provided by WMT16 for Ro-En. 8 https://github.com/facebookresearch/fastText 9 https://github.com/clab/fast_align",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For 0k experiments, we used the pre-projection learned from 6k data. It is also possible to use unsupervised learned dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised neural machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural ma- chine translation. In Proceedings of International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of International Conference on Learning Representa- tions (ICLR).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics 5:135-146.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2018,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In ICLR.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Sharp models on dull hardware: Fast and accurate neural machine translation decoding on the cpu",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2810--2815",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin. 2017. Sharp models on dull hardware: Fast and accurate neural machine translation decod- ing on the cpu. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, Copenhagen, Denmark, pages 2810-2815.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Model-agnostic meta-learning for fast adaptation of deep networks",
"authors": [
{
"first": "Chelsea",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Ma- chine Learning (ICML).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Achieving human parity on",
"authors": [
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Aue",
"suffix": ""
},
{
"first": "Chang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Vishal",
"middle": [],
"last": "Chowdhary",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Xuedong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Renqian",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shuangzhi",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Feder- mann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on auto- matic chinese to english news translation. CoRR abs/1803.05567.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735- 1780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics 5:339-351.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Character-aware neural language models",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI'16",
"volume": "",
"issue": "",
"pages": "2741--2749",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M. Rush. 2016. Character-aware neural lan- guage models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, AAAI'16, pages 2741-2749.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 .",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In Proceedings of International Conference on Learning Representations (ICLR). Vancouver, Canada.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Fully character-level neural machine translation without explicit segmentation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2017,
"venue": "TACL",
"volume": "5",
"issue": "",
"pages": "365--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2017. Fully character-level neural machine trans- lation without explicit segmentation. TACL 5:365- 378.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Achieving open vocabulary neural machine translation with hybrid word-character models",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1054--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong and Christopher D. Manning. 2016. Achieving open vocabulary neural machine transla- tion with hybrid word-character models. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, pages 1054-1063.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Edinburgh neural machine translation systems for wmt 16",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "371--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation sys- tems for wmt 16. In Proceedings of the First Confer- ence on Machine Translation. Association for Com- putational Linguistics, Berlin, Germany, pages 371- 376.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics. Association for Computational Linguis- tics, pages 1715-1725.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Azalia",
"middle": [],
"last": "Mirhoseini",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Maziarz",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In Pro- ceedings of International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [
{
"first": "L",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "H",
"middle": [
"P"
],
"last": "David",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Turban",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hamblin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of International Confer- ence on Learning Representations (ICLR).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "\u0141",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Riesa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rudnick",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, \u0141. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. 2016. Google's Neural Machine Translation Sys- tem: Bridging the Gap between Human and Ma- chine Translation. ArXiv e-prints .",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics, pages 1568-1575.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "An illustration of the proposed architecture of the ULR and MoLE. Shaded parts are trained within NMT model while unshaded parts are not changed during training.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "BLEU score vs unknown tokens",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Three sets of examples on Ro-En translation with variant settings.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "The activation visualization of mixture of language experts module on one randomly selected Ro source sentences trained together with different auxiliary languages. Darker color means higher activation score.",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "Performance comparison of Fine-tuning on 6K RO sentences.",
"num": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"num": null,
"text": "Statistics of the available parallel resource in our experiments. All the languages are translated to English.",
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"html": null,
"num": null,
"text": "",
"type_str": "table"
}
}
}
}