{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:31:54.886459Z"
},
"title": "Extremely low-resource machine translation for closely related languages",
"authors": [
{
"first": "Maali",
"middle": [],
"last": "Tars",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tartu",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Andre",
"middle": [],
"last": "T\u00e4ttar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tartu",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mark",
"middle": [],
"last": "Fi\u0161el",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tartu",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An effective method to improve extremely low-resource neural machine translation is multilingual training, which can be improved by leveraging monolingual data to create synthetic bilingual corpora using the back-translation method. This work focuses on closely related languages from the Uralic language family: from Estonian and Finnish geographical regions. We find that multilingual learning and synthetic corpora increase the translation quality in every language pair for which we have data. We show that transfer learning and fine-tuning are very effective for doing low-resource machine translation and achieve the best results. We collected new parallel data for V\u00f5ro, North and South Saami and present first results of neural machine translation for these languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "An effective method to improve extremely low-resource neural machine translation is multilingual training, which can be improved by leveraging monolingual data to create synthetic bilingual corpora using the back-translation method. This work focuses on closely related languages from the Uralic language family: from Estonian and Finnish geographical regions. We find that multilingual learning and synthetic corpora increase the translation quality in every language pair for which we have data. We show that transfer learning and fine-tuning are very effective for doing low-resource machine translation and achieve the best results. We collected new parallel data for V\u00f5ro, North and South Saami and present first results of neural machine translation for these languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT, Vaswani et al., 2017) shows great results in terms of output fluency and overall translation quality, however it relies on large parallel corpora for training the models. Low-resource NMT techniques like backtranslation , multilingual knowledge transfer (Johnson et al., 2017; Ngo et al., 2020) and unsupervised NMT (Lample et al., 2018) rely on using parallel corpora for other languages and/or large quantities of monolingual data for the language(s) of interest.",
"cite_spans": [
{
"start": 27,
"end": 54,
"text": "(NMT, Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 287,
"end": 309,
"text": "(Johnson et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 310,
"end": 327,
"text": "Ngo et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 349,
"end": 370,
"text": "(Lample et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here we put these techniques to the test in an extremely low-resource setting, working on NMT systems for V\u00f5ro-Estonian. While Estonian has plentiful parallel, monolingual and annotated corpora (Tiedemann, 2016; Nivre et al., 2020, etc) , V\u00f5ro with its 87 000 speakers and no normalized orthography only has slightly over 162 000 monolingual sentences with much less parallel data.",
"cite_spans": [
{
"start": 194,
"end": 211,
"text": "(Tiedemann, 2016;",
"ref_id": "BIBREF22"
},
{
"start": 212,
"end": 236,
"text": "Nivre et al., 2020, etc)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here we resort to the help of languages closely related to V\u00f5ro and Estonian: the resource-rich Finnish and two more extremely low-resource North and South Saami. We combine multilingual transfer learning, back-translation and then evaluate several combinations of these techniques on NMT for the five chosen Uralic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions in this paper are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 experimental results for combinations of techniques for low-resource NMT with application to closely related resource-poor Uralic languages",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 first developed NMT systems for V\u00f5ro, North and South Saami languages with a free online demo 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 additional data collected for V\u00f5ro, North and South Saami",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Next we review related work in Section 2, describe our experimental setup in Section 3, then proceed with results in Section 4 and conclude the paper in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section describes prior work in machine translation (MT) with neural networks for lowresource related languages. Our work on neural machine translation relies on (Vaswani et al., 2017) , who introduce transformer, an encoderdecoder type of solution for MT based on selfattention.",
"cite_spans": [
{
"start": 167,
"end": 189,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "There has been a lot of research into low-resource MT, for example, phrase-based unsupervised and semi-supervised MT (Lample et al., 2018; Artetxe et al., 2018) , but they relied on lexicons or large quantities of monolingual data. Their work is not easily applicable for our experiments because the amount of monolingual data is not sufficient, having less than 100K sentences for most of the languages in our data sets. The authors in (H\u00e4m\u00e4l\u00e4inen and Alnajjar, 2019) used a template based approach to generate more parallel data for related languages, which made NMT models viable for training.",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "(Lample et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 139,
"end": 160,
"text": "Artetxe et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 437,
"end": 468,
"text": "(H\u00e4m\u00e4l\u00e4inen and Alnajjar, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource NMT",
"sec_num": "2.1"
},
{
"text": "Another way of doing multilingual NMT is via zero-shot translations for very low resource language pairs. In our case, we have data for ten translation directions and zero parallel data for the rest of the ten directions. In (Gu et al., 2018) the authors showed that zero-shot translations achieve better results than the pivoting approach -pivoting means that when we have a language pair with sufficient data, then V\u00f5ro to Finnish translation, which has zero data, would use the Estonian language to pivot -V\u00f5ro to Estonian to Finnish translation. We want to avoid pivoting because V\u00f5ro to North Saami would require two pivots or three translations in total, resulting in serious error propagation. Additionally, the authors use shared source embeddings and source RNN encoders; we used transformers with shared vocabulary, encoders and decoders. In (Rikters et al., 2018) the authors showed that multilingual training with transformers is optimal for multilingual Estonian-English-Russian system, but reported that high-resource pairs see a performance degradation and lower-resourced pairs see a performance increase.",
"cite_spans": [
{
"start": 225,
"end": 242,
"text": "(Gu et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 852,
"end": 874,
"text": "(Rikters et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Low-resource NMT",
"sec_num": "2.1"
},
{
"text": "Every sentence is essential for neural machine translation in a low-resource machine translation environment. One popular way to leverage monolingual data is by creating a synthetic corpus via a method called back-translation (BT). Traditional BT ) is easy to use and requires training a target-to-source MT system to generate translations of the monolingual data, which are used as training data for the source-totarget MT model. This means that traditional BT requires two NMT models, where one generates synthetic data for the other. The idea behind BT is that the monolingual human data on the target side improves the quality of the decoder to generate better output for the language and the synthetic source helps as a data augmentation tactic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation for low-resource MT",
"sec_num": "2.2"
},
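{
"text": "The traditional back-translation recipe described above can be sketched in a few lines of Python; the translate() method and variable names are hypothetical stand-ins, not the paper's actual pipeline:

def back_translate(tgt_to_src_model, mono_target_sentences):
    # Traditional BT: a reverse-direction (target-to-source) model turns
    # monolingual target-side text into synthetic source sentences.
    synthetic_pairs = []
    for tgt_sentence in mono_target_sentences:
        synth_src = tgt_to_src_model.translate(tgt_sentence)
        # The human-written target stays intact; the synthetic source acts
        # purely as data augmentation for the source-to-target model.
        synthetic_pairs.append((synth_src, tgt_sentence))
    return synthetic_pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation for low-resource MT",
"sec_num": "2.2"
},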
{
"text": "Closely related to back-translation is a method called forward-translation (FT), where the model creates synthetic parallel data for itself -the source sentence is translated into the target language, and together, a bitext sample is created. In other words, forward-translation is called self-training. The authors in (Popovi\u0107 et al., 2020) used both BT and FT for closely related languages. They used a multilingual encoder (English and German) and a multilingual decoder (Serbian and Croatian) and achieved better results compared to single directional baselines in their experiments.",
"cite_spans": [
{
"start": 319,
"end": 341,
"text": "(Popovi\u0107 et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation for low-resource MT",
"sec_num": "2.2"
},
{
"text": "Our work is about a single multilingual system that enables the model to generate synthetic data for itself -both back-translation and forwardtranslation is used. The generated synthetic data is added to available parallel corpora as training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation for low-resource MT",
"sec_num": "2.2"
},
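{
"text": "A minimal sketch of how a single multilingual model could produce both back-translation and forward-translation pairs for itself, under the assumption of a generic translate() interface (names are illustrative, not from the paper):

def make_synthetic_pairs(model, mono_sentences, mono_lang, other_lang):
    # mono_sentences are human-written sentences in mono_lang.
    bt_pairs, ft_pairs = [], []
    for sent in mono_sentences:
        hyp = model.translate(sent, src=mono_lang, tgt=other_lang)
        # Back-translation pair for the other_lang -> mono_lang direction:
        # synthetic text on the source side, human text on the target side.
        bt_pairs.append((hyp, sent))
        # Forward-translation (self-training) pair for mono_lang -> other_lang:
        # human text on the source side, synthetic text on the target side.
        ft_pairs.append((sent, hyp))
    return bt_pairs, ft_pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation for low-resource MT",
"sec_num": "2.2"
},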
{
"text": "The authors in (Kocmi and Bojar, 2018) did trivial transfer learning for low resource NMT -in detail, they used a high resource language pair like English-Finnish to train a parent model. They continued training on a lower resource child model English-Estonian and showed that this improved translation quality significantly, 19.74 BLEU score compared to 17.03 when using only English to Estonian data. Additionally, they showed that \"unrelated\" languages might work even better, where the best English-Estonian results were achieved by using an English-Czech as a parent, which achieved a 20.41 BLEU score on the same test set. Their work shows that transfer learning is a very viable option for low-resource NMT. The only drawback is that their work still relies on some amount of data and a common source or target language to either share the encoder or decoder weights. In our case, there are language pairs, which have 0 available sentences like V\u00f5ro to North Saami. Additionally, their work would require 20 such models to be trained.",
"cite_spans": [
{
"start": 15,
"end": 38,
"text": "(Kocmi and Bojar, 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning and fine-tuning",
"sec_num": "2.3"
},
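{
"text": "Trivial transfer learning amounts to initializing the child model with the parent's weights and continuing training; a toy PyTorch sketch under the assumption of a shared vocabulary (the nn.Transformer stand-in and file names are hypothetical):

import torch
import torch.nn as nn

# Stand-in model with the paper's transformer dimensions; a real NMT model
# would also include embeddings and an output projection layer.
def build_model():
    return nn.Transformer(d_model=512, nhead=8,
                          num_encoder_layers=6, num_decoder_layers=6)

parent = build_model()
# ... parent training on the high-resource pair (e.g. ET-FI) happens here ...
torch.save(parent.state_dict(), 'parent_et_fi.pt')

child = build_model()
child.load_state_dict(torch.load('parent_et_fi.pt'))   # reuse parent weights
optimizer = torch.optim.Adam(child.parameters(), lr=1e-4)
# ... continue the usual training loop on the child pair (e.g. ET-VRO) ...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning and fine-tuning",
"sec_num": "2.3"
},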
{
"text": "The authors in (Currey and Heafield, 2019; Zhang et al., 2020) show that using multilingual back-translation for fine-tuning a multilingual model is beneficial for translation quality. Additionally, (Zhang et al., 2020) shows that their random online back-translation lowers the chance of the model doing off-target translations, which in our case is also a problem since the model never sees some language pairs. We build upon this work by doing two iterations of fine-tuning on a synthetic back-translation corpora, where we uniformly at random assign the target language into which to translate.",
"cite_spans": [
{
"start": 15,
"end": 42,
"text": "(Currey and Heafield, 2019;",
"ref_id": "BIBREF2"
},
{
"start": 43,
"end": 62,
"text": "Zhang et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 199,
"end": 219,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning and fine-tuning",
"sec_num": "2.3"
},
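{
"text": "The 'uniformly at random assign the target language' step can be sketched as follows; the language codes follow Table 1 and the helper name is hypothetical:

import random

LANGS = ['et', 'fi', 'vro', 'sme', 'sma']

def assign_random_targets(mono_sentences, src_lang, seed=0):
    # Pick, for every monolingual sentence, one of the other languages
    # uniformly at random as the language to translate into.
    rng = random.Random(seed)
    other = [lang for lang in LANGS if lang != src_lang]
    return [(sent, rng.choice(other)) for sent in mono_sentences]

# Example: each Estonian sentence is paired with one of fi/vro/sme/sma.
jobs = assign_random_targets(['Tere tulemast .'], 'et')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning and fine-tuning",
"sec_num": "2.3"
},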
{
"text": "The difference between transfer learning and fine-tuning is small. We refer to transfer learning when the MT model is trained on some languages that the model has never seen before, e.g., when using ET-FI model weights to initialize the ET-VRO model. We refer to fine-tuning when we continue training a multilingual model on data, which the model has seen before, e.g., when using the multilingual model to fine-tune on ET-VRO data only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning and fine-tuning",
"sec_num": "2.3"
},
{
"text": "We rely on the work of (Sennrich and Haddow, 2016) for zero-shot translations in our multilingual models; in that article, the authors use morphological features like POS tags to enrich sourceside representations. We use source-side factors to give the transformer model information about the intended target language, so the model knows which language the output should be in. The authors in (Tars and Fishel, 2018) used sourcefactors to give domain and target language information for the model. Using source factors is similar to using a single token on the input sentence to distinguish between closely related languages and dialects (Lakew et al., 2018; Costa-juss\u00e0 et al., 2018) , where authors show an improvement over a single baseline model when training a model for similar languages.",
"cite_spans": [
{
"start": 393,
"end": 416,
"text": "(Tars and Fishel, 2018)",
"ref_id": "BIBREF21"
},
{
"start": 638,
"end": 658,
"text": "(Lakew et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 659,
"end": 684,
"text": "Costa-juss\u00e0 et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Source factors for multilingual zero-shot NMT",
"sec_num": "2.4"
},
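{
"text": "Concretely, source factors here mean attaching the intended target-language label to every source token (the single-token alternative simply prepends a tag); a small sketch with assumed formatting conventions:

def add_target_factor(tokens, tgt_lang):
    # One factor value per source token, repeating the desired output
    # language so the model always knows which language to produce.
    return list(zip(tokens, [tgt_lang] * len(tokens)))

def add_target_token(tokens, tgt_lang):
    # The simpler single-token scheme: prepend a tag such as <2vro>.
    return ['<2' + tgt_lang + '>'] + tokens

print(add_target_factor(['tere', 'tulemast'], 'vro'))
print(add_target_token(['tere', 'tulemast'], 'vro'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source factors for multilingual zero-shot NMT",
"sec_num": "2.4"
},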
{
"text": "3 Experimental setup 3.1 Data sets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source factors for multilingual zero-shot NMT",
"sec_num": "2.4"
},
{
"text": "The data for the experiments originated from many different sources. Subsequently, the main issue with the parallel data collected was the differences in file formats, which took a long time to solve in order to create a unified data set. The biggest problem with parallel data was that there were a lot of repeated sentence pairs in the data, which required a uniqueness check and reduced the number of sentence pairs for the Finnish-North Saami (FI-SME) language pair by about 75 percent, as seen in Table 1 . Preprocessing monolingual data was also a long process as there were no conclusive ready-made sets available for languages like V\u00f5ro, North Saami and South Saami. As described in Table 2 , in the first set, the data consisted mostly of news corpuses, fiction and Wikipedia texts. The data files were in different formats, as was the case with parallel data. Estonian and V\u00f5ro required extracting sentences from texts and removing empty lines. The V\u00f5ro, North Saami and South Saami data in the second set was gathered manually from news articles and various PDF style documents (fiction, scientific texts, official documents) available. The paragraphs of text then needed to be divided into sentences and joined into one TXT type file for compatibility. Additional preprocessing included fixing some minor alignment issues.",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 509,
"text": "Table 1",
"ref_id": null
},
{
"start": 691,
"end": 698,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1.1"
},
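{
"text": "The uniqueness check on sentence pairs can be done with a simple set-based filter; a sketch (file handling omitted, function name hypothetical):

def deduplicate_pairs(src_sentences, tgt_sentences):
    # Keep only the first occurrence of each (source, target) pair; this is
    # the kind of uniqueness check that shrank FI-SME by roughly 75 percent.
    seen = set()
    unique = []
    for s, t in zip(src_sentences, tgt_sentences):
        key = (s.strip(), t.strip())
        if key not in seen:
            seen.add(key)
            unique.append((s, t))
    return unique",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "3.1.1"
},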
{
"text": "Validation and test sets consisted of sentences from all the five language pairs mentioned in Table 1. The number of sentences for each language pair was chosen proportionally to the amount of training data the pair had. In total, there were 1862 test sentences and 939 validation sentences. There is no official test set available for these language pairs collectively, and as parallel data was scarce, the validation and test sets were sentences that were randomly held-out of the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation and test data",
"sec_num": "3.1.2"
},
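{
"text": "A sketch of how such a random held-out split could be produced for one language pair; the per-pair sizes would be chosen proportionally as described above, and the helper name is hypothetical:

import random

def held_out_split(pairs, n_test, n_valid, seed=1):
    # pairs: list of (source, target) tuples for one language pair; the
    # held-out counts n_test and n_valid are set per pair elsewhere.
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    test = shuffled[:n_test]
    valid = shuffled[n_test:n_test + n_valid]
    train = shuffled[n_test + n_valid:]
    return train, valid, test",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Validation and test data",
"sec_num": "3.1.2"
},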
{
"text": "Table 1 also highlights the fact that Estonian-Finnish (ET-FI) acted as the high-resource language pair in the experiments, with 2.6 million sentence pairs available. Other language pairs formed a small fraction of the whole parallel data set, with about 1 percent. The lowest amount of data was discovered for Finnish-South Saami (FI-SMA) language pair, with under 3000 sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel data",
"sec_num": "3.1.3"
},
{
"text": "As expected, Estonian and Finnish had the most monolingual data available. Although finding data sets for the low-resource languages proved to be more difficult, there was more of it available than parallel data for their respective language pairs used in this work. The two sets of monolingual data described in Table 2 were collected separately. Experiments with the first set were already performed prior to gathering the second set, which is why the amounts of two sets are offbalance. For the sake of the models learning more about low-resource languages, we used the downsampling technique, reducing the amount of Esto-",
"cite_spans": [],
"ref_spans": [
{
"start": 313,
"end": 320,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Monolingual data",
"sec_num": "3.1.4"
},
{
"text": "Before cleaning After cleaning Eliminated et-fi (Tiedemann, 2016) 3 566 826 2 646 922 919 904 et-vro 2 30 816 30 502 314 fi-sme 3 109 852 35 426 74 426 fi-sma 3 3098 2895 203 sme-sma 3 23 746 21 557 2189 Overall 3 734 338 2 737 302 997 036 Table 1 : Parallel data sets (in sentence pairs). et -Estonian, fi -Finnish, vro -V\u00f5ro, sme -North Saami, sma -South Saami.",
"cite_spans": [
{
"start": 48,
"end": 65,
"text": "(Tiedemann, 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 68,
"end": 271,
"text": "566 826 2 646 922 919 904 et-vro 2 30 816 30 502 314 fi-sme 3 109 852 35 426 74 426 fi-sma 3 3098 2895 203 sme-sma 3 23 746 21 557 2189 Overall 3 734 338 2 737 302 997 036 Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Language pair",
"sec_num": null
},
{
"text": "nian and Finnish monolingual data to level them with the amount of low-resource language monolingual data in use. That made V\u00f5ro (VRO) the most prominent language in the data set, as shown in Table 2 . The amount of data for North and South Saami was still quite low, but it was an improvement over the parallel data set numbers.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 199,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Language pair",
"sec_num": null
},
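{
"text": "Downsampling here simply caps the Estonian and Finnish monolingual corpora so they do not dwarf the low-resource languages; a sketch with an illustrative procedure, not the exact one used:

import random

def downsample(sentences, max_size, seed=2):
    # Keep at most max_size sentences, sampled without replacement, so the
    # high-resource monolingual corpora are levelled with the smaller ones.
    if len(sentences) <= max_size:
        return list(sentences)
    rng = random.Random(seed)
    return rng.sample(sentences, max_size)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual data",
"sec_num": "3.1.4"
},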
{
"text": "In our experiments we use the Sockeye framework described by (Hieber et al., 2017) , which has implemented source-side factors where we give the target language token as an input feature for the transformer model. During training, the vocabulary that was created included all of the languages. Specifications of the training process included setting the batch size to 6000 words and checkpoint interval to 2000. All the models in the experiments trained until 32 consecutive unimproved checkpoints were reached. The unimproved metric was perplexity. All of the experiments use the standard transformer parameters (6 encoder and 6 decoder layers with 8 attention heads and size 512). Prior to training, all of the data used to develop the models was tokenized by a SentencePiece (Kudo and Richardson, 2018) tokenization model, which follows the byte-pair encoding algorithm. The tokenization model was previously trained on all of the training data.",
"cite_spans": [
{
"start": 61,
"end": 82,
"text": "(Hieber et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 778,
"end": 805,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General settings",
"sec_num": "3.2.1"
},
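{
"text": "Training one shared SentencePiece model over the concatenated training data of all five languages could look roughly like this; the vocabulary size and file names are assumptions, not reported settings:

import sentencepiece as spm

# One joint subword model over all languages so the multilingual system
# shares its vocabulary across encoder and decoder.
spm.SentencePieceTrainer.train(
    input='all_training_text.txt',     # concatenation of every training corpus
    model_prefix='uralic_bpe',
    vocab_size=32000,                  # illustrative value, not from the paper
    model_type='bpe',
)

sp = spm.SentencePieceProcessor(model_file='uralic_bpe.model')
pieces = sp.encode('Tere tulemast!', out_type=str)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General settings",
"sec_num": "3.2.1"
},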
{
"text": "2 https://doi.org/10.15155/ 1-00-0000-0000-0000-001A0L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General settings",
"sec_num": "3.2.1"
},
{
"text": "3 https://giellalt.uit.no/tm/ TranslationMemory.html 4 https://www.cl.ut.ee/korpused/ segakorpus/epl/ 5 https://doi.org/10.15155/ 1-00-0000-0000-0000-00186L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General settings",
"sec_num": "3.2.1"
},
{
"text": "6 https://github.com/maalitars/ FinnoUgricData 7 http://hdl.handle.net/11509/102",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General settings",
"sec_num": "3.2.1"
},
{
"text": "One of the fundamental experiments of this work was developing the multilingual baseline model, which had five source languages and five target languages. This means that this model could produce translations in 20 different directions. For this, each pair of parallel data seen in Table 1 was copied and the source-target direction was switched. The turned-around parallel data set was then added to the original data set and the multilingual baseline model was trained on all of the combined parallel data. The data set was tokenized by a tokenization model, which was trained on all the training data from the parallel data set, meaning text patterns were generalized over five languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 289,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multilingual baseline",
"sec_num": "3.2.2"
},
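{
"text": "Copying each parallel set and switching the source-target direction can be sketched as follows (language codes as in Table 1; helper name hypothetical):

def bidirectional_corpus(parallel_sets):
    # parallel_sets maps a language-pair tuple such as ('et', 'fi') to a
    # list of (source_sentence, target_sentence) tuples.
    combined = []
    for (src, tgt), pairs in parallel_sets.items():
        for s, t in pairs:
            combined.append((src, tgt, s, t))  # original direction
            combined.append((tgt, src, t, s))  # reversed direction
    return combined",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual baseline",
"sec_num": "3.2.2"
},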
{
"text": "Synthetic parallel data via back-translation was produced in two iterations and additional models were also trained in two iterations. The monolingual data was translated into every other language in equal measures. For example, 1/4 of the 100 000 sentences in Estonian were translated into Finnish, 1/4 into V\u00f5ro, 1/4 into North Saami and 1/4 into South Saami. The paired-up synthetic translations and monolingual data made up the additional parallel data corpus. Combining the new synthetic parallel data corpus and the original, human-translated corpus, gives the models more parallel data to learn on during training. The methodology of both backtranslation data experiment iterations was the same, but there were some important aspects that were different: First iteration",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation experiments",
"sec_num": "3.2.3"
},
{
"text": "\u2022 Monolingual data used: first monolingual data set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation experiments",
"sec_num": "3.2.3"
},
{
"text": "\u2022 The first batch of synthetic data was produced with the multilingual baseline model. The Language First set Second set All et 4 100 000 25 000 125 000 fi (Goldhahn et al., 2012) 100 000 25 000 125 000 vro 5 , 6 162 807 5290 168 097 sme (Goldhahn et al., 2012; Tiedemann, 2012) , 6 33 964 6057 40 021 sma 6 , 7 (Tiedemann, 2012) , 55 088 5377 60 465 BT -back-translation data set, FT -forward-translation data set, BLEU low -average BLEU score on low-resource language pairs (excluding ET-FI and FI-ET), bold -best BLEU score for a language pair.",
"cite_spans": [
{
"start": 128,
"end": 129,
"text": "4",
"ref_id": null
},
{
"start": 156,
"end": 179,
"text": "(Goldhahn et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 211,
"end": 212,
"text": "6",
"ref_id": null
},
{
"start": 238,
"end": 261,
"text": "(Goldhahn et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 262,
"end": 278,
"text": "Tiedemann, 2012)",
"ref_id": "BIBREF23"
},
{
"start": 281,
"end": 282,
"text": "6",
"ref_id": null
},
{
"start": 312,
"end": 329,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation experiments",
"sec_num": "3.2.3"
},
{
"text": "synthetic data was then added to the original parallel data and the training process was repeated, which produced a new model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation experiments",
"sec_num": "3.2.3"
},
{
"text": "\u2022 Monolingual data used in this iteration consisted of 1) shuffled first monolingual data set, 2) second monolingual data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Second iteration",
"sec_num": null
},
{
"text": "\u2022 Monolingual data was translated by the newest model that had been trained on parallel data and synthetic data from the first iteration of back-translation (+BT1 in Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 166,
"end": 173,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Second iteration",
"sec_num": null
},
{
"text": "Subsequently, a new model was trained using original parallel data plus the two batches of synthetic data produced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Second iteration",
"sec_num": null
},
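{
"text": "The even split of each language's monolingual data over the other four target languages, used in both back-translation iterations described above, can be sketched as follows (helper name hypothetical):

LANGS = ['et', 'fi', 'vro', 'sme', 'sma']

def split_over_targets(mono_sentences, src_lang):
    # Divide one language's monolingual data evenly over the other four
    # languages, e.g. 100 000 Estonian sentences -> 25 000 per target.
    targets = [lang for lang in LANGS if lang != src_lang]
    share = len(mono_sentences) // len(targets)
    return {tgt: mono_sentences[i * share:(i + 1) * share]
            for i, tgt in enumerate(targets)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-translation experiments",
"sec_num": "3.2.3"
},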
{
"text": "Additional experiments included having different combinations of back-translation/forwardtranslation data and differences in initialized weights, with the best of them presented in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 189,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Second iteration",
"sec_num": null
},
{
"text": "We performed an experiment fine-tuning the multilingual baseline model on ET-VRO parallel data and a transfer learning experiment, initializing ET-VRO model with ET-FI baseline model weights. Then we compared the results of these two experiments to each other and to the ET-VRO baseline model. The ET-VRO data was the same parallel data that was used for training the multilingual baseline model (ML).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer learning experiments",
"sec_num": "3.2.4"
},
{
"text": "Quantitative results were determined by comparing BLEU scores (Papineni et al., 2002) , using the SacreBLEU implementation (Post, 2018) of calculating the score on detokenized sentences 8 . Multiple experiments were assessed and the best experiments are explained in Table 3 . Additional analysis was done with the CHRF metric, which compares sentences on a character-level (Popovi\u0107, 2015) . We used the SacreBLEU implementation (Post, 2018) of the CHRF metric 9 and the results can be seen in the Appendix in Table 7 .",
"cite_spans": [
{
"start": 62,
"end": 85,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
},
{
"start": 123,
"end": 135,
"text": "(Post, 2018)",
"ref_id": "BIBREF17"
},
{
"start": 374,
"end": 389,
"text": "(Popovi\u0107, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 429,
"end": 441,
"text": "(Post, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 3",
"ref_id": "TABREF1"
},
{
"start": 510,
"end": 517,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quantitative analysis",
"sec_num": "4.1"
},
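{
"text": "Both metrics can be computed with the SacreBLEU toolkit on detokenized text; a minimal sketch with made-up hypothesis and reference strings:

import sacrebleu

# Made-up detokenized outputs and a single reference stream.
hypotheses = ['see on testlause .']
references = [['see on testlause .']]

bleu = sacrebleu.corpus_bleu(hypotheses, references)   # BLEU (Papineni et al., 2002)
chrf = sacrebleu.corpus_chrf(hypotheses, references)   # CHRF (Popovic, 2015)
print(round(bleu.score, 1), round(chrf.score, 1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative analysis",
"sec_num": "4.1"
},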
{
"text": "Multilingual baseline. All of the low-resource language pairs experienced a positive gain over baseline model results in comparison to the multilingual baseline model (ML) experiment. An average gain of 7.6 BLEU was achieved on the lowresource language pairs with VRO-ET and SME-SMA exceeding this average gain by an additional 4 BLEU points. Noticeably, FI-SMA made the smallest improvement, perhaps the main reason for this lies in FI-SMA having significantly less parallel data than other low-resource language pairs. Back-translation experiments. Experiments with data from back-translation iterations further improved the BLEU score for low-resource language pairs compared to the multilingual baseline model. None of the models showed uniform improvements across all of the low-resource language pairs, however we can highlight one model with the highest average gain over baseline results, improving by +9.2 BLEU points. This model was trained on parallel data plus two batches of backtranslation data but without any initialized weights (+ BT1 + BT2(*) in Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 1064,
"end": 1071,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "BLEU",
"sec_num": "4.1.1"
},
{
"text": "While the pre-trained weights did not seem to help produce the best models with parallel and back-translation data, the experiments with only back-translation data show that initializing a model with useful pre-trained weights can still be very helpful in the case of related tasks. This is illustrated by models BT1 and BT1(*) with 7.1 BLEU points between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU",
"sec_num": "4.1.1"
},
{
"text": "Experiments with added forward-translations did not appear to improve the results except for the VRO-ET language pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU",
"sec_num": "4.1.1"
},
{
"text": "Transfer learning experiments. Transfer learning and fine-tuning a model for a particular language pair results in further improvements over the best back-translation model results. In this part, we performed two experiments. In the transfer learning experiment, we trained an ET-FI baseline model until convergence; then the training data was changed to the ET-VRO data set, which was used for training until convergence. In the second experiment, we fine-tuned the multilingual baseline model with the ET-VRO language direction data only. Comparing BLEU results in low-resource NMT is very beneficial -a 12 BLEU point increase is achieved by doing trivial transfer learning, and even better gains are seen in the multilingual fine-tuning experiment with a 13 point BLEU score increase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU",
"sec_num": "4.1.1"
},
{
"text": "Thus, the best results were achieved in the transfer learning and the fine-tuning experiment. Transfer learning alone, however, has a downside. Compared to multilingual models, which can translate in 20 different directions, in case of transfer learning, to achieve the same functionality, 20 separate models would have to be trained, which takes up a lot more resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BLEU",
"sec_num": "4.1.1"
},
{
"text": "For the low-resource language pairs, the CHRF score metric mostly agreed with the BLEU score metric on which model gives the best results for each language pair, except for SMA-FI and SMA-SME. This can be seen in the Appendix in Table 7 . With the CHRF score, however, it is much clearer that the model + BT1 + BT2(*) is the best one out of all the experiments done with back-translations, because both BLEU low and CHRF low had the best scores on test data with this model and six out of the eight low-resource language pairs achieved the highest CHRF scores. In Table 5 , for transfer learning and fine-tuning experiments, the BLEU and CHRF scores moved in the same direction, transfer learning and fine-tuning improving results substantially.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 7",
"ref_id": null
},
{
"start": 564,
"end": 571,
"text": "Table 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "CHRF",
"sec_num": "4.1.2"
},
{
"text": "Overall, we can see the same patterns, both in the BLEU and the CHRF score analysis: the multilingual model concept helps get better translation quality for low-resource languages compared to baseline results; adding more and more back-translated data to the training data increases the scores; adding forward-translations, however, mostly lowers the scores. Another noticeable thing shown by both of the scores, is that backtranslation on its own, when looking at the models BT1 and BT1(*), does not achieve good (and comparable) results. One possible reason for this is the data domain mismatch between test data (parallel data hold-out) and monolingual data. A balanced test set for these languages could provide a better overview of the results and give more accurate info on the quality of the models. On coming up with a new name, it was important that there was a clear reference to the local community and that the name would help tell the story of the business. b) Source Parhilla ommaq jutuq hindamiskogo k\u00e4en, kokkov\u00f5t\u00f5q ja preemi\u00e4saajaq tr\u00fckit\u00e4seq\u00e4rq j\u00e4rgm\u00e4dsen Uman Lehen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CHRF",
"sec_num": "4.1.2"
},
{
"text": "Praegu on instruktsioone, ka kokkuv\u00f5tted, selliste s\u00fcndmuste ja aegajalt \" sisse l\u00fclitada\" kaugemate Leivalentsemad. ML Praegu on jutud hindamiskogu k\u00e4es, kokkuv\u00f5tted ja preemiasaajad tr\u00fckivad j\u00e4rgmise Uman Leheni. +BT1&2+FT1&2(*) Praegu on jutud hindamiskogu k\u00e4es , kokkuv\u00f5tted ja preemiasaajad tr\u00fckivad\u00e4ra j\u00e4rgmises Uman Lehes .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "Praegu on jutud hindamiskomisjoni k\u00e4es, kokkuv\u00f5te ja preemiasaajad tr\u00fckitakse\u00e4ra j\u00e4rgmises Uma Lehes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
},
{
"text": "At the moment the stories are with the judging committee, the summary and the winners will be printed in the next Uma Leht. c) Source Nuoria on tullut tilalle aika lailla, ehk\u00e4 ottaa er\u00e4s nuori kyl\u00e4el\u00e4m\u00e4n vet\u00e4misen haltuunsa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "Nuorat leat boaht\u00e1n sadj\u00e1i\u00e1igi, soadi, k\u00e1ntorin jos\u010dadahat gilvun. ML Nuorat leat boaht\u00e1n sadj\u00e1i\u00e1iggi l\u00e1dje, soait\u00e1 v\u00e1ldit ovtta nuorra gilieallima jodiheami.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": null
},
{
"text": "Nuorat leat boaht\u00e1n sadj\u00e1i\u00e1ige l\u00e1dje, soait\u00e1 v\u00e1ldit ovtta nuorra gilieallima jodiheami h\u00e1ldui.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "+BT1",
"sec_num": null
},
{
"text": "Nuorat lea boaht\u00e1n lasi oallel\u00e1hk\u00e1i, g\u00e1nske muhtun nuorra v\u00e1ld\u00e1 gilieallima geassima ie\u017eas h\u00e1ldui.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
},
{
"text": "There are quite a bit more younger people now, maybe one of them will take over the role of leading the village life. Table 5 : BLEU and CHRF scores for transfer learning and fine-tuning experiments. Table 4 compares some sentence translations for ET-VRO, VRO-ET and FI-SME language pairs. It is clear that baseline models produced subpar translations, rendering them non-sensical. The multilingual baseline model improved the translations significantly, although still making some detrimental mistakes, like choosing the wrong word, so the meaning is lost, or deciding not to translate some parts of the sentence. Adding backtranslation data to the models fixed some of these mistakes and made some important changes in understanding the meaning of a sentence, but the best back-translation models still left in some grammatical errors, such as wrong verb forms, grammatical cases and tenses. In the first example, translating in the ET-VRO direction, the best model chooses a direct translation of \"v\u00e4ljam\u00f5tlemisel\" to \"v\u00e4ll\u00e4m\u00f5t\u00f5ld\u00f5n\". In addition, all of the models omit the \"q\" endings of the words \"avitanuq\" and \"jutustaq\" or \"k\u00f5n\u00f5ldaq\". This symbol usually signifies plurality and, in a lot of cases upon translating in the ET-VRO direction, the models chose not to add the \"q\", although it would have been correct. The problem could lie in the data, where the \"q\" endings are also not always added, which in turn could confuse the models.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 5",
"ref_id": "TABREF2"
},
{
"start": 200,
"end": 207,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "The second example illustrates a VRO-ET direction translation, which presents some bigger flaws. For example, handling names is difficult even for the best model, with \"Uman Lehen\" being translated incorrectly to \"Uman Lehes\". The grammatical case of the word \"Lehen\" was correctly changed to have an \"s\" ending, but the case of the word \"Uman\" was not changed. In addition, \"tr\u00fckit\u00e4seq\" is translated to the wrong verb form \"-vad\", but it should be replaced with the impersonal form \"-takse\". Continuing with the problems caused by the \"q\" ending, here the word \"kokkov\u00f5t\u00f5q\" is translated to plurality \"kokkuv\u00f5tted\", but in this particular case, it should be translated to the singular form \"kokkuv\u00f5te\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": "4.2"
},
{
"text": "The third example shows the FI-SME language pair translations. Here the meaning of the sentence is understandable, but the word-pair \"\u00e1ige l\u00e1dje\" is a direct translation from the Finnish phrase \"aika lailla\", which means \"quite a bit\" in English, but \"\u00e1ige l\u00e1dje\" does not hold the same meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": "4.2"
},
{
"text": "Additional examples can be seen in the Appendix in Table 6 . In these examples, there is another flaw presented, which might be unique to multilingual models, where some words in a sentence are translated into the wrong language, although they might have the correct meaning. This is illustrated very well in the third ET-VRO translation example in Table 6 . All models, except the baseline, choose to translate the word \"ametlikult\" into the Finnish word \"virallisesti\", instead of trying to find a word for it in V\u00f5ro language.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 6",
"ref_id": "TABREF6"
},
{
"start": 349,
"end": 356,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Qualitative analysis",
"sec_num": "4.2"
},
{
"text": "The results show that synthetic data helps to learn a better model; however, the model which has continued training on only back-translation data sees performance degradation. This is most likely caused by the test sets' domain mismatch problem and is alleviated by merging the parallel and synthetic data into one big corpus. This shows that a new separate test set should be created for this problem, but it is very hard to do as there are very few speakers. We have started to gather a multilingual five-way test corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "We think that this work can be further improved by doing better multilingual fine-tuning, shown by promising multilingual fine-tuning experiments, where the best result was 27.6 BLEU points compared to the best multilingual model, which achieved 26.2 BLEU points. The research question remains, how well would fine-tuning work for a completely synthetic parallel corpus like VRO-SMA. We did not explore this yet due to not having enough resources. Also, it is unknown what effect this single language pair finetuning would have on other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "Multilingual neural machine translation with shared encoders and decoders work very well for very low resource language translation. Using back-translation for low resource MT is vital for best results, further improved by transfer learning and fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
},
{
"text": "In the future, we hope to work with more Uralic languages and add an unrelated high-resource language, for example German. Secondly, we want to do better multilingual fine-tuning since the best ET-VRO score of 27.6 was reached by multilingual fine-tuning, compared to 26.2 for multilingual training. Finally, we hope to find more parallel and monolingual data. Odda nama huksemis lei deh\u00e1la\u0161, ahte liv\u010d\u010dii\u010dielga oktavuohta b\u00e1ikk\u00e1la\u0161 servodahkii ja ahte namma veahkehiv\u010d\u010dii muitalit fitnodaga m\u00e1idnasa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "5"
},
{
"text": "Odda nama hutkkadettiin lea deh\u00e1la\u0161, ahte liv\u010d\u010de\u010dielga oktavuohta b\u00e1ikk\u00e1la\u0161 servvodahkii ja ahte namma veahkehiv\u010d\u010de muitalit fitnodaga muitalusa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
},
{
"text": "Kansa kokoontuu entiseen koulutaloon, jossa on my\u00f6s kirjasto. Baseline Riikkabeaivevahku\u010doahkkana s\u00e1gadoalliriikkas, mas leat maid girjer\u00e1jus. ML\u00c1lbmoga\u010doahkkana ovdde\u0161 skuvllas, mas lea maid girjer\u00e1dju. +BT1\u00c1lbmot\u010doahkkana ovdde\u0161 skuvlad\u00e1ss\u00e1i, mas lea maid girjer\u00e1dju. Reference\u00c1lbmot\u010doahkkana boares skuvlavist\u00e1i, mas lea maidd\u00e1i girjer\u00e1dju.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "Sosiaalisessa mediassa pitiv\u00e4t ihmiset eniten teht\u00e4v\u00e4st\u00e4 \"Puhu tai postaa yksi vitsi tai tarina v\u00f5ron kielell\u00e4\". Baseline Sosi\u00e1la medias atne olbmot eanemus barggus \" Ominayak oktavuodav\u00e1ldimiid dehe m\u00e1idnuma\u00f5jjst\u00f5\u00f5ll\u00e2m\u01e9erjj lea oaivvilduvvon. ML Sosi\u00e1la medias doalai olbmuid eanemus bargguin \"Puhu dahje poasta okta nja dahje m\u00e1idnasa gillii.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "Sosi\u00e1la medias dolle olbmot eanemusat bargguin \"Puhu dahje poasta okta nja\u0161 dahje m\u00e1idnasa vuonagillii\". Reference Sosi\u00e1la medias liikojedje olbmot maidd\u00e1i bargobiht\u00e1s \"Muital dahje postte ovtta cukcasa dahje m\u00e1idnasa v\u00f5ro gillii.\" Table 7 : CHRF scores. (*) -trained without pre-trained weights, (**) -trained on + BT1(*) weights. BT -back-translation data set, FT -forward-translation data set, CHRF low -average CHRF score on low-resource language pairs (excluding ET-FI and FI-ET), bold -best CHRF score for a language pair.",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "+BT1",
"sec_num": null
},
{
"text": "https://soome-ugri.neurotolge.ee/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised statistical machine translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3632--3642",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1399"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised statistical machine transla- tion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 3632-3642, Brussels, Belgium. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Neural Approach to Language Variety Translation",
"authors": [
{
"first": "Marta",
"middle": ["R"],
"last": "Costa-juss\u00e0",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Santanu",
"middle": [],
"last": "Pal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifth Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "275--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta R Costa-juss\u00e0, Marcos Zampieri, and Santanu Pal. 2018. A Neural Approach to Language Vari- ety Translation. In Proceedings of the Fifth Work- shop on NLP for Similar Languages, Varieties and Dialects (VarDial 2018), pages 275-282, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Zeroresource neural machine translation with monolingual pivot data",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Currey",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation",
"volume": "",
"issue": "",
"pages": "99--107",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5610"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Currey and Kenneth Heafield. 2019. Zero- resource neural machine translation with monolin- gual pivot data. In Proceedings of the 3rd Work- shop on Neural Generation and Translation, pages 99-107, Hong Kong. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Building large monolingual dictionaries at the leipzig corpora collection: From 100 to 200 languages",
"authors": [
{
"first": "D",
"middle": [],
"last": "Goldhahn",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Eckart",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Quasthoff",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Goldhahn, T. Eckart, and U. Quasthoff. 2012. Building large monolingual dictionaries at the leipzig corpora collection: From 100 to 200 lan- guages. In Proceedings of the 8th International Lan- guage Resources and Evaluation (LREC'12).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Universal neural machine translation for extremely low resource languages",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "O",
"middle": [
"K"
],
"last": "Victor",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "344--354",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1032"
]
},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceed- ings of the 2018 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 344-354, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A template based approach for training nmt for lowresource uralic languages -a pilot with finnish",
"authors": [
{
"first": "Mika",
"middle": [],
"last": "H\u00e4m\u00e4l\u00e4inen",
"suffix": ""
},
{
"first": "Khalid",
"middle": [],
"last": "Alnajjar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence, ACAI 2019",
"volume": "",
"issue": "",
"pages": "520--525",
"other_ids": {
"DOI": [
"10.1145/3377713.3377801"
]
},
"num": null,
"urls": [],
"raw_text": "Mika H\u00e4m\u00e4l\u00e4inen and Khalid Alnajjar. 2019. A template based approach for training nmt for low- resource uralic languages -a pilot with finnish. In Proceedings of the 2019 2nd International Confer- ence on Algorithms, Computing and Artificial Intel- ligence, ACAI 2019, page 520-525, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Sockeye: A toolkit for neural machine translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Trivial transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "244--252",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6325"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial trans- fer learning for low-resource neural machine trans- lation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244- 252, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentence-Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Sentence- Piece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural Machine Translation into Language Varieties",
"authors": [
{
"first": "Surafel",
"middle": ["Melaku"],
"last": "Lakew",
"suffix": ""
},
{
"first": "Aliia",
"middle": [],
"last": "Erofeeva",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "156--164",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6316"
]
},
"num": null,
"urls": [],
"raw_text": "Surafel Melaku Lakew, Aliia Erofeeva, and Marcello Federico. 2018. Neural Machine Translation into Language Varieties. In Proceedings of the Third Conference on Machine Translation: Research Pa- pers, pages 156-164, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Phrase-based & neural unsupervised machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5039--5049",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1549"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine trans- lation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 5039-5049, Brussels, Belgium. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving multilingual neural machine translation for low-resource languages: French, English -Vietnamese",
"authors": [
{
"first": "Thi-Vinh",
"middle": [],
"last": "Ngo",
"suffix": ""
},
{
"first": "Phuong-Thai",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Khac-Quy",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "Le-Minh",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages",
"volume": "",
"issue": "",
"pages": "55--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thi-Vinh Ngo, Phuong-Thai Nguyen, Thanh-Le Ha, Khac-Quy Dinh, and Le-Minh Nguyen. 2020. Im- proving multilingual neural machine translation for low-resource languages: French, English -Viet- namese. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 55-61, Suzhou, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Universal Dependencies v2: An evergrowing multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4034--4043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "chrF: character n-gram F-score for automatic MT evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "392--395",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3049"
]
},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural Machine Translation for translating into Croatian and Serbian",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Marija",
"middle": [],
"last": "Brkic",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "102--113",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107, Alberto Poncelas, Marija Brkic, and Andy Way. 2020. Neural Machine Translation for translating into Croatian and Serbian. In Pro- ceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects, pages 102-113, Barcelona, Spain (Online). International Committee on Computational Linguistics (ICCL).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Training and Adapting Multilingual NMT for Less-resourced and Morphologically Rich Languages",
"authors": [
{
"first": "M\u0101rcis",
"middle": [],
"last": "Mat\u00afrikters",
"suffix": ""
},
{
"first": "Rihards",
"middle": [],
"last": "Pinnis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kri\u0161lauks",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mat\u00afRikters, M\u0101rcis Pinnis, and Rihards Kri\u0161lauks. 2018. Training and Adapting Multilingual NMT for Less-resourced and Morphologically Rich Lan- guages. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Linguistic input features improve neural machine translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "1",
"issue": "",
"pages": "83--91",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2209"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers, pages 83- 91, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multi-domain neural machine translation",
"authors": [
{
"first": "Sander",
"middle": [],
"last": "Tars",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Fishel",
"suffix": ""
}
],
"year": 2018,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sander Tars and M. Fishel. 2018. Multi-domain neural machine translation. ArXiv, abs/1805.02282.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "OPUS -parallel corpora for everyone",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 19th Annual Conference of the European Association for Machine Translation: Projects/Products",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2016. OPUS -parallel corpora for everyone. In Proceedings of the 19th Annual Con- ference of the European Association for Machine Translation: Projects/Products, Riga, Latvia. Baltic Journal of Modern Computing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Parallel data, tools and interfaces in opus",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30, pages 5998-6008. Cur- ran Associates, Inc.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving massively multilingual neural machine translation and zero-shot translation",
"authors": [
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1628--1639",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.148"
]
},
"num": null,
"urls": [],
"raw_text": "Biao Zhang, Philip Williams, Ivan Titov, and Rico Sen- nrich. 2020. Improving massively multilingual neu- ral machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1628- 1639, Online. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>Baselines</td><td colspan=\"3\">32.0 29.4 14.6</td><td>17.5</td><td>28.0</td><td>28.7</td><td>4.6</td><td>6.3</td><td>8.3</td><td>9.1</td><td>14.6</td></tr><tr><td>Multilingual (ML)</td><td colspan=\"3\">30.9 29.5 23.8</td><td>29.6</td><td>31.3</td><td>34.7</td><td>9.4</td><td>9.4</td><td>19.8</td><td>19.8</td><td>22.2</td></tr><tr><td>+ BT1</td><td colspan=\"3\">32.4 29.9 25.2</td><td>29.4</td><td>32.3</td><td>36.1</td><td>10.8</td><td>9.9</td><td>20.3</td><td>20.0</td><td>23.0</td></tr><tr><td>+ BT1(*)</td><td colspan=\"3\">30.1 29.1 24.5</td><td>30.3</td><td>32.3</td><td>36.2</td><td>11.1</td><td>10.5</td><td>21.4</td><td>20.0</td><td>23.3</td></tr><tr><td>+ BT1 + FT1</td><td colspan=\"3\">31.3 30.1 25.2</td><td>31.5</td><td>31.3</td><td>35.7</td><td>8.9</td><td>10.0</td><td>18.7</td><td>20.4</td><td>22.7</td></tr><tr><td>+ BT1 + FT1(*)</td><td colspan=\"3\">30.9 28.8 25.8</td><td>30.4</td><td>31.5</td><td>35.7</td><td>8.9</td><td>10.1</td><td>19.4</td><td>20.1</td><td>22.7</td></tr><tr><td>+ BT2</td><td colspan=\"3\">31.5 30.2 26.0</td><td>31.0</td><td>32.3</td><td>36.6</td><td>11.3</td><td>10.9</td><td>20.3</td><td>21.0</td><td>23.7</td></tr><tr><td>+ BT1 + BT2(*)</td><td colspan=\"3\">31.3 29.6 26.2</td><td>31.3</td><td>31.4</td><td>36.4</td><td>12.4</td><td>10.6</td><td>21.6</td><td>20.7</td><td>23.8</td></tr><tr><td>+ BT1 + BT2(**)</td><td colspan=\"3\">30.4 29.7 25.1</td><td>31.6</td><td>31.7</td><td>37.5</td><td>11.4</td><td>10.3</td><td>21.3</td><td>20.9</td><td>23.7</td></tr><tr><td colspan=\"4\">+ BT1&amp;2 + FT1&amp;2(*) 30.2 29.4 25.1</td><td>31.7</td><td>31.5</td><td>36.8</td><td>9.5</td><td>9.7</td><td>20.4</td><td>20.6</td><td>23.2</td></tr><tr><td>BT1</td><td colspan=\"3\">21.1 21.6 20.5</td><td>24.9</td><td>24.0</td><td>27.4</td><td>8.5</td><td>7.3</td><td>15.9</td><td>14.5</td><td>17.9</td></tr><tr><td>BT1(*)</td><td>8.4</td><td>8.6</td><td>18.7</td><td>19.9</td><td>11.8</td><td>13.4</td><td>6.9</td><td>5.3</td><td>12.9</td><td>9.2</td><td>12.3</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Monolingual data sets after preliminary cleaning (in sentences). et -Estonian, fi -Finnish, vro -V\u00f5ro, sme -North Saami, sma -South Saami.Model et-fi fi-et et-vro vro-et fi-sme sme-fi fi-sma sma-fi sme-sma sma-sme BLEU low",
"html": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF2": {
"content": "<table><tr><td>9 SacreBLEU</td><td>signature:</td><td>chrF2+lang.LANG-</td></tr><tr><td colspan=\"3\">LANG+numchars.6+numrefs.1+space.false+test.SET+</td></tr><tr><td colspan=\"3\">version.1.5.1 where LANG in {et,fi,vro,sme,sma}</td></tr></table>",
"type_str": "table",
"num": null,
"text": "it is clear that doing transfer learning for 8 SacreBLEU signature: BLEU+case.mixed+lang.LANG-LANG+numrefs.1+smooth.exp+test.SET+tok.13a+ version.1.4.14 where LANG in {et,fi,vro,sme,sma}",
"html": null
},
"TABREF3": {
"content": "<table><tr><td>Baseline</td><td>Vahts\u00f5 opimat\u00f5rjaali saamis\u00f5s oll t\u00e4hts\u00e4, et t\u00e4hts\u00e4 ol\u00f5s ka selge</td></tr><tr><td/><td>s\u00f5numiga t\u00f5sitsit luul\u00f5tuisi.</td></tr><tr><td>ML</td><td>Vahts\u00f5 nime v\u00e4ll\u00e4m\u00f5tlemisel oll t\u00e4hts\u00e4, et ol\u00f5si selge side</td></tr><tr><td/><td>paigap\u00e4\u00e4litse kogokunnaga ja et nimi avitas k\u00f5n\u00f5lda ettev\u00f5tte</td></tr><tr><td/><td>lugu.</td></tr><tr><td>+BT1+BT2(*)</td><td>Vahts\u00f5 nime v\u00e4ll\u00e4m\u00f5t\u00f5ld\u00f5n oll' t\u00e4hts\u00e4, et ol\u00f5s selge side</td></tr><tr><td/><td>paigap\u00e4\u00e4lidse kogokunnaga ja et nimi avitas k\u00f5n\u00f5lda ettev\u00f5tt\u00f5</td></tr><tr><td/><td>lugu.</td></tr><tr><td>Reference</td><td>Vahts\u00f5 nime v\u00e4ll\u00e4m\u00e4rkmise man oll' t\u00e4hts\u00e4, et ol\u00f5s selge k\u00f6\u00fcd\u00fcs</td></tr><tr><td/><td>paikligu kogokunnaga ja et nimi avitanuq jutustaq ettev\u00f5tmis\u00f5</td></tr><tr><td/><td>luku.</td></tr><tr><td>English</td><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "a) SourceUue nime v\u00e4ljam\u00f5tlemisel oli t\u00e4htis, et oleks selge side kohaliku kogukonnaga ja et nimi aitaks jutustada ettev\u00f5tte lugu.",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>Model</td><td colspan=\"2\">BLEU CHRF</td></tr><tr><td>ET-VRO baseline</td><td>14.6</td><td>0.393</td></tr><tr><td>ET-VRO on ET-FI weights</td><td>26.5</td><td>0.540</td></tr><tr><td>ML fine-tuned on ET-VRO</td><td>27.6</td><td>0.563</td></tr></table>",
"type_str": "table",
"num": null,
"text": "",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>Source</td><td>Samas tegutsevad kotkad kultuurmaastikul, mis t\u00e4hendab, et ka inimesel on t\u00e4htis roll selles, et neil h\u00e4sti l\u00e4heks.</td></tr><tr><td>Baseline</td><td>Samal aol om mi kotust\u00f5 per\u00e4nd\u00fcskultuurmaastikk\u00f5, mi\u00e4 t\u00e4hend\u00e4s, et inemisel t\u00e4hts\u00e4 om t\u00e4hts\u00e4, et n\u00e4il ol\u00f5-i t\u00e4hts\u00e4.</td></tr><tr><td>ML</td><td>Saman omma' kotka' kultuurmaastikul, mi\u00e4 t\u00e4hend\u00e4s, et ka inemisel om t\u00e4hts\u00e4 roll tan, et n\u00e4il h\u00e4ste l\u00e4\u00e4si.</td></tr><tr><td>+BT1+FT1(*)</td><td>Samal aol omma kotka kultuurmaastikul, mi\u00e4 t\u00e4hend\u00e4s, et ka inemisel om t\u00e4hts\u00e4 roll tuun, et n\u00e4il h\u00e4ste l\u00e4\u00e4si.</td></tr><tr><td>+BT1+BT2(*)</td><td>Saman toim\u00f5ndas\u00f5q kotkaq kultuurmaastikul, mi\u00e4 t\u00e4hend\u00e4s, et ka inemisel om t\u00e4hts\u00e4 roll tuun, et n\u00e4il h\u00e4ste l\u00e4\u00e4siq.</td></tr><tr><td>Reference</td><td>Saman toim\u00f5ndas\u00f5q kotkaq kultuurmaastikul, mi\u00e4 t\u00e4hend\u00e4s, et ka inemisel om t\u00e4hts\u00e4 roll tuu man, et n\u00e4il h\u00e4ste l\u00e4nn\u00fcq.</td></tr><tr><td>Source</td><td>Leevakul elab ametlikult pea 300 inimest.</td></tr><tr><td>Baseline</td><td>Leev\u00e4lapjo el\u00e4s t\u00e4hts\u00e4 p\u00e4\u00e4 inemist.</td></tr><tr><td>ML</td><td>Lev\u00e4kul el\u00e4s virallisesti p\u00e4\u00e4 300 inemist.</td></tr><tr><td>+BT1+FT1(*)</td><td>Leevakul el\u00e4s virallisesti p\u00e4\u00e4 300 inemist.</td></tr><tr><td>+BT1+BT2(*)</td><td>Leevakul el\u00e4s virallisesti pia 300 inemist.</td></tr><tr><td>Reference</td><td>Leevakul el\u00e4s kirjo perr\u00e4 pia 300 inemist.</td></tr><tr><td>B</td><td/></tr><tr><td>Source</td><td>A edesi ei n\u00e4eq t\u00fckk aigo tii p\u00e4\u00e4l\u00fcttegi v\u00f5rokiilset silti.</td></tr><tr><td>Baseline</td><td>Aga edasi ei n\u00e4inud t\u00fckk aega, tee\u00fchtegi saaklooma\u00e4ra.</td></tr><tr><td>ML</td><td>Aga edasi ei n\u00e4e t\u00fckk aega tee peal\u00fchtegi v\u00f5rukeelset ikka.</td></tr><tr><td>+BT1+FT1</td><td>Aga edasi ei n\u00e4e t\u00fckk aega tee peal\u00fchtegi v\u00f5rukeelset silti.</td></tr><tr><td>+BT1&amp;2+FT1&amp;2(*) Source</td><td>Nii om v\u00f5imalus telefon v\u00f5ita ka Uma Lehe telj\u00e4l.</td></tr><tr><td>Baseline</td><td>Nii on v\u00f5imalus telefongu ka Uma Pidoga mitmeti seotud.</td></tr><tr><td>ML</td><td>Nii on v\u00f5imalus telefon v\u00f5ita ka Uma Lehe telgis.</td></tr><tr><td>+BT1+FT1</td><td>Nii on v\u00f5imalus telefon v\u00f5ita ka Uma Lehe telgil.</td></tr><tr><td colspan=\"2\">+BT1&amp;2+FT1&amp;2(*) Nii on v\u00f5imalus telefon v\u00f5ita ka Uma Lehe telgil .</td></tr><tr><td>Reference</td><td>Nii on v\u00f5imalus telefon v\u00f5ita ka Uma Lehe tellijal.</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Aga edasi ei n\u00e4e t\u00fckk aega teel\u00fchtegi v\u00f5rukeelset silti. Reference Aga edasi ei n\u00e4e tee peal t\u00fckk aega\u00fchtegi v\u00f5rukeelset silti. Source\u00dctelt puult tul\u00f5 hoita vannu m\u00f5tsu, et n\u00e4 saanu ummi pessi kohegi ehit\u00e4. Baseline\u00dchis poolt tuleb hoida vanade metsade, et nad saanud m\u00e4rkimisv\u00e4\u00e4rset kuhugi ehitanud. ML\u00dchel pool tuleb hoida vana metsa, et nad saaksid oma pesu ehitada. +BT1+FT1\u00dchel pool tuleb hoida vana metsa, et nad saaksid oma pessi kuhugi ehitada. +BT1&2+FT1&2(*)\u00fchelt poolt tuleb hoida vanu metsasid, et nad saaksid oma pessi kuhugi ehitada. Reference\u00dchelt poolt tuleb hoida vanu metsi, et nad saaks oma pesasid kuhugi ehitada. Source Uutta nime\u00e4 keksiess\u00e4 oli t\u00e4rke\u00e4\u00e4, ett\u00e4 olisi selke\u00e4 yhteys paikalliseen yhteis\u00f6\u00f6n ja ett\u00e4 nimi auttaisi kertomaan yrityksen tarinan. Baseline M uhccin muitalin lei deh\u00e1la\u0161, ahte liv\u010d\u010dii\u010dielga oktavuohta b\u00e1ikk\u00e1la\u0161 servodahkii ja ahte namma veahkehiv\u010d\u010dii muitalit lihkastagaide. ML Odda namma lei deh\u00e1la\u0161, ahte liv\u010d\u010dii\u010dielga oktavuohta b\u00e1ikk\u00e1la\u0161 servo\u0161ii ja ahte namma veahkehiv\u010d\u010dii muitalit fitnodaga m\u00e1idnasiid. +BT1",
"html": null
},
"TABREF6": {
"content": "<table><tr><td>Model</td><td>et-fi</td><td colspan=\"4\">fi-et et-vro vro-et fi-sme sme-fi fi-sma sma-fi sme-sma sma-sme CHRF low</td></tr><tr><td>Baselines</td><td colspan=\"2\">0.602 0.573 0.353 0.390 0.577 0.577 0.282 0.274</td><td>0.330</td><td>0.301</td><td>0.385</td></tr><tr><td>Multilingual (ML)</td><td colspan=\"2\">0.595 0.578 0.510 0.540 0.631 0.650 0.376 0.348</td><td>0.546</td><td>0.525</td><td>0.516</td></tr><tr><td>+ BT1</td><td colspan=\"2\">0.600 0.583 0.531 0.551 0.639 0.659 0.408 0.348</td><td>0.557</td><td>0.531</td><td>0.528</td></tr><tr><td>+ BT1(*)</td><td colspan=\"2\">0.592 0.575 0.526 0.556 0.639 0.659 0.420 0.349</td><td>0.566</td><td>0.532</td><td>0.531</td></tr><tr><td>+ BT1 + FT1</td><td colspan=\"2\">0.595 0.584 0.526 0.558 0.634 0.654 0.369 0.353</td><td>0.544</td><td>0.533</td><td>0.525</td></tr><tr><td>+ BT1 + FT1(*)</td><td colspan=\"2\">0.596 0.575 0.537 0.558 0.636 0.656 0.392 0.349</td><td>0.551</td><td>0.531</td><td>0.525</td></tr><tr><td>+ BT2</td><td colspan=\"2\">0.598 0.585 0.535 0.560 0.640 0.663 0.418 0.358</td><td>0.563</td><td>0.537</td><td>0.534</td></tr><tr><td>+ BT1 + BT2(*)</td><td colspan=\"2\">0.595 0.583 0.539 0.565 0.636 0.663 0.436 0.364</td><td>0.569</td><td>0.539</td><td>0.539</td></tr><tr><td>+ BT1 + BT2(**)</td><td colspan=\"2\">0.594 0.579 0.530 0.563 0.640 0.665 0.423 0.354</td><td>0.567</td><td>0.536</td><td>0.535</td></tr><tr><td colspan=\"3\">+ BT1&amp;2 + FT1&amp;2(*) 0.592 0.578 0.530 0.565 0.634 0.662 0.399 0.350</td><td>0.564</td><td>0.539</td><td>0.530</td></tr><tr><td>BT1</td><td colspan=\"2\">0.526 0.515 0.480 0.523 0.582 0.607 0.371 0.296</td><td>0.524</td><td>0.488</td><td>0.484</td></tr><tr><td>BT1(*)</td><td colspan=\"2\">0.349 0.356 0.455 0.473 0.459 0.443 0.348 0.253</td><td>0.488</td><td>0.402</td><td>0.415</td></tr></table>",
"type_str": "table",
"num": null,
"text": "Translation examples. A: Estonian-V\u00f5ro, B: V\u00f5ro-Estonian, C: Finnish-North Saami. blueincorrect word, violet -incorrect form/case/tense or partially incorrect.",
"html": null
}
}
}
}