{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:07:59.019969Z"
},
"title": "Effective Bitext Extraction From Comparable Corpora Using a Combination of Three Different Approaches",
"authors": [
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Reykjavik University",
"location": {
"country": "Iceland"
}
},
"email": ""
},
{
"first": "Pintu",
"middle": [],
"last": "Lohar",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Hrafn",
"middle": [],
"last": "Loftsson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Reykjavik University",
"location": {
"country": "Iceland"
}
},
"email": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Parallel sentences extracted from comparable corpora can be useful to supplement parallel corpora when training machine translation (MT) systems. This is even more prominent in low-resource scenarios, where parallel corpora are scarce. In this paper, we present a system which uses three very different measures to identify and score parallel sentences from comparable corpora. We measure the accuracy of our methods in low-resource settings by comparing the results against manually curated test data for English-Icelandic, and by evaluating an MT system trained on the concatenation of the parallel data extracted by our approach and an existing data set. We show that the system is capable of extracting useful parallel sentences with high accuracy, and that the extracted pairs substantially increase translation quality of an MT system trained on the data, as measured by automatic evaluation metrics.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Parallel sentences extracted from comparable corpora can be useful to supplement parallel corpora when training machine translation (MT) systems. This is even more prominent in low-resource scenarios, where parallel corpora are scarce. In this paper, we present a system which uses three very different measures to identify and score parallel sentences from comparable corpora. We measure the accuracy of our methods in low-resource settings by comparing the results against manually curated test data for English-Icelandic, and by evaluating an MT system trained on the concatenation of the parallel data extracted by our approach and an existing data set. We show that the system is capable of extracting useful parallel sentences with high accuracy, and that the extracted pairs substantially increase translation quality of an MT system trained on the data, as measured by automatic evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "High quality MT systems rely on the availability of parallel data. In low-resource settings, where parallel data is scarce, unsupervised methods have been proposed, where only monolingual corpora are used for training (Artetxe et al., 2018; Lample et al., 2018) . Kim et al. (2020) show that supervised and semi-supervised approaches with only a small parallel corpus of 50K bilingual sentences consistently outperform the best unsupervised systems for a range of languages. However, there is a scarcity of parallel data, especially for languages with a low number of speakers. When parallel corpora are scarce, comparable corpora, which are far more common, can be used to supplement them. We will be working with the English-Icelandic language pair, for which no statistical or neural MT work had been published until last year (J\u00f3nsson et al., 2020) .",
"cite_spans": [
{
"start": 218,
"end": 240,
"text": "(Artetxe et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 241,
"end": 261,
"text": "Lample et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 264,
"end": 281,
"text": "Kim et al. (2020)",
"ref_id": "BIBREF21"
},
{
"start": 828,
"end": 850,
"text": "(J\u00f3nsson et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When parallel sentences are extracted from comparable corpora, potential parallel sentence candidates can usually come from anywhere in two comparable documents. This means that a potential parallel counterpart of one sentence in the source-language document can be any sentence in the target-language document. If the average number of sentences in a comparable document is n, the number of potential sentence pairs that have to be evaluated is n\u00b2. This quickly becomes overwhelming (as n increases) and so it is imperative to reduce the search space. Reducing the search space should ideally result in a list of at most k\u00d7n candidates, where k is a constant number of allowed candidates for each sentence in the comparable documents. To retrieve useful sentence pairs from this list, the pairs have to be scored and filtered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
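The search-space arithmetic above can be sketched numerically. This toy snippet uses values matching the 100K-sentence lists and the 10-candidate limit that appear later in the paper, but is otherwise purely illustrative:

```python
# Toy comparison (illustrative numbers, not results from the paper):
# scoring every cross-lingual pair costs n*n comparisons, while a
# retrieval step that keeps k candidates per sentence leaves only k*n.
n = 100_000  # sentences per language, as in the CompNews lists
k = 10       # candidates kept per sentence (FaDA's default)

brute_force_pairs = n * n   # all possible cross-lingual pairs
candidate_pairs = k * n     # after search-space reduction
reduction_factor = brute_force_pairs // candidate_pairs
```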
{
"text": "Our approach divides the problem into two main steps. We start by extracting parallel sentence candidates using an inverted index-based cross-lingual information retrieval (CLIR) tool called FaDA (Lohar et al., 2016) , which requires only a collection of documents in the two languages and a bilingual lexicon, with no need for an MT system. In the second step, we score the sentence candidates using two different scores, one based on contextualized embeddings and the other on high-precision word alignments. A binary classifier selects sentence pairs based on these scores.",
"cite_spans": [
{
"start": 195,
"end": 215,
"text": "(Lohar et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We test our approach in three different ways. We use two different test sets to measure precision, recall and F1-scores, and we also use our approach to extract parallel sentences from Wikipedia and use the resulting data as supplemental data for training NMT systems. The systems are then evaluated in terms of BLEU scores (Papineni et al., 2002 ) and compared to a baseline in order to give an indication of the usefulness of the supplemental data for NMT training.",
"cite_spans": [
{
"start": 324,
"end": 346,
"text": "(Papineni et al., 2002",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are fourfold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that the combination of three different measures -CLIR, and scores based on contextualized embeddings and high precision word alignments -can effectively extract parallel sentence pairs from comparable corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We introduce WAScore, a score based on high precision word alignments and show its usefulness in filtering parallel sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We publish two different test sets for measuring the effectiveness of parallel sentence extraction from comparable corpora for the English-Icelandic language pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We publish a set of parallel sentences extracted from Wikipedia, shown to be useful for MT training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Comparable corpora have been shown to be a useful source for mining parallel segments that can help improve MT quality (Wolk et al., 2016; Hangya and Fraser, 2019) . Afli et al. (2015) extract parallel data from a multimodal comparable corpus from the Euronews 1 and TED 2 web sites. Chu et al. (2015) extract parallel texts from the Chinese and Japanese Wikipedia and Ling et al. (2014) employ a crowdsourcing approach to extract parallel text from Twitter data in order to find the translations in tweets. The work of Karimi et al. (2018) describes the approach of extracting parallel sentences from English-Persian document-aligned Wikipedia entries. They use two MT systems to translate from Persian to English and the reverse and then use an IR system to measure the similarity of the translated sentences. Multilingual sentence embeddings have also been applied to the problem, obtaining state-of-the-art performance (Schwenk, 2018; Artetxe and Schwenk, 2019b) . Recently, Ramesh et al. (2021) describe the collection of parallel corpora for 11 Indic languages from diverse comparable corpora using LaBSE embeddings (Feng et al., 2020) , a language-agnostic BERT sentence embedding model trained and optimized to produce similar representations for bilingual sentence pairs that are translations of each other.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Wolk et al., 2016;",
"ref_id": "BIBREF42"
},
{
"start": 139,
"end": 163,
"text": "Hangya and Fraser, 2019)",
"ref_id": "BIBREF14"
},
{
"start": 166,
"end": 184,
"text": "Afli et al. (2015)",
"ref_id": "BIBREF0"
},
{
"start": 284,
"end": 301,
"text": "Chu et al. (2015)",
"ref_id": "BIBREF10"
},
{
"start": 520,
"end": 540,
"text": "Karimi et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 923,
"end": 938,
"text": "(Schwenk, 2018;",
"ref_id": "BIBREF33"
},
{
"start": 939,
"end": 966,
"text": "Artetxe and Schwenk, 2019b)",
"ref_id": "BIBREF4"
},
{
"start": 979,
"end": 999,
"text": "Ramesh et al. (2021)",
"ref_id": null
},
{
"start": 1122,
"end": 1141,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Word alignments have previously been used for parallel sentence extraction. Zari\u0146a et al. (2015) identify parallel sentences using word alignments, experimenting with five different alignment-based scores. They presume that if a pair of sentences is equivalent in two languages, there should be many word alignments between the sentences, and non-parallel sentences should have few or no word alignments. Stymne et al. (2013) use alignment-based heuristics to filter out sentence pairs. Lu et al. (2020) use a word alignment based translation score as a part of their scoring ensemble for filtering a noisy parallel corpus. Their translation score is a simplified version of the translation score introduced by Khadivi and Ney (2005) . Azpeitia et al. (2017) and Azpeitia and Garcia (2018) describe a method using CLIR and lexical translations obtained using word alignments, with a simple overlap metric. They obtained the highest results for the BUCC 2017 and BUCC 2018 shared tasks.",
"cite_spans": [
{
"start": 406,
"end": 426,
"text": "Stymne et al. (2013)",
"ref_id": "BIBREF38"
},
{
"start": 712,
"end": 734,
"text": "Khadivi and Ney (2005)",
"ref_id": "BIBREF19"
},
{
"start": 737,
"end": 759,
"text": "Azpeitia et al. (2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our method uses an IR system to create a list of alignment candidates, thus reducing the search space. It then takes advantage of both LaBSE embeddings and word alignments. Our word alignment score is calculated by a simpler formula than most previous work, but relies on high-precision alignments. It has been shown that such alignments can be obtained with the CombAlign ensemble method (Steingr\u00edmsson et al., 2021) . A binary classifier is finally used to select acceptable sentence pairs.",
"cite_spans": [
{
"start": 389,
"end": 417,
"text": "(Steingr\u00edmsson et al., 2021)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For the language pair we are working with, English-Icelandic, no test sets have previously been made available for parallel sentence extraction from comparable corpora. Therefore, we have to build test sets in order to be able to evaluate our approach. We prepare the following data sets for our experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 CompNews: development and test sets using available news data,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 CompWiki: a manually curated small test set for Wikipedia data,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 CompTrain: training data for our logistic regression classifier, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 CompLex: an English-Icelandic lexicon for word translation in an IR system. All the data sets are published with open licenses on GitHub and in a CLARIN repository.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We built development and test sets for identifying parallel sentences in news corpora, in a similar style to the test sets compiled for the BUCC 2017 shared task on parallel sentence identification (Zweigenbaum et al., 2016), i.e. consisting of a small set of known parallel sentences, as well as a larger list of randomly sampled sentences from monolingual corpora in the same domain, but with no known parallel pairs. The parallel sentences used are the 2000 English-Icelandic sentence pairs made available as development data for the news translation task in WMT 2021. 3 The dev set for WMT 2021 contains 1000 sentences in each direction. The non-parallel sentences were randomly selected from Newscrawl 2018 and from 2018 news texts sampled from the Icelandic Gigaword Corpus (Steingr\u00edmsson et al., 2018) .",
"cite_spans": [
{
"start": 570,
"end": 571,
"text": "3",
"ref_id": null
},
{
"start": 773,
"end": 801,
"text": "(Steingr\u00edmsson et al., 2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CompNews",
"sec_num": "3.1"
},
{
"text": "The texts were split into sentences. This resulted in two lists of 100,000 sentences, one English and one Icelandic, with 2% of the sentences in each list known to have a corresponding sentence in the other language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CompNews",
"sec_num": "3.1"
},
{
"text": "We made a 40/60 split, taking care that the true parallel sentence pairs were equally distributed between the splits. The smaller part was used as a development set and the larger part as a test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CompNews",
"sec_num": "3.1"
},
{
"text": "We randomly selected 15 Wikipedia articles available in both Icelandic and English. The texts were split into sentences and the CLIR tool (see Section 4.1) was used to obtain translation candidates for each sentence. These sentence pairs were manually evaluated and marked as parallel, partially parallel or non-parallel. Out of a total of 10,098 sentences, 86 were marked as parallel and 421 as partially parallel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CompWiki",
"sec_num": "3.2"
},
{
"text": "In order to gain some information on the kind of scores the two scoring methods give to non-parallel data on the one hand, and parallel data on the other, we compiled a dataset with 50,000 randomly sampled pairs from the two monolingual corpora used for CompNews and added parallel sentences from the English-Icelandic ParIce corpus (Barkarson and Steingr\u00edmsson, 2019). We selected 2,500 random sentence pairs from a development set published with the corpus and kept only those pairs in which the sentences have a minimum length of six tokens. This resulted in 1,743 sentence pairs, marked as positive data for a classifier. The resulting 51,743 sentence pairs are scored in the same way we score the parallel sentence candidates (see Section 4.2) and used to train the classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CompTrain",
"sec_num": "3.3"
},
{
"text": "FaDA, the cross-lingual information retrieval tool we use to obtain parallel pair candidates, requires a bilingual lexicon with lexical translation probabilities of words. It uses the lexicon to translate the query terms in the source language and searches for these translated terms in the target-language index to retrieve the equivalent candidate sentences in the target language. It is described in more detail in Section 4.1. As such a lexicon did not exist, we compiled it using a combination of approaches. We collected data that was available online, an English-Icelandic dictionary from Apertium (Brandt et al., 2011) , Wiktionary entries and Wikipedia article titles. We obtained permission to use the bilingual ISLEX-dictionaries (\u00dalfarsd\u00f3ttir, 2014), which go from Icelandic to five Nordic languages (Danish, Faroese, Finnish, Norwegian and Swedish) and used these to pivot to English using the aforementioned open dictionaries. We created word lists using word alignments to extract pairs from the ParIce corpus after lemmatizing both languages using SpaCy 4 for English and Nefnir (Ing\u00f3lfsd\u00f3ttir et al., 2019) and DIM (Bjarnad\u00f3ttir et al., 2019) for Icelandic. We selected the most likely English equivalents for a list of Icelandic words using cross-lingual word embedding models based on Vecalign 5 (Thompson and Koehn, 2019) . In addition, we translated both Icelandic words and words from the Nordic ISLEX-dictionaries using models from OPUS-MT (Tiedemann and Thottingal, 2020) . This resulted in a long list of word translation candidates, which we then filtered using a threshold requiring that each candidate be suggested by multiple sources. For each source word, we counted how many sources suggested each candidate and used the count to assign likelihood scores to the translations. This resulted in two files, an English-Icelandic lexicon with 140K entries and an Icelandic-English lexicon with 152K entries.",
"cite_spans": [
{
"start": 602,
"end": 623,
"text": "(Brandt et al., 2011)",
"ref_id": "BIBREF9"
},
{
"start": 1129,
"end": 1156,
"text": "(Bjarnad\u00f3ttir et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 1312,
"end": 1338,
"text": "(Thompson and Koehn, 2019)",
"ref_id": "BIBREF39"
},
{
"start": 1461,
"end": 1493,
"text": "(Tiedemann and Thottingal, 2020)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CompLex",
"sec_num": "3.4"
},
{
"text": "We make use of an open source CLIR-based bilingual document alignment tool called FaDA (Lohar et al., 2016) in the first step of the alignment process. This tool is capable of aligning bilingual documents without the help of any MT system. In contrast, MT-based alignment systems need additional time for translating all the source-language sentences into the target language. FaDA therefore reduces the computational overhead by skipping the translation process. As FaDA performs alignments at the document level, we consider each sentence separately and store it in its own document. Each document in our corpus therefore contains a single line of text. We then use the following functionalities of FaDA in our experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Using CLIR",
"sec_num": "4.1"
},
{
"text": "(i) Indexing: First, we index both the source-language and the target-language documents, (ii) Pseudo-query construction: Second, we construct a pseudo-query 6 from each source-language document using the term-selection procedure shown in Equation (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Using CLIR",
"sec_num": "4.1"
},
{
"text": "\u03c4(t, d) = \u03bb \u00b7 tf(t, d) / len(d) + (1 \u2212 \u03bb) \u00b7 log(N / df(t)) (1) tf(t, d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Using CLIR",
"sec_num": "4.1"
},
{
"text": ") refers to the term frequency of a term t in a document d. len(d) denotes the length of d, and N and df (t) represents the total number of documents and the number of documents in which t occurs, respectively. \u03c4 (t, d) denotes the term-selection score which is a linear combination of the normalised term frequency of a term t in d, and the inverse document frequency (idf) of the term. The parameter \u03bb controls the relative importance of tf and idf . We recommend the work of Lohar et al. (2016) for more details on pseudo-query construction. (iii) Word translation: We then translate all the pseudo-query terms into the target-language with an English-Icelandic dictionary and search the translated query terms in the target-language index, (iv) Document retrieval: Finally, we retrieve the top-n 7 target-language documents that are semantically equivalent to the source-language documents according to the IR-based retrieval.",
"cite_spans": [
{
"start": 478,
"end": 497,
"text": "Lohar et al. (2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Alignment Using CLIR",
"sec_num": "4.1"
},
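A minimal sketch of the term-selection score in Equation (1). The function name and the λ default below are our own illustrative choices, not taken from FaDA's implementation:

```python
import math

def term_score(tf, doc_len, df, n_docs, lam=0.5):
    """tau(t, d) = lam * tf(t, d)/len(d) + (1 - lam) * log(N/df(t)):
    a linear combination of normalised term frequency and inverse
    document frequency. lam = 0.5 is an assumed default, not FaDA's."""
    return lam * tf / doc_len + (1 - lam) * math.log(n_docs / df)

# Rarer terms score higher and are preferred for the pseudo-query.
common = term_score(tf=2, doc_len=20, df=900, n_docs=1000)
rare = term_score(tf=1, doc_len=20, df=3, n_docs=1000)
```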
{
"text": "In the first step, the application of FaDA provides 10 (the default value) target-language sentence candidates for each source-language sentence. This is done in both translation directions. We assume that most truly parallel sentences would be found in both directions, and thus we create a subset of the FaDA outputs that contains the intersection of the candidate lists for the two directions. In order to test this hypothesis, we also create a union of both outputs when working with one of the test sets, CompNews.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Scoring",
"sec_num": "4.2"
},
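The intersection/union construction over the two FaDA runs can be sketched with plain set operations; the sentence IDs below are invented for illustration:

```python
# Candidate pairs from the two FaDA directions, as (en_id, is_id) and
# (is_id, en_id) tuples. The IDs are toy placeholders.
en_is = {("e1", "i7"), ("e2", "i3"), ("e4", "i9")}
is_en = {("i3", "e2"), ("i9", "e4"), ("i5", "e6")}

# Normalise the is->en pairs to (en, is) order before combining.
is_en_norm = {(e, i) for i, e in is_en}

intersection = en_is & is_en_norm  # higher precision
union = en_is | is_en_norm         # higher recall
```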
{
"text": "We score our candidate lists using two methods, LaBSE (Feng et al., 2020) , and WAScore, a word alignment-based score of our own devising. Feng et al. (2020) show that LaBSE gives good results on the BUCC mining task when working with high-resource languages. However, the accuracy is reduced when working with less-resourced languages. In order to increase the accuracy of our extraction method, we use it together with another scoring mechanism that takes a very different approach. WAScore is calculated by collecting high precision word alignments using CombAlign (Steingr\u00edmsson et al., 2021) . CombAlign uses a set of word alignment tools to perform the alignment and has settings to aim for high precision or high recall, taking advantage of the fact that different alignment tools tend to make different guesses unless the alignment probabilities are high. We aim for high precision, thus removing most alignments that are not very likely to be correct. As this can be achieved with CombAlign, WAScore is an effective mechanism for measuring parallelism. CombAlign uses the following tools in our experiment: (i) AWESoME (Dou and Neubig, 2021), (ii) eflomal (\u00d6stling and Tiedemann, 2016) , and (iii) fast_align (Dyer et al., 2013) . WAScore is calculated for each sentence pair using Equation 2:",
"cite_spans": [
{
"start": 54,
"end": 73,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 137,
"end": 155,
"text": "Feng et al. (2020)",
"ref_id": "BIBREF13"
},
{
"start": 565,
"end": 593,
"text": "(Steingr\u00edmsson et al., 2021)",
"ref_id": "BIBREF37"
},
{
"start": 1169,
"end": 1198,
"text": "(\u00d6stling and Tiedemann, 2016)",
"ref_id": "BIBREF29"
},
{
"start": 1222,
"end": 1241,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Scoring",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(s_a / s) * (t_a / t)",
"eq_num": "(2)"
}
],
"section": "Sentence Scoring",
"sec_num": "4.2"
},
{
"text": "where s is the number of words in the source sentence and s_a is the number of source words that are aligned to some word in the target sentence; t is the number of words in the target sentence, and t_a is the number of target words that are aligned to some word in the source sentence. With a set of highly likely alignments for each sentence pair, WAScore tends to favour sentences of similar length, as a much longer sentence on one side usually has proportionally few alignment edges on that side, which lowers the score substantially. In contrast, if a shorter sentence on one side has all tokens aligned to a longer sentence on the other side, it can still receive a reasonable score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Scoring",
"sec_num": "4.2"
},
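Equation (2) can be sketched directly from the definitions above; the alignment links below are toy data, and the helper name `wascore` is ours:

```python
def wascore(links, src_len, tgt_len):
    """(s_a / s) * (t_a / t): s_a and t_a count the distinct aligned
    source and target tokens among the high-precision alignment links,
    given as (source_index, target_index) pairs."""
    aligned_src = {i for i, _ in links}
    aligned_tgt = {j for _, j in links}
    return (len(aligned_src) / src_len) * (len(aligned_tgt) / tgt_len)

# Three distinct source tokens aligned to four distinct target tokens
# in a 10-token / 10-token pair: (3/10) * (4/10) = 0.12.
score = wascore([(0, 0), (0, 1), (2, 4), (5, 7)], 10, 10)
```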
{
"text": "Table 1: Set / Pr. / Rc. / F1: Intersection 0.95 / 0.80 / 0.87; Union 0.92 / 0.86 / 0.86. Such pairs are often partially parallel, and using the CompWiki test set (Section 5.2) we see that our approach is suitable for extracting partially parallel pairs as well as truly parallel ones. Finally, we use logistic regression to classify whether a sentence pair is parallel or not. All sentence pairs accepted by the classifier are labelled as parallel. The classifier is trained on the CompTrain training set, detailed earlier in Section 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Set",
"sec_num": null
},
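The final decision step is a logistic regression over the two scores per candidate pair. A hand-wired sketch follows; the weights and bias are invented placeholders, whereas the paper learns them from the CompTrain set:

```python
import math

def accept(labse_score, wa_score, weights=(6.0, 5.0), bias=-4.0):
    """Logistic-regression decision over the two sentence-pair scores.
    The weights and bias are illustrative placeholders, not the
    parameters learned from CompTrain."""
    z = weights[0] * labse_score + weights[1] * wa_score + bias
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

likely_parallel = accept(0.9, 0.5)      # high on both scores
likely_nonparallel = accept(0.1, 0.0)   # low on both scores
```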
{
"text": "We evaluate our system by calculating precision, recall and F1-scores using our (i) CompNews test set and (ii) CompWiki test set; and (iii) by training, testing and calculating BLEU scores for NMT systems, both with and without parallel sentences extracted from all Wikipedia articles that are available in both English and Icelandic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "The first experiment is on the CompNews test data, with the simple goal of extracting as many parallel sentence pairs as can be found from the two lists of 100K sentences in English and Icelandic. After running FaDA we obtain 10 candidates for each of the 100K sentences in each language. We create two different candidate sets, one by taking an intersection of both directions, en\u2192is and is\u2192en, and the other by taking a union of the two directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing on News Data",
"sec_num": "5.1"
},
{
"text": "The intersection set contains 135K sentence pairs and an inspection of the set revealed that it included 1,693 of the total 2,000 known parallel sentence pairs in the data. The union set, on the other hand, contained a total of 1.86 million pairs and 1,871 of the 2,000 correct sentence pairs. We calculate LaBSE scores and WAScore for each of the candidates and apply our logistic regression classifier to the scores. The F1-scores for both approaches were similar, but using the union data set obtains higher recall while using the intersection data obtains better precision. Table 1 shows the final results for the CompNews test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 573,
"end": 580,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Testing on News Data",
"sec_num": "5.1"
},
{
"text": "The preparation of CompWiki was described in Section 3.2. It contains texts from 15 Wikipedia article pairs with a total of 10,098 sentence pairs. We score the sentences in the same way as discussed before, using LaBSE and WAScore, and run our classifier on the scores. Our classifier deems 200 sentence pairs parallel. Of these, 77 are marked parallel in the test set, 90 are marked partially parallel and 33 are marked non-parallel. As can be seen in Table 2, our method achieves high recall on the sentences marked parallel, and 84% of our system's output is marked either parallel or partially parallel.",
"cite_spans": [],
"ref_spans": [
{
"start": 460,
"end": 467,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Testing on Wiki Data",
"sec_num": "5.2"
},
{
"text": "We collect all texts from Wikipedia articles that are linked and available both in English and Icelandic. The collection contains 412,442 Icelandic sentences and 4,259,150 English sentences from 35,690 article pairs. In our setup, FaDA searches for the parallel candidates in the paired documents. The candidate pairs are then scored as before and classified as parallel or non-parallel. Our system yields 55,744 sentence pairs that are classified as parallel sentences. There have been previous efforts to extract parallel sentences from Wikipedia. One of the largest such efforts is the WikiMatrix project (Schwenk et al., 2021 ) that mined parallel sentences in 1,620 language pairs. When we compare the en-is language pair in WikiMatrix to the output of our system, the first obvious difference is that the WikiMatrix dataset has a lot more data, 314K sentence pairs compared to our 56K. To compare the usefulness of the datasets, we trained an NMT system using Marian MT (Junczys-Dowmunt et al., 2018) in one direction, is\u2192en, on 50K sentence pairs randomly sampled from the ParIce corpus and compared it to a system where WikiMatrix was added as supplemental data, and to a system where the output of our approach was used to supplement the ParIce data, using the same hyperparameters.",
"cite_spans": [
{
"start": 621,
"end": 642,
"text": "(Schwenk et al., 2021",
"ref_id": "BIBREF34"
},
{
"start": 990,
"end": 1020,
"text": "(Junczys-Dowmunt et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Sentence Extraction and MT Training",
"sec_num": "5.3"
},
{
"text": "We compare BLEU scores for the different setups on a combination of three test sets (Barkarson and Steingr\u00edmsson, 2020) , as well as on each of the test sets individually: TestEEA, containing sentence pairs from European Economic Area regulatory documents; TestEMA, containing sentence pairs from EMA drug descriptions; and TestOS, containing sentence pairs from OpenSubtitles. TestEEA and TestEMA are extracted from rather specialized texts and generally have long sentences, while TestOS is from a rather open domain and tends to have shorter sentences. The test sets are used as filtered by J\u00f3nsson et al. (2020) . All the sentence pairs in the test sets have been manually checked for correctness.",
"cite_spans": [
{
"start": 84,
"end": 119,
"text": "(Barkarson and Steingr\u00edmsson, 2020)",
"ref_id": "BIBREF7"
},
{
"start": 594,
"end": 615,
"text": "J\u00f3nsson et al. (2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Sentence Extraction and MT Training",
"sec_num": "5.3"
},
{
"text": "The fact that each of these three test sets is domain-specific and that our NMT systems are not trained specifically on data from these domains, together with how small the training data sets are, results in low BLEU scores. But while the BLEU scores are quite low, the effect of our approach is evident.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Sentence Extraction and MT Training",
"sec_num": "5.3"
},
{
"text": "We can see from Table 3 that when the WikiMatrix data is added to the 50K parallel sentences, the translation system trained on this augmented data set produces significantly lower BLEU scores than the other two systems on two of the test sets (TestEEA and TestEMA). However, it obtains higher BLEU scores than the baseline system (i.e., the system trained with only the 50K data) on the third test set (TestOS). In contrast, the system trained on the concatenation of the 50K sentence pairs and the data obtained from our approach significantly improves the BLEU scores on all the test sets, even though the number of sentence pairs in our data is less than 20% of the number of sentence pairs in WikiMatrix. This is most likely due to noise in WikiMatrix, as it has been shown that NMT is sensitive to noise in the training data (Khayrallah and Koehn, 2018) .",
"cite_spans": [
{
"start": 851,
"end": 879,
"text": "(Khayrallah and Koehn, 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Parallel Sentence Extraction and MT Training",
"sec_num": "5.3"
},
{
"text": "Upon manual inspection of our data we see that our classifier accepted some sentence pairs even though they have a very low WAScore. We therefore train a number of NMT models using our data but apply thresholds for WAScore. As seen in Figure 2, the BLEU score rises when a low threshold is set, and then fluctuates when the threshold is raised, reaching the highest BLEU score for our combined test sets at a WAScore threshold of around 0.14. A WAScore of 0.14 means that if we have a pair of sentences containing ten tokens each, three tokens in one sentences align with four tokens in the other. If there are fewer alignments the sentence pair will not be accepted. At this threshold level we extract 34K parallel pairs to use for training. With further threshold filtering, we lose more beneficial data than detrimental data, and the BLEU score starts slipping down. This is an indicator of the usefulness of this scoring mechanism for MT train-ing, showing that the score correlates with sentence pair parallelism, raising the BLEU score when it is used for filtering, and keeping it raised even though supplemental training data is reduced.",
"cite_spans": [],
"ref_spans": [
{
"start": 235,
"end": 241,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parallel Sentence Extraction and MT Training",
"sec_num": "5.3"
},
{
"text": "All of our data sets, for training and testing are available on Github, as well as a description of MarianMT training setup 8 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Sentence Extraction and MT Training",
"sec_num": "5.3"
},
{
"text": "We have shown that our method, combining crosslingual information extraction, contextualized embeddings and word alignments, is efficient at finding parallel segments in comparable corpora. Furthermore we introduce WAScore, a metric of translational equivalence based on high-precision word alignments, and show that as well as being a useful part of a binary classifier, it can be used effectively to filter out detrimental segments from parallel corpora. Finally, we publish two new test sets for extracting parallel sentences from comparable corpora, an automatically generated English-Icelandic lexicon with probability scores and a set of automatically extracted parallel segments that we show are useful for training MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future work",
"sec_num": "6"
},
{
"text": "When testing on the CompWiki test set we saw that while our method is efficient in finding parallel segments in comparable corpora, it also selects partially parallel segments. Although these segments seem to have information useful for training MT systems, it is difficult to know to what extent they are useful and when they may become detrimental. For this reason, we plan to study these kinds of data further and investigate how they affect translation quality of an NMT system trained on it. Based on that, we want to explore more sophisticated ways to segment or concatenate alignment candidates in order to be able to build a data set that only contains segment pairs that are useful for training MT systems. There is previous work on parallel fragment extraction using word alignments (Yeong et al., 2019) , and we will use their approach as a baseline to proceed further.",
"cite_spans": [
{
"start": 793,
"end": 813,
"text": "(Yeong et al., 2019)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future work",
"sec_num": "6"
},
{
"text": "While the combination of the two scores used to measure the quality of the sentence pairs resulted in a list of sentence pairs that we show are useful for MT training, it still contains pairs that are detrimental, as shown by the simple filtering based on WAScore threshold. Other parallel sentence pairs may also remain to be found in the Wikipedia data. In order to improve our approach, more scores could be added to our classifier. While we opted to use raw LaBSE cosine similarity scores, shown by (Feng et al., 2020) to be more accurate than cosine similarity scores from other models, the margin-based ratio score proposed by Artetxe and Schwenk (2019a) has also been shown to be very effective for this task. Other scores to consider could include BLEU or ChrF (Popovi\u0107, 2015) , although they need reasonably good MT systems to be useful, margin-based cosine distance (Artetxe and Schwenk, 2019a) , or Mahalanobis distance (Mahalanobis, 1936) as described in Littell et al. (2018) . Doing an ablation study on the scores could help determine which are the most useful. Working with these scores, a comparison of applying different classifiers while using the same scoring mechanisms may be helpful. It is also to be noted that we extracted only 10 target-language candidate pairs in the first step, which is the default value used in FaDA as it gave optimal performance in their work. It also has the benefit of reducing the computational complexity in the next steps. However, we also plan to explore other higher values of candidate extraction in future and to investigate how it affects the overall system performance. Finally, we plan to conduct our experiments on other language pairs.",
"cite_spans": [
{
"start": 503,
"end": 522,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 633,
"end": 660,
"text": "Artetxe and Schwenk (2019a)",
"ref_id": "BIBREF3"
},
{
"start": 769,
"end": 784,
"text": "(Popovi\u0107, 2015)",
"ref_id": "BIBREF31"
},
{
"start": 876,
"end": 904,
"text": "(Artetxe and Schwenk, 2019a)",
"ref_id": "BIBREF3"
},
{
"start": 967,
"end": 988,
"text": "Littell et al. (2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future work",
"sec_num": "6"
},
{
"text": "https://www.euronews.com/ 2 https://www.ted.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at: http://statmt.org/wmt21/ translation-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://spacy.io 5 https://github.com/thompsonb/vecalign",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A pseudo-query is the modified version of the original query to improve the ranking of document retrieval. The terms in a pseudo-query are considered to be suitably representative of a document 7 Note that n = 10 is the default value of n in FaDA. This means that the tool retrieves the top 10 candidate targetlanguage documents by default.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by the Language Technology Programme for Icelandic 2019-2023, funded by the Icelandic government, and by the ADAPT Centre for Digital Content Technology which is funded under the Science Foundation Ireland (SFI) Research Centres Programme (Grant No. 13/RC/2106) and is co-funded under the European Regional Development Fund.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Building and using multimodal comparable corpora for machine translation",
"authors": [
{
"first": "Haithem",
"middle": [],
"last": "Afli",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2015,
"venue": "Natural Language Engineering",
"volume": "22",
"issue": "4",
"pages": "603--625",
"other_ids": {
"DOI": [
"10.1017/S1351324916000152"
]
},
"num": null,
"urls": [],
"raw_text": "Haithem Afli, Lo\u00efc Barrault, and Holger Schwenk. 2015. Building and using multimodal comparable corpora for machine translation. Natural Language Engineering, 22(4):603 -625.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Extracting parallel sentences from comparable corpora with stacc variants",
"authors": [
{
"first": "Thierry",
"middle": [],
"last": "Etchegoyhen",
"suffix": ""
},
{
"first": "Andoni",
"middle": [],
"last": "Azpeitia",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Mart\u00ednez Garcia",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thierry Etchegoyhen Andoni Azpeitia and Eva Mart\u00ednez Garcia. 2018. Extracting paral- lel sentences from comparable corpora with stacc variants. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Evaluation (LREC 2018), Paris, France.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised Statistical Machine Translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3632--3642",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1399"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Unsupervised Statistical Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632-3642, Brussels, Belgium.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Marginbased Parallel Corpus Mining with Multilingual Sentence Embeddings",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3197--3203",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1309"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019a. Margin- based Parallel Corpus Mining with Multilingual Sen- tence Embeddings. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 3197-3203, Florence, Italy.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "597--610",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00288"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019b. Mas- sively Multilingual Sentence Embeddings for Zero- Shot Cross-Lingual Transfer and Beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597-610.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Weighted set-theoretic alignment of comparable sentences",
"authors": [
{
"first": "Andoni",
"middle": [],
"last": "Azpeitia",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Etchegoyhen",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Mart\u00ednez Garcia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 10th Workshop on Building and Using Comparable Corpora",
"volume": "",
"issue": "",
"pages": "41--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andoni Azpeitia, Thierry Etchegoyhen, and Eva Mart\u00ednez Garcia. 2017. Weighted set-theoretic alignment of comparable sentences. In Proceedings of the 10th Workshop on Building and Using Com- parable Corpora, pages 41-45, Vancouver, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Compiling and Filtering ParIce: An English-Icelandic Parallel Corpus",
"authors": [
{
"first": "Starka\u00f0ur",
"middle": [],
"last": "Barkarson",
"suffix": ""
},
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "140--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Starka\u00f0ur Barkarson and Stein\u00fe\u00f3r Steingr\u00edmsson. 2019. Compiling and Filtering ParIce: An English- Icelandic Parallel Corpus. In Proceedings of the 22nd Nordic Conference on Computational Linguis- tics, pages 140-145, Turku, Finland.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "ParIce dev/test/train splits 20.05. CLARIN-IS",
"authors": [
{
"first": "Starka\u00f0ur",
"middle": [],
"last": "Barkarson",
"suffix": ""
},
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Starka\u00f0ur Barkarson and Stein\u00fe\u00f3r Steingr\u00edmsson. 2020. ParIce dev/test/train splits 20.05. CLARIN-IS.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "DIM: The Database of Icelandic Morphology",
"authors": [
{
"first": "Krist\u00edn",
"middle": [],
"last": "Bjarnad\u00f3ttir",
"suffix": ""
},
{
"first": "Krist\u00edn",
"middle": [
"Ingibj\u00f6rg"
],
"last": "Hlynsd\u00f3ttir",
"suffix": ""
},
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "146--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krist\u00edn Bjarnad\u00f3ttir, Krist\u00edn Ingibj\u00f6rg Hlynsd\u00f3ttir, and Stein\u00fe\u00f3r Steingr\u00edmsson. 2019. DIM: The Database of Icelandic Morphology. In Proceedings of the 22nd Nordic Conference on Computational Linguis- tics, pages 146-154, Turku, Finland.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Apertium-IceNLP: A rule-based Icelandic to English machine translation system",
"authors": [
{
"first": "Martha",
"middle": [
"D\u00eds"
],
"last": "Brandt",
"suffix": ""
},
{
"first": "Hrafn",
"middle": [],
"last": "Loftsson",
"suffix": ""
},
{
"first": "Hlynur",
"middle": [],
"last": "Sigur\u00fe\u00f3rsson",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"M"
],
"last": "Tyers",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 15th Annual conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "217--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martha D\u00eds Brandt, Hrafn Loftsson, Hlynur Sigur\u00fe\u00f3rs- son, and Francis M. Tyers. 2011. Apertium-IceNLP: A rule-based Icelandic to English machine transla- tion system. In Proceedings of the 15th Annual conference of the European Association for Machine Translation, pages 217-224, Leuven, Belgium.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Integrated Parallel Sentence and Fragment Extraction from Comparable Corpora: A Case Study on Chinese-Japanese Wikipedia",
"authors": [
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2015,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2833089"
]
},
"num": null,
"urls": [],
"raw_text": "Chenhui Chu, Toshiaki Nakazawa, and Sadao Kuro- hashi. 2015. Integrated Parallel Sentence and Frag- ment Extraction from Comparable Corpora: A Case Study on Chinese-Japanese Wikipedia. ACM Trans- actions on Asian and Low-Resource Language Infor- mation Processing, 15(2).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Word Alignment by Fine-tuning Embeddings on Parallel Corpora",
"authors": [
{
"first": "Zi-Yi",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "2112--2128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zi-Yi Dou and Graham Neubig. 2021. Word Align- ment by Fine-tuning Embeddings on Parallel Cor- pora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computa- tional Linguistics: Main Volume, pages 2112-2128, Online.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Simple, Fast, and Effective Reparameterization of IBM Model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A Simple, Fast, and Effective Reparameter- ization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 644-648, At- lanta, Georgia.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Language-agnostic bert sentence embedding. ArXiv, abs",
"authors": [
{
"first": "Fangxiaoyu",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yin-Fei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"Matthew"
],
"last": "Cer",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangxiaoyu Feng, Yin-Fei Yang, Daniel Matthew Cer, N. Arivazhagan, and Wei Wang. 2020. Language-agnostic bert sentence embedding. ArXiv, abs/2007.01852.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised Parallel Sentence Extraction with Parallel Segment Detection Helps Machine Translation",
"authors": [
{
"first": "Viktor",
"middle": [],
"last": "Hangya",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1224--1234",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1118"
]
},
"num": null,
"urls": [],
"raw_text": "Viktor Hangya and Alexander Fraser. 2019. Unsuper- vised Parallel Sentence Extraction with Parallel Seg- ment Detection Helps Machine Translation. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1224- 1234, Florence, Italy.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Nefnir: A high accuracy lemmatizer for Icelandic",
"authors": [
{
"first": "Svanhv\u00edt",
"middle": [
"Lilja"
],
"last": "Ing\u00f3lfsd\u00f3ttir",
"suffix": ""
},
{
"first": "Hrafn",
"middle": [],
"last": "Loftsson",
"suffix": ""
},
{
"first": "J\u00f3n",
"middle": [
"Fri\u00f0rik"
],
"last": "Da\u00f0ason",
"suffix": ""
},
{
"first": "Krist\u00edn",
"middle": [],
"last": "Bjarnad\u00f3ttir",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "310--315",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Svanhv\u00edt Lilja Ing\u00f3lfsd\u00f3ttir, Hrafn Loftsson, J\u00f3n Fri\u00f0rik Da\u00f0ason, and Krist\u00edn Bjarnad\u00f3ttir. 2019. Nefnir: A high accuracy lemmatizer for Icelandic. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 310-315, Turku, Finland.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Experimenting with different machine translation models in mediumresource settings",
"authors": [
{
"first": "Haukur",
"middle": [
"P\u00e1ll"
],
"last": "J\u00f3nsson",
"suffix": ""
},
{
"first": "Haukur",
"middle": [
"Barri"
],
"last": "S\u00edmonarson",
"suffix": ""
},
{
"first": "V\u00e9steinn",
"middle": [],
"last": "Snaebjarnarson",
"suffix": ""
},
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
},
{
"first": "Hrafn",
"middle": [],
"last": "Loftsson",
"suffix": ""
}
],
"year": 2020,
"venue": "Text, Speech, and Dialogue -23rd International Conference",
"volume": "2020",
"issue": "",
"pages": "95--103",
"other_ids": {
"DOI": [
"10.1007/978-3-030-58323-1_10"
]
},
"num": null,
"urls": [],
"raw_text": "Haukur P\u00e1ll J\u00f3nsson, Haukur Barri S\u00edmonarson, V\u00e9steinn Snaebjarnarson, Stein\u00fe\u00f3r Steingr\u00edmsson, and Hrafn Loftsson. 2020. Experimenting with different machine translation models in medium- resource settings. In Text, Speech, and Dialogue -23rd International Conference, TSD 2020, Brno, Czech Republic, September 8-11, 2020, Proceed- ings, volume 12284 of Lecture Notes in Computer Science, pages 95-103.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Marian: Cost-effective High-Quality Neural Machine Translation in C++",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Aue",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "129--135",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2716"
]
},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Kenneth Heafield, Hieu Hoang, Roman Grundkiewicz, and Anthony Aue. 2018. Marian: Cost-effective High-Quality Neural Machine Translation in C++. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 129-135, Melbourne, Australia.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Extracting an English-Persian parallel corpus from comparable corpora",
"authors": [
{
"first": "Akbar",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": ""
},
{
"first": "Bahram Sadeghi",
"middle": [],
"last": "Bigham",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "3477--3482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akbar Karimi, Ebrahim Ansari, and Bahram Sadeghi Bigham. 2018. Extracting an English- Persian parallel corpus from comparable corpora. In Proceedings of the Eleventh International Con- ference on Language Resources and Evaluation (LREC 2018), pages 3477-3482, Miyazaki, Japan.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic Filtering of Bilingual Corpora for Statistical Machine Translation",
"authors": [
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural Language Processing and Information Systems",
"volume": "",
"issue": "",
"pages": "263--274",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/11428817_24"
]
},
"num": null,
"urls": [],
"raw_text": "Shahram Khadivi and Hermann Ney. 2005. Automatic Filtering of Bilingual Corpora for Statistical Ma- chine Translation. In Natural Language Process- ing and Information Systems, pages 263-274, Berlin, Heidelberg. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the Impact of Various Types of Noise on Neural Machine Translation",
"authors": [
{
"first": "Huda",
"middle": [],
"last": "Khayrallah",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "74--83",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2709"
]
},
"num": null,
"urls": [],
"raw_text": "Huda Khayrallah and Philipp Koehn. 2018. On the Im- pact of Various Types of Noise on Neural Machine Translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74-83, Melbourne, Australia.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "When and Why is Unsupervised Neural Machine Translation Useless?",
"authors": [
{
"first": "Yunsu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Gra\u00e7a",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 22nd",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunsu Kim, Miguel Gra\u00e7a, and Hermann Ney. 2020. When and Why is Unsupervised Neural Machine Translation Useless? In Proceedings of the 22nd",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Annual Conference of the European Association for Machine Translation",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "35--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Conference of the European Association for Machine Translation, pages 35-44, Lisboa, Portu- gal.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Phrase-Based & Neural Unsupervised Machine Translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5039--5049",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1549"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-Based & Neural Unsupervised Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 5039-5049, Brussels, Belgium.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Crowdsourcing High-Quality Parallel Data Extraction from Twitter",
"authors": [
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Marujo",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Isabel",
"middle": [],
"last": "Trancoso",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "426--436",
"other_ids": {
"DOI": [
"10.3115/v1/W14-3356"
]
},
"num": null,
"urls": [],
"raw_text": "Wang Ling, Lu\u00eds Marujo, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2014. Crowdsourcing High- Quality Parallel Data Extraction from Twitter. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 426-436, Baltimore, Maryland, USA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Measuring sentence parallelism using mahalanobis distances: The NRC unsupervised submissions to the WMT18 parallel corpus filtering shared task",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Larkin",
"suffix": ""
},
{
"first": "Darlene",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Simard",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "900--907",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6480"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Littell, Samuel Larkin, Darlene Stewart, Michel Simard, Cyril Goutte, and Chi-kiu Lo. 2018. Mea- suring sentence parallelism using mahalanobis dis- tances: The NRC unsupervised submissions to the WMT18 parallel corpus filtering shared task. In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, pages 900-907, Bel- gium, Brussels.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Fada: Fast document aligner using word embedding",
"authors": [
{
"first": "Pintu",
"middle": [],
"last": "Lohar",
"suffix": ""
},
{
"first": "Debasis",
"middle": [],
"last": "Ganguly",
"suffix": ""
},
{
"first": "Haithem",
"middle": [],
"last": "Afli",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Gareth",
"middle": [
"J.",
"F."
],
"last": "Jones",
"suffix": ""
}
],
"year": 2016,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "106",
"issue": "",
"pages": "169--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pintu Lohar, Debasis Ganguly, Haithem Afli, Andy Way, and Gareth J. F. Jones. 2016. Fada: Fast doc- ument aligner using word embedding. The Prague Bulletin of Mathematical Linguistics, 106:169-179.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Alibaba Submission to the WMT20 Parallel Corpus Filtering Task",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Yangbin",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Yuqi",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "979--984",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Lu, Xin Ge, Yangbin Shi, and Yuqi Zhang. 2020. Alibaba Submission to the WMT20 Parallel Corpus Filtering Task. In Proceedings of the Fifth Confer- ence on Machine Translation, pages 979-984, On- line.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "On the generalized distance in statistics",
"authors": [
{
"first": "Prasanta",
"middle": [],
"last": "Chandra Mahalanobis",
"suffix": ""
}
],
"year": 1936,
"venue": "Proceedings of the National Institute of Sciences (Calcutta)",
"volume": "2",
"issue": "",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prasanta Chandra Mahalanobis. 1936. On the gener- alized distance in statistics. Proceedings of the Na- tional Institute of Sciences (Calcutta), 2:49-55.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Efficient word alignment with Markov Chain Monte Carlo",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "\u00d6stling",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Prague Bulletin of Mathematical Linguistics",
"volume": "106",
"issue": "",
"pages": "125--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert \u00d6stling and J\u00f6rg Tiedemann. 2016. Effi- cient word alignment with Markov Chain Monte Carlo. Prague Bulletin of Mathematical Linguistics, 106:125-146.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bleu: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a Method for Automatic Eval- uation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "chrF: character n-gram F-score for automatic MT evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "392--395",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3049"
]
},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Anoop Kunchukuttan, Pratyush Kumar, and M. Khapra. 2021. Samanantar: The Largest Publicly Available Parallel Corpora Collection for",
"authors": [
{
"first": "Gowtham",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Sumanth",
"middle": [],
"last": "Doddapaneni",
"suffix": ""
},
{
"first": "Aravinth",
"middle": [],
"last": "Bheemaraj",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Jobanputra",
"suffix": ""
},
{
"first": "Ajitesh",
"middle": [],
"last": "Ak Raghavan",
"suffix": ""
},
{
"first": "Sujit",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sahoo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Harshita Diddee",
"suffix": ""
},
{
"first": "Divyanshu",
"middle": [],
"last": "Mahalakshmi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kakwani",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, AK Raghavan, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, J. Mahalakshmi, Divyanshu Kakwani, Navneet Ku- mar, Aswin Pradeep, Kumar Deepak, Vivek Ragha- van, Anoop Kunchukuttan, Pratyush Kumar, and M. Khapra. 2021. Samanantar: The Largest Pub- licly Available Parallel Corpora Collection for 11 In- dic Languages. ArXiv, abs/2104.05596.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Filtering and Mining Parallel Data in a Joint Multilingual Space",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "228--234",
"other_ids": {
"DOI": [
"10.18653/v1/P18-2037"
]
},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk. 2018. Filtering and Mining Paral- lel Data in a Joint Multilingual Space. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 228-234, Melbourne, Australia.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Wiki-Matrix: Mining 135M Parallel Sentences",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm\u00e1n. 2021. Wiki- Matrix: Mining 135M Parallel Sentences in 1620",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Language Pairs from Wikipedia",
"authors": [],
"year": null,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1351--1361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Language Pairs from Wikipedia. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1351-1361, Online.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Risam\u00e1lheild: A Very Large Icelandic Text Corpus",
"authors": [
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
},
{
"first": "Sigr\u00fan",
"middle": [],
"last": "Helgad\u00f3ttir",
"suffix": ""
},
{
"first": "Eir\u00edkur",
"middle": [],
"last": "R\u00f6gnvaldsson",
"suffix": ""
},
{
"first": "Starka\u00f0ur",
"middle": [],
"last": "Barkarson",
"suffix": ""
},
{
"first": "J\u00f3n",
"middle": [],
"last": "Gu\u00f0nason",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "4361--4366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stein\u00fe\u00f3r Steingr\u00edmsson, Sigr\u00fan Helgad\u00f3ttir, Eir\u00edkur R\u00f6gnvaldsson, Starka\u00f0ur Barkarson, and J\u00f3n Gu\u00f0- nason. 2018. Risam\u00e1lheild: A Very Large Icelandic Text Corpus. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC 2018), pages 4361-4366, Miyazaki, Japan.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "CombAlign: a Tool for Obtaining High-Quality Word Alignments",
"authors": [
{
"first": "Stein\u00fe\u00f3r",
"middle": [],
"last": "Steingr\u00edmsson",
"suffix": ""
},
{
"first": "Hrafn",
"middle": [],
"last": "Loftsson",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
"volume": "",
"issue": "",
"pages": "64--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stein\u00fe\u00f3r Steingr\u00edmsson, Hrafn Loftsson, and Andy Way. 2021. CombAlign: a Tool for Obtaining High- Quality Word Alignments. In Proceedings of the 23rd Nordic Conference on Computational Linguis- tics (NoDaLiDa), pages 64-73, Reykjavik, Iceland (Online).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Tunable Distortion Limits and Corpus Cleaning for SMT",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Eighth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "225--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Stymne, Christian Hardmeier, J\u00f6rg Tiedemann, and Joakim Nivre. 2013. Tunable Distortion Limits and Corpus Cleaning for SMT. In Proceedings of the Eighth Workshop on Statistical Machine Trans- lation, pages 225-231, Sofia, Bulgaria.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Vecalign: Improved Sentence Alignment in Linear Time and Space",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1342--1348",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1136"
]
},
"num": null,
"urls": [],
"raw_text": "Brian Thompson and Philipp Koehn. 2019. Vecalign: Improved Sentence Alignment in Linear Time and Space. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 1342-1348, Hong Kong, China.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "OPUS-MT -Building open translation services for the World",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Santhosh",
"middle": [],
"last": "Thottingal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "479--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann and Santhosh Thottingal. 2020. OPUS-MT -Building open translation services for the World. In Proceedings of the 22nd Annual Con- ference of the European Association for Machine Translation, pages 479-480, Lisboa, Portugal.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "ISLEX -a Multilingual Web Dictionary",
"authors": [
{
"first": "\u00de\u00f3rd\u00eds",
"middle": [],
"last": "\u00dalfarsd\u00f3ttir",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "2820--2825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00de\u00f3rd\u00eds \u00dalfarsd\u00f3ttir. 2014. ISLEX -a Multilingual Web Dictionary. In Proceedings of the Ninth In- ternational Conference on Language Resources and Evaluation (LREC'14), pages 2820-2825, Reyk- javik, Iceland.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Multi-domain machine translation enhancements by parallel data extraction from comparable corpora",
"authors": [
{
"first": "Krzysztof",
"middle": [],
"last": "Wolk",
"suffix": ""
},
{
"first": "Emilia",
"middle": [],
"last": "Rejmund",
"suffix": ""
},
{
"first": "Krzysztof",
"middle": [],
"last": "Marasek",
"suffix": ""
}
],
"year": 2016,
"venue": "Polish-Language Parallel Corpora",
"volume": "",
"issue": "",
"pages": "157--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krzysztof Wolk, Emilia Rejmund, and Krzysztof Marasek. 2016. Multi-domain machine trans- lation enhancements by parallel data extraction from comparable corpora. In Ewa Gruszczy\u0144ska and Agnieszka Le\u0144ko-Szyma\u0144ska, editors, Polish- Language Parallel Corpora, pages 157-179. Insty- tut Lingwistyki Stosowanej, Warsaw, Poland.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A Hybrid of Sentence-Level Approach and Fragment-Level Approach of Parallel Text Extraction from Comparable Text",
"authors": [
{
"first": "Yin-Lai",
"middle": [],
"last": "Yeong",
"suffix": ""
},
{
"first": "Tien-Ping",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Keng Hoon",
"middle": [],
"last": "Gan",
"suffix": ""
}
],
"year": 2019,
"venue": "Procedia Computer Science",
"volume": "161",
"issue": "",
"pages": "406--414",
"other_ids": {
"DOI": [
"10.1016/j.procs.2019.11.139"
]
},
"num": null,
"urls": [],
"raw_text": "Yin-Lai Yeong, Tien-Ping Tan, and Keng Hoon Gan. 2019. A Hybrid of Sentence-Level Approach and Fragment-Level Approach of Parallel Text Extrac- tion from Comparable Text. Procedia Computer Sci- ence, 161:406-414.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Word Alignment Based Parallel Corpora Evaluation and Cleaning Using Machine Learning Techniques",
"authors": [
{
"first": "Ieva",
"middle": [],
"last": "Zari\u0146a",
"suffix": ""
},
{
"first": "P\u0113teris",
"middle": [],
"last": "\u0145ikiforovs",
"suffix": ""
},
{
"first": "Raivis",
"middle": [],
"last": "Skadi\u0146\u0161",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 18th Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "185--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ieva Zarin , a, P\u0113teris N , ikiforovs, and Raivis Skadin , \u0161. 2015. Word Alignment Based Parallel Corpora Eval- uation and Cleaning Using Machine Learning Tech- niques. In Proceedings of the 18th Annual Confer- ence of the European Association for Machine Trans- lation, pages 185-192, Antalya, Turkey.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Towards Preparation of the Second BUCC Shared Task: Detecting Parallel Sentences in Comparable Corpora",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Sharoff",
"suffix": ""
},
{
"first": "Reinhard",
"middle": [],
"last": "Rapp",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Ninth Workshop on Building and Using Comparable Corpora",
"volume": "",
"issue": "",
"pages": "38--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2016. Towards Preparation of the Second BUCC Shared Task: Detecting Parallel Sentences in Comparable Corpora. In Proceedings of the Ninth Workshop on Building and Using Comparable Cor- pora, pages 38-43, Portoro\u017e, Slovenia. ELDA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "The system setup. English and Icelandic monolingual data are aligned by the CLIR system which outputs candidate pairs which are scored and a classifier outputs parallel sentence pairs.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "BLEU score for MarianMT models training with supplementary data, with different WAScore thresholds over the combined test sets.",
"type_str": "figure"
},
"TABREF0": {
"content": "<table/>",
"text": "Precision, Recall F 1 -measure and number of extracted sentences for a union and intersection of the FaDA output.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table><tr><td/><td colspan=\"2\">Wikipedia Training</td><td/><td/><td/></tr><tr><td>Training</td><td colspan=\"5\">Supplemental TestEEA TestEMA TestOS Combined</td></tr><tr><td>Data</td><td>Sentences</td><td/><td/><td/><td/></tr><tr><td>ParIce50K</td><td>0</td><td>9.0</td><td>9.0</td><td>1.6</td><td>8.1</td></tr><tr><td>ParIce50K+WikiMatrix</td><td>313, 875</td><td>5.6</td><td>5.2</td><td>2.3</td><td>5.1</td></tr><tr><td>ParIce50K+Our approach</td><td>55, 744</td><td>13.9</td><td>15.9</td><td>7.0</td><td>13.7</td></tr></table>",
"text": "Precision, Recall and F 1 -measure as measured when only looking at the sentence pairs marked as parallel in the test data, and when the partially parallel have been added to the desired output.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table/>",
"text": "BLEU scores for MT systems trained on parallel data and sentences extracted from comparable corpora.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}