{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:53:20.823175Z"
},
"title": "Towards Precise Lexicon Integration in Neural Machine Translation",
"authors": [
{
"first": "Og\u00fcn",
"middle": [],
"last": "\u00d6z",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cobrainer GmbH Munich",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Maria",
"middle": [],
"last": "Sukhareva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Siemens AG",
"location": {
"settlement": "Nuremberg",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Terminological consistency is an essential requirement for industrial translation. Highquality, hand-crafted terminologies contain entries in their nominal forms. Integrating such a terminology into machine translation is not a trivial task. The MT system must be able to disambiguate homographs on the source side and choose the correct wordform on the target side. In this work, we propose a simple but effective method for homograph disambiguation and a method of wordform selection by introducing multi-choice lexical constraints. We also propose a metric to measure the terminological consistency of the translation. Our results have a significant improvement over the current SOTA in terms of terminological consistency without any loss of the BLEU score. All the code used in this work will be published as open-source.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Terminological consistency is an essential requirement for industrial translation. Highquality, hand-crafted terminologies contain entries in their nominal forms. Integrating such a terminology into machine translation is not a trivial task. The MT system must be able to disambiguate homographs on the source side and choose the correct wordform on the target side. In this work, we propose a simple but effective method for homograph disambiguation and a method of wordform selection by introducing multi-choice lexical constraints. We also propose a metric to measure the terminological consistency of the translation. Our results have a significant improvement over the current SOTA in terms of terminological consistency without any loss of the BLEU score. All the code used in this work will be published as open-source.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The importance of consistent terminology has long been discussed by translation experts (Dagan and Church, 1994; Merkel, 1998; Itagaki et al., 2007; Saraireh, 2001; Byrne, 2006) . Terminological standardisation is a critical task for technical and nontechnical industrial translation. Patents, technical manuals, and medical instructions rely on consistent usage of technical terminology. But also nontechnical news releases, marketing texts, promotion materials, legal and financial documents need to adhere to the same terminology. Byrne (2006) correctly points out that many large companies have their own terminologies that should be used in all texts. Such terminologies prescribe the correct usage of terms and provide not only a list of words that are to be used but also a list of their synonyms that should not be used by writers and translators (so-called negative terms). Sukhareva et al. (2020) describe such terminology for an automotive company and its usage in detail. Not adhering to these rules can be not only confusing for a reader but can also lead to serious legal and financial consequences if it is proven that damage was caused by the ambiguity of the instructions.",
"cite_spans": [
{
"start": 88,
"end": 112,
"text": "(Dagan and Church, 1994;",
"ref_id": "BIBREF3"
},
{
"start": 113,
"end": 126,
"text": "Merkel, 1998;",
"ref_id": "BIBREF16"
},
{
"start": 127,
"end": 148,
"text": "Itagaki et al., 2007;",
"ref_id": "BIBREF11"
},
{
"start": 149,
"end": 164,
"text": "Saraireh, 2001;",
"ref_id": "BIBREF21"
},
{
"start": 165,
"end": 177,
"text": "Byrne, 2006)",
"ref_id": "BIBREF1"
},
{
"start": 534,
"end": 546,
"text": "Byrne (2006)",
"ref_id": "BIBREF1"
},
{
"start": 883,
"end": 906,
"text": "Sukhareva et al. (2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Morphologically rich languages also pose a very practical problem for terminology integration: terminological entries are provided in their nominative singular form (Susanto et al., 2020) . The SOTA approaches rely on the assumption that the terminological entry can be found as is in the translated text. This is not the case for Slavic languages (e.g. Russian), for which finding the correct wordform on the target side is a key challenge for the terminology integration.",
"cite_spans": [
{
"start": 165,
"end": 187,
"text": "(Susanto et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Morphologically poor languages (e.g. English), on the contrary, pose a very different challenge. Homographs appear in such languages not only due to polysemy and homonymy but also due to poor derivational morphology (e.g. a report vs. to report), thus, becoming a very common phenomenon. Liu et al. (2018) show that SOTA neural machine translation (NMT) fails to resolve homography efficiently. Despite being a known issue, the problem has received very little attention from the research community, and we are currently not aware of any prior work that would explicitly address the problem of homographs in the context of terminology integration into machine translation. This paper focuses on the following issues: resolving homographs when the source language is morphologically poor, choosing the right wordform in the morphologically rich target language, and evaluating terminological consistency in the resulting translation. We show that our approach for homograph disambiguation and morphologically flexible lexical constraints significantly improves terminological consistency as compared to the current SOTA.",
"cite_spans": [
{
"start": 288,
"end": 305,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Previous work can be roughly divided into two groups: approaches that integrate lexicon during inference and approaches that integrate lexicon during training. A constrained decoding approach that has established itself as the SOTA in the past two years is Post and Vilar (2018) . They proposed the Dynamic Beam Allocation (DBA) strategy, which decreased the decoding time complexity to constant time in respect to the number of lexical constraints. The proposed algorithm aims to allocate banks dynamically, prioritising the beams that satisfy the most constraints. This algorithm only allows incorporating a single wordform of a constraint, as Dinu et al. (2019) discussed in their work. This is a notable disadvantage of this approach as it assumes an unrealistic precondition that the provided lexical constraints will be correctly inflected. This condition cannot be satisfied when translating into a morphologically rich language.",
"cite_spans": [
{
"start": 257,
"end": 278,
"text": "Post and Vilar (2018)",
"ref_id": "BIBREF20"
},
{
"start": 646,
"end": 664,
"text": "Dinu et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On the contrary, training time approaches are more flexible in selecting the inflected forms. The SOTA in-training approaches tune a transformer model (Vaswani et al., 2017) towards producing translations that are biased towards an external lexicon. Song et al. (2019) proposed a simple way to copy target side terms into source sentences. Likewise, Dinu et al. (2019) suggested a source sentence modification method by replacing/appending target side terms using additional source factors. Nevertheless, these methods are only encouraging the model to use predefined target terms, whereas constrained decoding methods are enforcing terms' usage. Thus, it can be argued that in-training approaches are inferior to the constrained decoding methods in terms of straightforward terminology integration and, indeed, Dinu et al. (2019) report the terminology usage rate 6-9% less than the constrained decoding method. To ensure the appearance of terms in the output, Michon et al. (2020) use placeholders with the help of morphosyntactic annotations. Even though the approach is effective for choosing a correctly inflected form, it depends on the availability and performance of morphological analysers both in source and target languages.",
"cite_spans": [
{
"start": 151,
"end": 173,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 250,
"end": 268,
"text": "Song et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 350,
"end": 368,
"text": "Dinu et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 812,
"end": 830,
"text": "Dinu et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 962,
"end": 982,
"text": "Michon et al. (2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While all the aforementioned approaches have succeeded in improving the terminological consistency of translations, they essentially rely on a supervised selection of terminological entries. In other words, they assume that the homographs have already been resolved and a correct wordform is provided. Once the discussed approaches are set on a trial under realistic conditions, translation quality deteriorates. Word sense disambiguation is meanwhile a well-researched NLP task, and current stateof-the-art approaches can efficiently resolve homographs (Bohnet et al., 2018; Huang et al., 2019) but due to being time-consuming, are not applicable during translation inference.",
"cite_spans": [
{
"start": 554,
"end": 575,
"text": "(Bohnet et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 576,
"end": 595,
"text": "Huang et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For the training of the baseline NMT model, we used preprocessed bilingual WMT18 data 1 . We filtered out sentence pairs that have a length ratio of less than 1/3 or more than 3. We also applied language detection (langid) filtering (Lui and Baldwin, 2011) in a tolerant way: The sentence pairs for which langid could not predict the expected language in the first 10 predictions are filtered out. Finally, we removed 75,000 sentences with the worst alignment scores (Dyer et al., 2013) . All the reported models utilize WordPiece (Wu et al., 2016) for tokenisation. To fine-tune the hyperparameters of the model, we used newstest2014, newstest2018, and newstest2019 as development sets. Newstest2017 is reserved for reporting the results. Since EN \u2192 RU newstest2020 was not available during the time of our experiments, we used RU \u2192 EN test set including an additional test set (test-ts 2 ), as a second set to report the results.",
"cite_spans": [
{
"start": 233,
"end": 256,
"text": "(Lui and Baldwin, 2011)",
"ref_id": "BIBREF14"
},
{
"start": 467,
"end": 486,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 531,
"end": 548,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Corpus",
"sec_num": "3.1"
},
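The filtering pipeline described above can be sketched as follows; the langid library and its rank() call are real, while the function name and I/O layout are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch of the corpus filtering described above: the 1/3-3 length
# ratio and the tolerant top-10 langid check follow the text; everything else
# (function name, whitespace tokenisation) is an assumption.
import langid

def keep_pair(src: str, tgt: str, src_lang: str = "en", tgt_lang: str = "ru") -> bool:
    # Length-ratio filter: drop pairs whose token-count ratio is < 1/3 or > 3.
    ratio = len(src.split()) / max(len(tgt.split()), 1)
    if ratio < 1 / 3 or ratio > 3:
        return False
    # Tolerant langid filter: the expected language must appear among the
    # first 10 ranked predictions for each side.
    src_top10 = [lang for lang, _ in langid.rank(src)[:10]]
    tgt_top10 = [lang for lang, _ in langid.rank(tgt)[:10]]
    return src_lang in src_top10 and tgt_lang in tgt_top10
```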
{
"text": "Despite dictionaries of negative and positive synonyms being standard resources used by industrial translators, they usually cannot be openly shared. Thus, in order to ensure the reproducibility and comparability with previous work, we decided to use openly available resources: WMT Corpus and Russian Wordnet. We believe that such an approximation does not diminish the fairness of the evaluation as we are not focusing on domain adaptation but solely on improving lexical consistency of translation, which is just as applicable to and observable on news translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terminology Extraction",
"sec_num": "3.2"
},
{
"text": "Tab.1 describes the process of generating our pseudo-dictionary of positive and negative terms. and matched against the Russian Wordnet (Chernobay, 2018). We use fast_align (Dyer et al., 2013) to extract word alignments of Russian and English sides of the training set. We proceed with finding the English word that is most frequently aligned to all the synonyms in a synset (e.g. \"engine\" is the most frequent match to \"\u0434\u0432\u0438\u0433\u0430\u0442\u0435\u043b\u044c\" dvigatel' and \"\u043c\u043e\u0442\u043e\u0440\" motor). This leaves us with a lexical entry for the English word \"engine\" and its Russian translations, which are the WordNet synonyms. Finally, we labelled the most frequently aligned Russian synonym in this list as a positive term, and all other Russian synonyms as negative terms (e.g. \"\u0434\u0432\u0438\u0433\u0430\u0442\u0435\u043b\u044c\" dvigatel is labelled as a positive synonym). Thus, from now on, if an English sentence has a word that occurs in our dictionary, the translator should resort to using the positive term in the translation and avoid negative terms. An example of a terminology entry 3 can be found in Tab. 2.",
"cite_spans": [
{
"start": 173,
"end": 192,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Terminology Extraction",
"sec_num": "3.2"
},
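The extraction procedure above can be illustrated with a short sketch; build_entry, the (en_word, ru_word) alignment-pair format, and the transliterated examples are hypothetical, standing in for the fast_align post-processing the paper describes.

```python
# Sketch of one pseudo-dictionary entry: find the English word most often
# aligned to any synonym in a Russian Wordnet synset, then mark the most
# frequently aligned synonym as positive and the rest as negative.
from collections import Counter

def build_entry(synset, alignments):
    # synset: set of Russian synonyms, e.g. {"dvigatel'", "motor"} (transliterated);
    # alignments: iterable of (en_word, ru_word) pairs from fast_align output.
    en_counts = Counter(en for en, ru in alignments if ru in synset)
    ru_counts = Counter(ru for en, ru in alignments if ru in synset)
    if not en_counts:
        return None
    en_term = en_counts.most_common(1)[0][0]     # e.g. "engine"
    positive = ru_counts.most_common(1)[0][0]    # e.g. "dvigatel'"
    negatives = sorted(synset - {positive})      # e.g. ["motor"]
    return {"en": en_term, "positive": positive, "negative": negatives}
```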
{
"text": "We further matched the terminology entries in the bilingual training data and kept track of the cooccurrence counts of inflected words to obtain a one-to-many list of wordform candidates per entry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Wordforms",
"sec_num": "3.3"
},
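A sketch of this wordform collection, under the assumption that the alignments are available as (English lemma, Russian surface form) pairs; all names are illustrative.

```python
# Count how often each inflected Russian wordform is aligned to a terminology
# entry and keep a frequency-ranked one-to-many candidate list per entry.
from collections import Counter, defaultdict

def collect_wordforms(aligned_pairs, terminology):
    counts = defaultdict(Counter)
    for en_lemma, ru_form in aligned_pairs:
        if en_lemma in terminology:          # entry matched in the training data
            counts[en_lemma][ru_form] += 1
    return {entry: [form for form, _ in c.most_common()]
            for entry, c in counts.items()}  # most frequent wordform first
```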
{
"text": "Only the first candidate could be used as a lexical ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of Wordforms",
"sec_num": "3.3"
},
{
"text": "The approach consists of two major steps. On the source side of the morphologically poor language, it solves the problem of frequent homographs by applying a homograph disambiguator. On the target side of the morphologically rich language, it ensures that the translated term is correctly inflected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "Tab. 2 shows an entry in our terminology. All three Russian words are interchangeable synonyms in a certain context. But a straightforward string matching of word engine (Tab. 2) with an aim to force the translator to use a certain synonym in the target language would fail: the English word engine can also be used in the sense of a search engine (Fig. 1 ) which would have a Russian literal translation as \"search system\". In this case, the lexical constraint enforced by our terminology would not be correct prevails: \u043f\u0440\u0435\u043e\u0431\u043b\u0430\u0434\u0430\u0435\u0442, \u043f\u0440\u0435\u043e\u0431\u043b\u0430\u0434\u0430\u044e\u0442, \u043f\u0440\u0435\u043e\u0431\u043b\u0430\u0434\u0430\u0442\u044c, \u043f\u0440\u0435\u043e\u0431\u043b\u0430\u0434\u0430\u043b\u0430 prevailing:",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 356,
"text": "(Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Homograph Disambiguation for the Morphological Poor Language",
"sec_num": "4.1"
},
{
"text": "To mitigate this problem, we propose a homograph disambiguation method. Our homograph disambiguation task is simpler than standard wordsense disambiguation (WSD) tasks (e.g. Gloss-BERT (Huang et al., 2019) ) as it suffices to predict whether or not a certain word in the source sentence is used in the same sense as a terminology entry that has the same spelling and, unlike traditional WSD, there is no need to label all the possible senses of this word. We propose a word labelling model, similar to named entity recognition (NER) models, fine-tuned on BERT 4 (Devlin et al., 2019) having only two classes ( for Term and for Non-Term). The model tags all the words in a sentence in one forward pass.",
"cite_spans": [
{
"start": 185,
"end": 205,
"text": "(Huang et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Homograph Disambiguation for the Morphological Poor Language",
"sec_num": "4.1"
},
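A minimal sketch of such a two-class tagger using the Hugging Face transformers library; the paper only states that a BERT-based word labelling model with the classes Term and Non-Term is used, so the checkpoint name and inference snippet here are illustrative assumptions.

```python
# Two-class token labelling (Term / Non-Term) on top of BERT, tagging all
# words of a sentence in one forward pass.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=2)          # 0 = Non-Term, 1 = Term

inputs = tokenizer("The search engine returned no results.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # shape: (1, seq_len, 2)
tags = logits.argmax(dim=-1).squeeze(0)       # one Term/Non-Term tag per subword
```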
{
"text": "In order to create the training data for the homograph disambiguation, we used the same parallel corpus that we used for training the machine translation models. All the training data were processed with a word aligner fast_align (Dyer et al., 2013) . All the sentences were lemmatised. Every lemma in the Russian sentence was compared against the extracted terminology (Sec. 3.2). If it is found in the terminology as a positive or negative term, we check whether the aligned English lemma is also listed as its translation (Tab. 2). If this is the case, the English word is labelled as \"Term\", otherwise as \"non-Term\" (Fig. 1) .",
"cite_spans": [
{
"start": 230,
"end": 249,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 620,
"end": 628,
"text": "(Fig. 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Homograph Disambiguation for the Morphological Poor Language",
"sec_num": "4.1"
},
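The labelling step can be sketched as below; the data layout (a word-level alignment as index pairs and a terminology dict mapping a Russian lemma to its English translation) is an assumption about the intermediate format, not the authors' code.

```python
# Label an English word "Term" iff its aligned Russian lemma is a positive or
# negative terminology entry AND the English lemma is listed as that entry's
# translation; all other words are "non-Term".
def label_sentence(en_lemmas, ru_lemmas, alignment, terminology):
    labels = ["non-Term"] * len(en_lemmas)
    for en_i, ru_i in alignment:              # fast_align index pairs
        entry = terminology.get(ru_lemmas[ru_i])
        if entry is not None and entry == en_lemmas[en_i]:
            labels[en_i] = "Term"
    return labels
```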
{
"text": "The BERT homograph tagger is fine-tuned for 4 epochs on this data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Homograph Disambiguation for the Morphological Poor Language",
"sec_num": "4.1"
},
{
"text": "As described in the Sec. 2, the Dynamic Beam Allocation (DBA) 5 runs in constant time with respect to the number of constraints. The DBA accepts a list of constraint pairs (i.e. a term and its translation). During decoding, the candidates are grouped into banks with the number of banks equal to the number of constraints. If a term is found in the source sentence, then the translation candidates in which term's translation occurs are propagated to a higher bank. The best translation is chosen from the bank with the highest rank (i.e. the ones that have the most satisfied constraints). The drawbacks of this approach is that it matches words without their context and can neither discriminate between homographs (addressed in the previous section) nor choose the correct inflection. As it forces a higher score on the translations that are compliant with the constraint list, the approach is not applicable to translating from a morphologically poor to a morphologically rich language as on one hand there are plenty of homographs on the source side and on the other hand there is a multitude of inflected wordforms on a target side. Constraining a translation on a wrong wordform (e.g., a nominative noun form instead of a dative form) would result in a translator giving a top score to a poor translation. We propose multi-choice lexical constraints approach that overcomes DBA's limitations and enables the translator to deal with morphologically rich languages by choosing a correct wordform. Similarly to (Post and Vilar, 2018) , during inference we allocate candidates to banks. We find the longest possible (in terms of the number of tokens) candidate for every constraint to make sure there will be enough banks for all the possible constraints. Then to prioritise the entirely satisfied constraint phrases regardless of their token count, we rewarded them with the token count of the longest candidate. Without this change, the allocation strategy would be biased towards longer candidates.",
"cite_spans": [
{
"start": 1515,
"end": 1537,
"text": "(Post and Vilar, 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Morphology Integration for the Morphologically Rich Language",
"sec_num": "4.2"
},
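The shared bank-allocation idea can be pictured with a small sketch; hypotheses are reduced to (satisfied-constraint count, model score) tuples, which is a simplification of the actual beam entries.

```python
# Group hypotheses into banks by their satisfied-constraint count and pick
# the best-scoring hypothesis from the highest non-empty bank.
def select_best(hypotheses, num_banks):
    # hypotheses: list of (satisfied, score, text) tuples (illustrative format)
    banks = [[] for _ in range(num_banks + 1)]
    for hyp in hypotheses:
        banks[min(hyp[0], num_banks)].append(hyp)
    for bank in reversed(banks):              # most satisfied constraints first
        if bank:
            return max(bank, key=lambda h: h[1])
    return None
```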
{
"text": "The algorithm requires multiple banks to allocate candidate hypotheses. In the worst case, all the longest candidates would need a seat in the bank. For this reason, the number of constraints is the sum of the byte pair encoding (BPE) token counts of the longest constraint options. The size is calculated once since the constraint list remains unchanged during decoding. The number of constraints is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of constraints",
"sec_num": null
},
{
"text": "= \u2208 max | | (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of constraints",
"sec_num": null
},
{
"text": "where is the constraint list, and is a constraint candidate in multi-choice lexical constraints (MLC) algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of constraints",
"sec_num": null
},
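Read as code, Eq. (1) sums, over the constraint list, the BPE-token length of each constraint's longest wordform variant; the nested-list representation is an assumption.

```python
# Eq. (1): number of banks = sum over constraints of the longest variant's
# BPE token count; computed once, since the constraint list is fixed.
def num_constraints(constraints):
    # constraints: list of constraints, each a list of variants,
    # each variant a list of BPE tokens.
    return sum(max(len(variant) for variant in variants)
               for variants in constraints)

# e.g. one 2-token variant plus a constraint whose longest variant has 3 tokens
assert num_constraints([[["mot", "##or"]], [["en", "##gi", "##ne"]]]) == 5
```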
{
"text": "The satisfied constraint count of hypotheses decides in which bank they should be allocated. The number of banks equals to the maximum possible count if all the longest constraint variants are to be satisfied. However, as the algorithm is biased towards prioritising sentences with the most satisfied constraints, such sentences are longer and have higher overall cross-entropy loss. It causes a significant drop in the general quality of translations, especially if BPE tokenisation is used as more frequent Figure 1 : Labelling of the training data for homograph disambiguation: English words that are aligned to a synonym in Russian Wordnet synset are labelled as terms, otherwise they are considered to be homographs. \"\u0414\u0432\u0438\u0433\u0430\u0442\u0435\u043b\u044c\" \"dvigatel'\" and \"\u043c\u043e\u0442\u043e\u0440\" \"motor\" are found in the dictionary, while \"\u0441\u0438\u0441\u0442\u0435\u043c\u0430\" \"sistema\" and \"\u043c\u043e\u0442\u043e\u0440\u043d\u044b\u0439\" \"motornyi\" are not.",
"cite_spans": [],
"ref_spans": [
{
"start": 509,
"end": 517,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Number of satisfied constraints",
"sec_num": null
},
{
"text": "tokens are usually represented with fewer BPE tokens. To overcome this problem, we calculated the size of the satisfied constraints as follows: given ( ) is the list of the advanced token indices of the constraint 's variant, the number of satisfied constraints in a hypothesis is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of satisfied constraints",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 max | |, if c is entirely met. max ( ), if c is advanced. 0, otherwise. _ = \u2208 ( )",
"eq_num": "(2)"
}
],
"section": "Number of satisfied constraints",
"sec_num": null
},
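Under the reconstruction of Eq. (2) above, a sketch of the per-constraint score; representing a constraint as its variant token lists plus a per-variant advanced-token count is an assumption about the internal state.

```python
# Eq. (2): a fully met constraint is rewarded with the length of its longest
# variant (so short inflections are not penalised); a partially advanced one
# counts its advanced tokens; an untouched one counts zero.
def satisfied_size(variants, advanced):
    # variants: list of BPE-token lists; advanced[i]: tokens of variant i
    # already generated by this hypothesis.
    if any(a == len(v) for v, a in zip(variants, advanced)):
        return max(len(v) for v in variants)   # entirely met
    if any(advanced):
        return max(advanced)                   # advanced but not met
    return 0                                   # untouched

def num_satisfied(constraints):
    # constraints: list of (variants, advanced) pairs
    return sum(satisfied_size(v, a) for v, a in constraints)
```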
{
"text": "Set of allowed constraints We keep track of the advanced constraint to make sure we will advance on started but not entirely met constraints. However, when we have multiple variants for a constraint, even if the advanced constraint is known, we might have multiple variants of that constraint as advanced but not fulfilled yet. Therefore, we track the number of advanced tokens for all variants of the constraints. Finally, the set of allowed constraints is defined as the next tokens of all the advanced variants of the advanced constraint. If there is no advanced constraint, the set is simply the initial tokens of all the constraint options. The set ( ) of all the allowed token indices is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of satisfied constraints",
"sec_num": null
},
{
"text": "( ) = ( ) + 1, \u2203 with advanced o.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of satisfied constraints",
"sec_num": null
},
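A sketch of the allowed-token set in the same representation as above (an assumption): while some constraint is mid-phrase, only the next tokens of its advanced variants are allowed; otherwise every variant offers its initial token.

```python
# Allowed next tokens: continue the advanced (started, unmet) constraint on
# any of its advanced variants, or start any variant of any constraint.
def allowed_tokens(constraints):
    for variants, advanced in constraints:
        met = any(a == len(v) for v, a in zip(variants, advanced))
        if any(advanced) and not met:          # an advanced constraint exists
            return {v[a] for v, a in zip(variants, advanced)
                    if 0 < a < len(v)}         # next token of each advanced variant
    return {v[0] for variants, _ in constraints for v in variants}
```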
{
"text": "Advancing on constraints The major difference to the DBA approach is that the advanced constraints have a list of variants on which the algorithm can advance in one step. Therefore, when there is an advanced constraint, all variants are considered as a possible advancement step. For instance, if the initial tokens of the constraint in example (1) are already advanced ( \u043f\u043e\u0440, ##\u0430\u0436, ) in decoding time step , the algorithm advances on that constraint. The following tokens of both candidates are advanced together for the same hypothesis, which is a usual case when the choices have the same stem, and the only difference is the inflections. Its benefit is not only improving decoding run-time but also distributing the hypotheses more efficiently in the beams. Fig. 2 shows that the run time of the MLC algorithm is comparable with the DBA (Post and Vilar, 2018) in different beam size settings and with different number of wordform choices. ",
"cite_spans": [
{
"start": 841,
"end": 863,
"text": "(Post and Vilar, 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 762,
"end": 768,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Number of satisfied constraints",
"sec_num": null
},
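The joint advancement of variants can be sketched as follows, again in the same assumed representation; handling of mismatched partial progress is omitted for brevity.

```python
# Advance every variant whose next expected token matches the token just
# generated; variants sharing a stem (differing only in the inflection)
# therefore advance together in a single step.
def advance(variants, advanced, token):
    return [a + 1 if a < len(v) and v[a] == token else a
            for v, a in zip(variants, advanced)]
```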
{
"text": "All the models in our experiments were trained in the SOCKEYE 6 toolkit (Hieber et al., 2017) . The models that incorporate 6-layer, 8-head transformer architecture are trained 50 epochs on the training corpus (10,402,336 bilingual sentences after preprocessing). We modified the SOCKEYE toolkit to add the multi-choice lexical constraints algorithm and are going to publish the extension as an opensource. For translation quality evaluation, we report BLEU score (Papineni et al., 2002) using SACREBLEU (Post, 2018), 7 after detokenising the translations. Following Post and Vilar (2018) , Dinu et al. (2019) , and Susanto et al. (2020) , we also report the terminology usage rate to evaluate terminological consistency.",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "(Hieber et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 464,
"end": 487,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 567,
"end": 588,
"text": "Post and Vilar (2018)",
"ref_id": "BIBREF20"
},
{
"start": 591,
"end": 609,
"text": "Dinu et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 616,
"end": 637,
"text": "Susanto et al. (2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Both BLEU score and terminological usage rate (Post and Vilar, 2018) are not sufficient to evaluate terminological consistency. The usage rate has proven to be seriously flawed as this metric does not account for homographs. Tab. 4 shows an example of a sentence translation that includes a homograph rest in its source sentence. Our terminology prescribes translating rest as a Russian adjective meaning \"remaining\" and does not contain an entry that would have the same meaning as its homograph verb to rest. The terminology usage rate used in the previous research was calculated in a rather straightforward manner by mere string matching. In our example, it would mean that the metric would only give a perfect score if the verb rest was incorrectly translated as its homograph adjective. If this were the case, despite the perfect score, the resulting translation would be of a very 6 https://github.com/awslabs/sockeye/tree/ sockeye_1",
"cite_spans": [
{
"start": 46,
"end": 68,
"text": "(Post and Vilar, 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Terminological F-score",
"sec_num": "5.1"
},
{
"text": "7 The signature is BLEU+case.mixed+lang.en-ru+numrefs.1+smooth.exp+test.wmt17+tok.13a+version.1.4.14 poor quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terminological F-score",
"sec_num": "5.1"
},
{
"text": "As Dougal and Lonsdale (2020) discuss, it is necessary to report an f-score metric when evaluating lexicon injected systems. Their suggested metric TREU intends to mitigate the negative effect of unmatched terminology tokens on BLEU metric assuming the reference sentences do not usually contain terminology promoted tokens. However, to assess the general quality of MT systems clearly, we find it more suitable to use the standard BLEU score. Thus, we require a separate metric based on the precision and recall of the terminology usage.",
"cite_spans": [
{
"start": 3,
"end": 29,
"text": "Dougal and Lonsdale (2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Terminological F-score",
"sec_num": "5.1"
},
{
"text": "We propose a terminological f-score to account for precision and recall of the terminology usage in the hypotheses as compared to the reference translations. A similar metric was suggested to evaluate the performance of NMT models for the handling of homographs (Liu et al., 2018) . The major difference between our metric and theirs is that we focus on the sense of the word rather than the string by consider all the aligned WordNet synonyms in the reference sentences. The precision and recall per sentence are calculated as follows:",
"cite_spans": [
{
"start": 262,
"end": 280,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Terminological F-score",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= \u2208 min | |, | |, | | | | = \u2208 min | |, | |, | | min | |, | |",
"eq_num": "(4)"
}
],
"section": "Terminological F-score",
"sec_num": "5.1"
},
{
"text": "where is the list of the terminological entries that occurred in the source sentence, | | is the occurrence number of terminology entry in the source sentence, | | is the occurrence number of the positive usage of that entry in the translation sentence, and | | is the occurrence number of both the positive and negative synonyms of the entry in the reference sentence. Thus, we calculate the precision and recall as 1/1 for the example in Tab. 4, whereas the terminology usage rate is 1/2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terminological F-score",
"sec_num": "5.1"
},
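Under the reconstruction of Eq. (4) above, the per-sentence scores can be computed as follows; the count-triple input format is an illustrative assumption.

```python
# Terminological precision/recall: an entry counts as matched up to the
# minimum of its source, hypothesis (positive) and reference occurrences.
def term_precision_recall(entries):
    # entries: one (src, hyp_pos, ref) occurrence-count triple per entry in T.
    matched = sum(min(s, h, r) for s, h, r in entries)
    p_denom = sum(h for _, h, _ in entries)
    r_denom = sum(min(s, r) for s, _, r in entries)
    precision = matched / p_denom if p_denom else 0.0
    recall = matched / r_denom if r_denom else 0.0
    return precision, recall

# Tab. 4 example: two string matches in the source, one positive usage in the
# hypothesis, one synonym in the reference -> P = R = 1 (usage rate: 1/2).
assert term_precision_recall([(2, 1, 1)]) == (1.0, 1.0)
```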
{
"text": "Tab. 5 shows the results of the evaluation in terms of terminology usage rate, terminological f-scores, and BLEU scores for the newstest2017 and new-stest2020 testsets. The baseline is a vanilla transformer model trained with the same parameters as all the other models without integrating the terminological dictionary. For the in-training baselines, we reproduce on our data the source-factoring (SF) model with append strategy that was described by Dinu et al. (2019) . The inference time baseline is the lexical constraints (LC) approach by Post Vilar (2018). We compare the baselines with the following proposed contributions:",
"cite_spans": [
{
"start": 452,
"end": 470,
"text": "Dinu et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 545,
"end": 549,
"text": "Post",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Results",
"sec_num": "5.2"
},
{
"text": "1. Introducing homograph disambiguation (+BERT) as described in Sec. 4.1 2. Introducing multi-choice lexical constraints (MLC) for the inference approach as described in Sec. 4.2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Results",
"sec_num": "5.2"
},
{
"text": "The evaluation shows that previously proposed SOTA methods for lexica integration by Dinu et al. (2019) and Post and Vilar (2018) suffer from a large decrease in the BLEU score. It also shows that the term usage rate used in the previous research is essentially meaningless for measuring translation quality as even though it has a nearly perfect score for Post and Vilar (2018) , the BLEU score greatly dropped. Our approach, on the contrary, showed a significant improvement over all the baselines in terms of terminological f-score without decreasing translation quality. The reasons for the slight decrease of the BLEU score for MLC+BERT are discussed in detail in Sec. 5.3.",
"cite_spans": [
{
"start": 85,
"end": 103,
"text": "Dinu et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 108,
"end": 129,
"text": "Post and Vilar (2018)",
"ref_id": "BIBREF20"
},
{
"start": 357,
"end": 378,
"text": "Post and Vilar (2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combining multi-choice lexical constraints and homograph disambiguation (MLC+BERT)",
"sec_num": "3."
},
{
"text": "For a better insight into the results, we manually inspected the Russian translations. One of the primary reasons why MLC+BERT had a slight drop in the BLEU score as compared to the vanilla baseline was that the WMT testset was not tailored to have consistent terminology. We are also not aware of any open-source MT evaluation dataset with terminological consistency in mind. The evaluation showed that this was the reason for the drop in BLEU. Tab. 6 shows translations for which the BLEU score is lower for the MLC+BERT model. This hypothesis was tested by calculating the BLEU score for a subset of test sentences that contain the positive term in the Russian reference translation (80% of newstest2017 and 85% of newstest2017). The results in showed that the difference in the BLEU score between the baseline and our model decreases by more than double if all the test sentences with negative terms are eliminated (see Appendix A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.3"
},
{
"text": "As compared to other baselines, our method greatly improves the quality of the translation for Post and Vilar (2018) and Dinu et al. (2019) . Post and Vilar (2018) baseline is particularly prone to hallucinate Lee et al. (2018) if a lexical constraint",
"cite_spans": [
{
"start": 95,
"end": 116,
"text": "Post and Vilar (2018)",
"ref_id": "BIBREF20"
},
{
"start": 121,
"end": 139,
"text": "Dinu et al. (2019)",
"ref_id": "BIBREF6"
},
{
"start": 142,
"end": 163,
"text": "Post and Vilar (2018)",
"ref_id": "BIBREF20"
},
{
"start": 210,
"end": 227,
"text": "Lee et al. (2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "5.3"
},
{
"text": "Kvyat parked his car in one of the safety zones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EN",
"sec_num": null
},
{
"text": "\u041a\u0432\u044f\u0442 \u043f\u0440\u0438\u043f\u0430\u0440\u043a\u043e\u0432\u0430\u043b \u043c\u0430\u0448\u0438\u043d\u0443 \u0432 \u043e\u0434\u043d\u043e\u0439 \u0438\u0437 \u0437\u043e\u043d \u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438. car baseline \u041a\u0432\u044f\u0442 \u043f\u0440\u0438\u043f\u0430\u0440\u043a\u043e\u0432\u0430\u043b \u0441\u0432\u043e\u044e \u043c\u0430\u0448\u0438\u043d\u0443 \u0432 \u043e\u0434\u043d\u043e\u0439 \u0438\u0437 \u0437\u043e\u043d \u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438. \u0430\u0432\u0442\u043e\u043c\u043e\u0431\u0438\u043b\u044c (pos) MLC+BERT \u041a\u0432\u044f\u0442 \u043f\u0440\u0438\u043f\u0430\u0440\u043a\u043e\u0432\u0430\u043b \u0441\u0432\u043e\u0439 \u0430\u0432\u0442\u043e\u043c\u043e\u0431\u0438\u043b\u044c \u0432 \u043e\u0434\u043d\u043e\u0439 \u0438\u0437 \u0437\u043e\u043d \u0431\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u0438. \u043c\u0430\u0448\u0438\u043d\u0430 (neg) Table 6 : An example from the newstest2020 evaluation set. The Russian gold sentence and the baseline contain a negative term. The MLC+BERT translation uses a positive interchangeable syllable. Even though the translation is perfectly fine, the BLEU score is lower for MLC+BERT.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 232,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Terminology RU",
"sec_num": null
},
{
"text": "is a homograph or is not correctly inflected (see Appendix B). In this case, the model generates an output till it reaches the maximum length. For example, the output of the LC baseline has 8% more characters than the reference translations. In comparison, the vanilla baseline has only 0.5% more characters and the MLC+BERT has exactly the same amount of characters. The manual evaluation showed that reducing hallucinations is the reason for the large increase of the BLEU as compared to the SF and LC baselines. We also examined the effect of automatically generated lexicon on the translation quality. While we found cases in which positive terms were not perfect synonyms and were not interchangeable with negative terms, the homograph disambiguation seemed to show certain robustness by labelling the English term only if they occurred in the context that was common for negative and positive Russian translations. While we still believe that better results could be achieved in real-life settings where a high-quality dictionary would be used, our examination showed that there was no unreasonable error propagation from the usage of an automatically extracted dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terminology RU",
"sec_num": null
},
{
"text": "The greatest weakness that we found during qualitative examination lies in how the top inflected candidates are scored in MLC. The MLC model takes a list of top Russian wordforms that are most frequently aligned to a given English wordform of a term. In rare cases, an acceptable wordform does not appear to be in the top list. In this case, the translation ends up being grammatically incorrect or hallucinates in a similar sense as the LC baseline. A possible solution for this would be generating the top choices for MLC in a more elaborated manner e.g. by considering the position in the sentence or even using syntactic information. For now, we leave exploring those options for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Terminology RU",
"sec_num": null
},
{
"text": "The homograph disambiguator was trained on artificially created labels, and we are not in possession of any gold standard data for the direct evaluation. We assume that evaluating the approach on the artificially labelled data will not ensure the objectivity of such an evaluation and both train and testset will contain the same errors. For transparency, we still provide the scores in Appendix (Tab. 8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Homograph Disambiguation",
"sec_num": "5.4"
},
{
"text": "Thus, measuring the effect of homograph disambiguator on the downstream translation task is more sound. To make sure that the improvement of the terminological f-score is caused by the homograph disambiguation and not by the reduction of the number of lexical constraints, we introduce the MLC random baseline (see Tab. 5). We have calculated the total amount of constrained terms after applying the homograph disambiguation (+BERT) and randomly labelled the same amount of terms to be constrained in the original testsets. The evaluation results showed that the f-score dropped by 7% for the randomly labelled dataset, thus, proving that our homograph disambiguation is the actual cause of the f-score's improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Homograph Disambiguation",
"sec_num": "5.4"
},
{
"text": "In order to ensure that MLC is also feasible for real-life usage, we compared the inference speed between the Post and Vilar (2018) and our MLC input (Fig. 2) . As well as the DBA algorithm, MLC makes sure that the number of hypotheses is limited by the beam size. Thus, the runtime complexity of our approach is constant in the number of constraints. We have made an interesting observation that MLC is actually faster than LC for the beam size of 5 and slightly slows down for the beam size of 10. We have found the following explanation for such behaviour: Lexical constraints expect a large beam size in order to be able to generate enough hypotheses with the provided lexical constraints. The DBA does not allow a beam to generate the end of sentence symbol unless the constraints are met. Once a translation is incorrectly constrained on a homograph or on a wordform that cannot occur in translation, the beam cannot terminate unless it reaches the maximum length, and, thus, it negatively influences the inference time. On the con-trary, the MLC allows a beam to terminate which makes it more time efficient.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "(Fig. 2)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Runtime Analysis",
"sec_num": "5.5"
},
{
"text": "We have presented an approach for terminology integration into a neural machine translation from a morphologically poor into a morphologically rich language. Our work makes the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "1. Disambiguation of the homographs in the morphologically poor language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "2. Multi-choice lexical constraints to ensure the correct choice of an inflected target wordform in the morphologically rich language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "3. A metric that takes into account precision and recall of terminology usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We propose a solution to the problem of rich morphology in the target language by presenting multi-choice lexical constraints and show that our combined approach (MLC+BERT) has a significantly 8 better f-score than all the other models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://data.statmt.org/wmt18/ translation-task/preprocessed/ru-en/ 2 newstest2020-ruen-src-ts.ru and newstest2020-ruen-refts.en",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "5 For a detailed description of the DBA, refer toPost and Vilar (2018)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The LC model generates a string after comma (marked in italics) that does not occur in the source text nor meaningful in the context. It happens because the lexicon prescribes to translate \"report\" as a noun meaning \"an account given of a particular matter\" \u0434\u043e\u043a\u043b\u0430\u0434, while the source actually has a homograph verb \"to report\". The LC model generates a correct translation and proceeds to hallucinate till it finally produces a sentence with \"a report\". It leads to not only longer nonsensical output but also to longer inference time. The homograph disambiguation (MLC + BERT) correctly marks \"report\" as a non-term, thus, preventing the model to force a constraint on this sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment",
"sec_num": null
},
{
"text": "Hallucination with a grammatically correct sentence EN As reported by Chempionat, the 41-year-old specialist flew into Moscow to weigh up the possibility of working at one of Russia's clubs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of error",
"sec_num": null
},
{
"text": "\u041a\u0430\u043a \u0441\u043e\u043e\u0431\u0449\u0430\u0435\u0442 \"\u0427\u0435\u043c\u043f\u0438\u043e\u043d\u0430\u0442\", 41-\u043b\u0435\u0442\u043d\u0438\u0439 \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u0438\u0441\u0442 \u043f\u0440\u0438\u043b\u0435\u0442\u0435\u043b \u0432 \u041c\u043e\u0441\u043a\u0432\u0443, \u0447\u0442\u043e\u0431\u044b \u0438\u0437\u0443\u0447\u0438\u0442\u044c \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e\u0441\u0442\u044c \u043d\u0430\u0439\u0442\u0438 \u0440\u0430\u0431\u043e\u0442\u0443 \u0432 \u043a\u0430\u043a\u043e\u043c-\u043d\u0438\u0431\u0443\u0434\u044c \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u043e\u043c \u043a\u043b\u0443\u0431\u0435.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RU",
"sec_num": null
},
{
"text": "\u041a\u0430\u043a \u0441\u043e\u043e\u0431\u0449\u0430\u0435\u0442 Chempionat, 41-\u0441\u0442\u0430\u0440\u044b\u0439 \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u0438\u0441\u0442 \u0432\u044b\u043b\u0435\u0442\u0435\u043b \u0432 \u041c\u043e\u0441\u043a\u0432\u0443, \u0447\u0442\u043e\u0431\u044b \u0432 \u0434\u043e\u043a\u043b\u0430\u0434\u0435 \u043f\u0440\u043e\u0430\u043d\u0430\u043b\u0438\u0437\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e\u0441\u0442\u044c \u0440\u0430\u0431\u043e\u0442\u044b \u0432 \u043e\u0434\u043d\u043e\u043c \u0438\u0437 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u0438\u0445 \u043a\u043b\u0443\u0431\u043e\u0432.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LC",
"sec_num": null
},
{
"text": "\u041a\u0430\u043a \u0441\u043e\u043e\u0431\u0449\u0430\u0435\u0442 Chempionat, 41-\u043b\u0435\u0442\u043d\u0438\u0439 \u0441\u043f\u0435\u0446\u0438\u0430\u043b\u0438\u0441\u0442 \u0432\u044b\u043b\u0435\u0442\u0435\u043b \u0432 \u041c\u043e\u0441\u043a\u0432\u0443, \u0447\u0442\u043e\u0431\u044b \u0432\u0437\u0432\u0435\u0441\u0438\u0442\u044c \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e\u0441\u0442\u044c \u0440\u0430\u0431\u043e\u0442\u044b \u0432 \u043e\u0434\u043d\u043e\u043c \u0438\u0437 \u0440\u043e\u0441\u0441\u0438\u0439\u0441\u043a\u0438\u0445 \u043a\u043b\u0443\u0431\u043e\u0432.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLC+BERT",
"sec_num": null
},
{
"text": "As in the previous example, the LC model forces to use the homograph noun \"a report\" to be a translation of the verb \"to report\". Unlike the example above, the model does not produce a correct translation at any point and generates a sentence with an entirely different meaning: \"As reported by Chempionat, the 41-year old specialist got on a flight to Moscow to analyse in his report possibilities of working at one of Russia's clubs.\" This kind of translations are particularly dangerous, as it would be extremely difficult for a native speaker without looking at the source to detect that the translation completely fails to convey the meaning. The homograph disambiguation solves this problem and the translation is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment",
"sec_num": null
},
{
"text": "Hallucination with an ungrammatical sentence EN Documents obtained by the publication, reveal that the owners of TikTok (ByteDance company) with the help of their app are promoting Chinese foreign policy goals overseas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of error",
"sec_num": null
},
{
"text": "RU \u0412 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430\u0445, \u043e\u043a\u0430\u0437\u0430\u0432\u0448\u0438\u0445\u0441\u044f \u0443 \u0438\u0437\u0434\u0430\u043d\u0438\u044f, \u0440\u0430\u0441\u0441\u043a\u0430\u0437\u044b\u0432\u0430\u0435\u0442\u0441\u044f, \u0447\u0442\u043e \u0432\u043b\u0430\u0434\u0435\u043b\u0435\u0446 TikTok (\u043a\u043e\u043c\u043f\u0430\u043d\u0438\u044f ByteDance) \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u043f\u0440\u043e\u0434\u0432\u0438\u0433\u0430\u0435\u0442 \u0446\u0435\u043b\u0438 \u0432\u043d\u0435\u0448\u043d\u0435\u0439 \u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0438 \u041a\u0438\u0442\u0430\u044f \u0437\u0430 \u0440\u0443\u0431\u0435\u0436\u043e\u043c. baseline \u0414\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u044b, \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u043d\u044b\u0435 \u043f\u0443\u0431\u043b\u0438\u043a\u0430\u0446\u0438\u0435\u0439, \u043f\u043e\u043a\u0430\u0437\u044b\u0432\u0430\u044e\u0442, \u0447\u0442\u043e \u0432\u043b\u0430\u0434\u0435\u043b\u044c\u0446\u044b TikTok (ByteDance company) \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0441\u0432\u043e\u0435\u0433\u043e \u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u043f\u0440\u043e\u0434\u0432\u0438\u0433\u0430\u044e\u0442 \u043a\u0438\u0442\u0430\u0439\u0441\u043a\u0438\u0435 \u0432\u043d\u0435\u0448\u043d\u0435\u043f\u043e\u043b\u0438\u0442\u0438\u0447\u0435\u0441\u043a\u0438\u0435 \u0446\u0435\u043b\u0438 \u0437\u0430 \u0440\u0443\u0431\u0435\u0436\u043e\u043c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1095",
"sec_num": null
},
{
"text": "\u0417\u0430\u0440\u0443\u0431\u0435\u0436\u043d\u044b\u0445 \u0438\u043d\u043e\u0441\u0442\u0440\u0430\u043d\u043d\u044b\u0445 \u0432\u043b\u0430\u0434\u0435\u043b\u044c\u0446\u0435\u0432 \u043f\u043e\u043c\u043e\u0447\u044c \u0441\u043f\u043e\u0441\u043e\u0431\u0441\u0442\u0432\u043e\u0432\u0430\u0442\u044c \u043f\u043e\u043a\u0430\u0437\u0430\u0442\u044c \u0446\u0435\u043b\u0435\u0439 \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0438 TikTok (ByteDance company), \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u043d\u0443\u044e \u0432 \u0440\u0435\u0437\u0443\u043b\u044c\u0442\u0430\u0442\u0435 \u043f\u0443\u0431\u043b\u0438\u043a\u0430\u0446\u0438\u0438, \u0432 \u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u0438.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LC",
"sec_num": null
},
{
"text": "\u0414\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u044b, \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u043d\u044b\u0435 \u0438\u0437\u0434\u0430\u043d\u0438\u0435\u043c, \u043f\u043e\u043a\u0430\u0437\u044b\u0432\u0430\u044e\u0442, \u0447\u0442\u043e \u0432\u043b\u0430\u0434\u0435\u043b\u044c\u0446\u044b \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0438 TikTok (ByteDance) \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0441\u0432\u043e\u0435\u0433\u043e \u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u043f\u0440\u043e\u0434\u0432\u0438\u0433\u0430\u044e\u0442 \u043a\u0438\u0442\u0430\u0439\u0441\u043a\u0438\u0435 \u0446\u0435\u043b\u0438 \u0432\u043d\u0435\u0448\u043d\u0435\u0439 \u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0438 \u0437\u0430 \u0440\u0443\u0431\u0435\u0436\u043e\u043c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLC+BERT",
"sec_num": null
},
{
"text": "The LC baseline forces to translate foreign as \u0438\u043d\u043e\u0441\u0442\u0440\u0430\u043d\u043d\u044b\u0439 which is not applicable in this context. The LC baseline generates a nonsensical sequence of words. This type of error is less harmful that the one described above as a native speaker can immediately spot that translation is incorrect. The MLC+BERT solves this problem and the translation is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment",
"sec_num": null
},
{
"text": "A wrong wordform as lexical constraint EN Documents obtained by the publication, reveal that the owners of TikTok (ByteDance company) with the help of their app are promoting Chinese foreign policy goals overseas. RU \u0412 \u0434\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u0430\u0445, \u043e\u043a\u0430\u0437\u0430\u0432\u0448\u0438\u0445\u0441\u044f \u0443 \u0438\u0437\u0434\u0430\u043d\u0438\u044f, \u0440\u0430\u0441\u0441\u043a\u0430\u0437\u044b\u0432\u0430\u0435\u0442\u0441\u044f, \u0447\u0442\u043e \u0432\u043b\u0430\u0434\u0435\u043b\u0435\u0446 TikTok (\u043a\u043e\u043c\u043f\u0430\u043d\u0438\u044f ByteDance) \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u043f\u0440\u043e\u0434\u0432\u0438\u0433\u0430\u0435\u0442 \u0446\u0435\u043b\u0438 \u0432\u043d\u0435\u0448\u043d\u0435\u0439 \u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0438 \u041a\u0438\u0442\u0430\u044f \u0437\u0430 \u0440\u0443\u0431\u0435\u0436\u043e\u043c. baseline \u0414\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u044b, \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u043d\u044b\u0435 \u043f\u0443\u0431\u043b\u0438\u043a\u0430\u0446\u0438\u0435\u0439, \u043f\u043e\u043a\u0430\u0437\u044b\u0432\u0430\u044e\u0442, \u0447\u0442\u043e \u0432\u043b\u0430\u0434\u0435\u043b\u044c\u0446\u044b TikTok (ByteDance company) \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0441\u0432\u043e\u0435\u0433\u043e \u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u043f\u0440\u043e\u0434\u0432\u0438\u0433\u0430\u044e\u0442 \u043a\u0438\u0442\u0430\u0439\u0441\u043a\u0438\u0435 \u0432\u043d\u0435\u0448\u043d\u0435\u043f\u043e\u043b\u0438\u0442\u0438\u0447\u0435\u0441\u043a\u0438\u0435 \u0446\u0435\u043b\u0438 \u0437\u0430 \u0440\u0443\u0431\u0435\u0436\u043e\u043c. LC+BERT \u0414\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u044b, \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u043d\u044b\u0435 \u0438\u0437\u0434\u0430\u043d\u0438\u0435\u043c, \u043f\u043e\u043a\u0430\u0437\u044b\u0432\u0430\u044e\u0442, \u0447\u0442\u043e \u0432\u043b\u0430\u0434\u0435\u043b\u044c\u0446\u0435\u0432 \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0438 TikTok (ByteDance) \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0441\u0432\u043e\u0435\u0433\u043e \u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u043f\u0440\u043e\u0434\u0432\u0438\u0433\u0430\u044e\u0442 \u043a\u0438\u0442\u0430\u0439\u0441\u043a\u0438\u0435 \u0432\u043d\u0435\u0448\u043d\u0435\u043f\u043e\u043b\u0438\u0442\u0438\u0447\u0435\u0441\u043a\u0438\u0435 \u0446\u0435\u043b\u0435\u0439 \u0437\u0430 \u0440\u0443\u0431\u0435\u0436\u043e\u043c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of error",
"sec_num": null
},
{
"text": "\u0414\u043e\u043a\u0443\u043c\u0435\u043d\u0442\u044b, \u043f\u043e\u043b\u0443\u0447\u0435\u043d\u043d\u044b\u0435 \u0438\u0437\u0434\u0430\u043d\u0438\u0435\u043c, \u043f\u043e\u043a\u0430\u0437\u044b\u0432\u0430\u044e\u0442, \u0447\u0442\u043e \u0432\u043b\u0430\u0434\u0435\u043b\u044c\u0446\u044b \u043a\u043e\u043c\u043f\u0430\u043d\u0438\u0438 TikTok (ByteDance) \u0441 \u043f\u043e\u043c\u043e\u0449\u044c\u044e \u0441\u0432\u043e\u0435\u0433\u043e \u043f\u0440\u0438\u043b\u043e\u0436\u0435\u043d\u0438\u044f \u043f\u0440\u043e\u0434\u0432\u0438\u0433\u0430\u044e\u0442 \u043a\u0438\u0442\u0430\u0439\u0441\u043a\u0438\u0435 \u0446\u0435\u043b\u0438 \u0432\u043d\u0435\u0448\u043d\u0435\u0439 \u043f\u043e\u043b\u0438\u0442\u0438\u043a\u0438 \u0437\u0430 \u0440\u0443\u0431\u0435\u0436\u043e\u043c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLC+BERT",
"sec_num": null
},
{
"text": "The error described in the previous example was resolved by the homograph disambiguation. However, the LC + BERT model produced a grammatically incorrect translation as the constraint for word \"owners\" was given in a wrongly inflected form of Genitive plural \u0432\u043b\u0430\u0434\u0435\u043b\u044c\u0446\u0435\u0432 . The MLC+BERT solves this problem by providing a list of inflected forms and the result is a correct translation of the word in Nominative plural. Interestingly, the reference translation is incorrect and translates \"owners\" as singular nominative \"owner\". \u0432\u043b\u0430\u0434\u0435\u043b\u044c\u0446\u044b.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment",
"sec_num": null
},
{
"text": "Inconsistent terminology usage in the test set EN Roman Zaripov, founder of the Our Digital agency, agreed with Bogdanov: \"The main rules for TikTok users are listed in the user agreement: no posting shocking content, discriminatory rhetoric and so on.\" RU \u0421 \u0411\u043e\u0433\u0434\u0430\u043d\u043e\u0432\u044b\u043c \u0441\u043e\u0433\u043b\u0430\u0448\u0430\u0435\u0442\u0441\u044f \u043e\u0441\u043d\u043e\u0432\u0430\u0442\u0435\u043b\u044c \u0430\u0433\u0435\u043d\u0442\u0441\u0442\u0432\u0430 Our Didgital \u0420\u043e\u043c\u0430\u043d \u0417\u0430\u0440\u0438\u043f\u043e\u0432: \"\u041e\u0441\u043d\u043e\u0432\u043d\u044b\u0435 \u043f\u0440\u0430\u0432\u0438\u043b\u0430 \u0434\u043b\u044f \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u0435\u0439 TikTok \u043f\u0435\u0440\u0435\u0447\u0438\u0441\u043b\u044f\u0435\u0442 \u0432 \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044c\u0441\u043a\u043e\u043c \u0441\u043e\u0433\u043b\u0430\u0448\u0435\u043d\u0438\u0438: \u043d\u0435\u043b\u044c\u0437\u044f \u0432\u044b\u043a\u043b\u0430\u0434\u044b\u0432\u0430\u0442\u044c \u0448\u043e\u043a\u0438\u0440\u0443\u044e\u0449\u0438\u0439 \u043a\u043e\u043d\u0442\u0435\u043d\u0442, \u0434\u0438\u0441\u043a\u0440\u0438\u043c\u0438\u043d\u0438\u0440\u0443\u044e\u0449\u0438\u0435 \u0432\u044b\u0441\u043a\u0430\u0437\u044b\u0432\u0430\u043d\u0438\u044f \u0438 \u0442\u0430\u043a \u0434\u0430\u043b\u0435\u0435\". baseline \u0420\u043e\u043c\u0430\u043d \u0417\u0430\u0440\u0438\u043f\u043e\u0432, \u043e\u0441\u043d\u043e\u0432\u0430\u0442\u0435\u043b\u044c \u043d\u0430\u0448\u0435\u0433\u043e \u0446\u0438\u0444\u0440\u043e\u0432\u043e\u0433\u043e \u0430\u0433\u0435\u043d\u0442\u0441\u0442\u0432\u0430, \u0441\u043e\u0433\u043b\u0430\u0441\u0438\u043b\u0441\u044f \u0441 \u0411\u043e\u0433\u0434\u0430\u043d\u043e\u0432\u044b\u043c : \"\u043e\u0441\u043d\u043e\u0432\u043d\u044b\u0435 \u043f\u0440\u0430\u0432\u0438\u043b\u0430 \u0434\u043b\u044f \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u0435\u0439 TikTok \u043f\u0435\u0440\u0435\u0447\u0438\u0441\u043b\u0435\u043d\u044b \u0432 \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044c\u0441\u043a\u043e\u043c \u0441\u043e\u0433\u043b\u0430\u0448\u0435\u043d\u0438\u0438: \u043d\u0438\u043a\u0430\u043a\u043e\u0433\u043e \u0440\u0430\u0437\u043c\u0435\u0449\u0435\u043d\u0438\u044f \u0448\u043e\u043a\u0438\u0440\u0443\u044e\u0449\u0435\u0433\u043e \u043a\u043e\u043d\u0442\u0435\u043d\u0442\u0430, \u0434\u0438\u0441\u043a\u0440\u0438\u043c\u0438\u043d\u0430\u0446\u0438\u043e\u043d\u043d\u043e\u0439 \u0440\u0438\u0442\u043e\u0440\u0438\u043a\u0438 \u0438 \u0442\u0430\u043a \u0434\u0430\u043b\u0435\u0435\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of error",
"sec_num": null
},
{
"text": "\u0420\u043e\u043c\u0430\u043d \u0417\u0430\u0440\u0438\u043f\u043e\u0432, \u043e\u0441\u043d\u043e\u0432\u0430\u0442\u0435\u043b\u044c \u043d\u0430\u0448\u0435\u0433\u043e \u0446\u0438\u0444\u0440\u043e\u0432\u043e\u0433\u043e \u0430\u0433\u0435\u043d\u0442\u0441\u0442\u0432\u0430, \u0441\u043e\u0433\u043b\u0430\u0441\u0438\u043b\u0441\u044f \u0441 \u0411\u043e\u0433\u0434\u0430\u043d\u043e\u0432\u044b\u043c : \"\u0433\u043b\u0430\u0432\u043d\u044b\u043c\u0438 \u043f\u0440\u0430\u0432\u0438\u043b\u0430\u043c\u0438 \u0434\u043b\u044f \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u0435\u0439 TikTok \u044f\u0432\u043b\u044f\u044e\u0442\u0441\u044f \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044c\u0441\u043a\u0438\u0435 \u0441\u043e\u0433\u043b\u0430\u0448\u0435\u043d\u0438\u044f: \u043d\u0438\u043a\u0430\u043a\u043e\u0433\u043e \u0440\u0430\u0437\u043c\u0435\u0449\u0435\u043d\u0438\u044f \u0448\u043e\u043a\u0438\u0440\u0443\u044e\u0449\u0435\u0433\u043e \u043a\u043e\u043d\u0442\u0435\u043d\u0442\u0430, \u0434\u0438\u0441\u043a\u0440\u0438\u043c\u0438\u043d\u0430\u0446\u0438\u043e\u043d\u043d\u043e\u0439 \u0440\u0438\u0442\u043e\u0440\u0438\u043a\u0438 \u0438 \u0442.\u0434\"..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLC+BERT",
"sec_num": null
},
{
"text": "Both baseline and MLC + BERT produced correct translations. Word \"main\" is prescribed to be translated as \u0433\u043b\u0430\u0432\u043d\u044b\u0439 by our terminology. However, in the baseline it is translated with a negative term \u043e\u0441\u043d\u043e\u0432\u043d\u043e\u0439 while both translations are correct, the BLEU score for our model will be penalized for using a synonym of a word used in the reference translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment",
"sec_num": null
},
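{
"text": "A compact illustration of why negative terms matter for evaluation: a checker, sketched here under an assumed term-entry format rather than our released implementation, counts whether a hypothesis realises the prescribed positive term or one of its banned synonyms, a distinction that BLEU against a single reference cannot capture. Tokens are assumed to be lemmatized upstream so that inflected forms of \u0433\u043b\u0430\u0432\u043d\u044b\u0439 and \u043e\u0441\u043d\u043e\u0432\u043d\u043e\u0439 match their entries.\n\ndef terminology_consistency(lemmatized_hyps, entries):\n    # entries: [{\"positive\": lemma, \"negative\": [lemma, ...]}, ...]\n    pos = neg = 0\n    for hyp in lemmatized_hyps:\n        tokens = set(hyp.split())\n        for entry in entries:\n            if entry[\"positive\"] in tokens:\n                pos += 1\n            elif any(n in tokens for n in entry[\"negative\"]):\n                neg += 1\n    # Share of term occurrences realised with the prescribed term.\n    return pos / (pos + neg) if (pos + neg) else 1.0\n\nentries = [{\"positive\": \"\u0433\u043b\u0430\u0432\u043d\u044b\u0439\", \"negative\": [\"\u043e\u0441\u043d\u043e\u0432\u043d\u043e\u0439\"]}]\nprint(terminology_consistency([\"\u043e\u0441\u043d\u043e\u0432\u043d\u043e\u0439 \u043f\u0440\u0430\u0432\u0438\u043b\u043e\"], entries))  # 0.0: the negative term was used",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment",
"sec_num": null
},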
{
"text": "Insufficient coverage by the lexicon EN This historic trajectory cannot be stopped by anyone or any force, said Xiaoguang.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type of error",
"sec_num": null
},
{
"text": "\u042d\u0442\u0430 \u0438\u0441\u0442\u043e\u0440\u0438\u0447\u0435\u0441\u043a\u0430\u044f \u0442\u0435\u043d\u0434\u0435\u043d\u0446\u0438\u044f \u043d\u0435 \u043c\u043e\u0436\u0435\u0442 \u0431\u044b\u0442\u044c \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u043b\u0435\u043d\u0430 \u043d\u0438\u043a\u0435\u043c \u0438 \u043d\u0438\u043a\u0430\u043a\u0438\u043c\u0438 \u0441\u0438\u043b\u0430\u043c\u0438, \u043f\u043e\u0434\u0447\u0435\u0440\u043a\u043d\u0443\u043b \u041c\u0430 \u0421\u044f\u043e\u0433\u0443\u0430\u043d. baseline \u042d\u0442\u0430 \u0438\u0441\u0442\u043e\u0440\u0438\u0447\u0435\u0441\u043a\u0430\u044f \u0442\u0440\u0430\u0435\u043a\u0442\u043e\u0440\u0438\u044f \u043d\u0435 \u043c\u043e\u0436\u0435\u0442 \u0431\u044b\u0442\u044c \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u043b\u0435\u043d\u0430 \u043d\u0438 \u043a\u0435\u043c \u0438\u043b\u0438 \u043a\u0430\u043a\u043e\u0439-\u043b\u0438\u0431\u043e \u0441\u0438\u043b\u043e\u0439 , \u0441\u043a\u0430\u0437\u0430\u043b \u0421\u044f\u043e\u0443\u0433\u0443\u0430\u043d\u044c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RU",
"sec_num": null
},
{
"text": "\u042d\u0442\u0443 \u0438\u0441\u0442\u043e\u0440\u0438\u0447\u0435\u0441\u043a\u0443\u044e \u0442\u0440\u0430\u0435\u043a\u0442\u043e\u0440\u0438\u044e \u043d\u0435\u043b\u044c\u0437\u044f \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u0438\u0442\u044c \u043d\u0438\u043a\u0435\u043c \u0438\u043b\u0438 \u043a\u0430\u043a\u043e\u0439-\u043b\u0438\u0431\u043e \u0441\u0438\u043b\u043e\u0439, \u0441\u043a\u0430\u0437\u0430\u043b \u0421\u044f\u043e\u0443\u0433\u0443\u0430\u043d\u044c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MLC+BERT",
"sec_num": null
},
{
"text": "The lexicon only includes \u043d\u0435\u043b\u044c\u0437\u044f as a positive term and \u043d\u0435\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e as a negative term. The Russian phrase \u043d\u0435 \u043c\u043e\u0436\u0435\u0442 \u0431\u044b\u0442\u044c is a valid translation but was not included in the Russian WordNet. While the homograph disambiguator correctly labelled the \"cannot\" as a term, it was not labelled as a positive term in the test data as neither positive nor negative term was aligned to it. This is a reason why we believe that the evaluation against the random baseline MLC+BERT random (Tab. 5) is more reliable than a mere f-score on the test set. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment",
"sec_num": null
}
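,
{
"text": "For completeness, a sketch of the significance check behind this comparison: the paper reports McNemar's test with significance defined as p < 0.05, and the snippet below, assuming statsmodels and hypothetical per-sentence correctness vectors for the two systems, shows one standard way to run it.\n\nimport numpy as np\nfrom statsmodels.stats.contingency_tables import mcnemar\n\ndef mcnemar_compare(correct_a, correct_b, alpha=0.05):\n    a = np.asarray(correct_a, dtype=bool)\n    b = np.asarray(correct_b, dtype=bool)\n    # 2x2 table of paired outcomes; the off-diagonal discordant cells drive the test.\n    table = [[int(np.sum(a & b)), int(np.sum(a & ~b))],\n             [int(np.sum(~a & b)), int(np.sum(~a & ~b))]]\n    result = mcnemar(table, exact=True)\n    return result.pvalue, result.pvalue < alpha\n\npvalue, significant = mcnemar_compare([1, 1, 0, 1, 0], [1, 0, 0, 0, 0])\nprint(pvalue, significant)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comment",
"sec_num": null
}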
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Morphosyntactic tagging with a meta-BiLSTM model over context sensitive token encodings",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Gon\u00e7alo",
"middle": [],
"last": "Sim\u00f5es",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Maynez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2642--2652",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1246"
]
},
"num": null,
"urls": [],
"raw_text": "Bernd Bohnet, Ryan McDonald, Gon\u00e7alo Sim\u00f5es, Daniel Andor, Emily Pitler, and Joshua Maynez. 2018. Morphosyntactic tagging with a meta- BiLSTM model over context sensitive token encod- ings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2642-2652, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Technical Translation: Usability Strategies for Translating Technical Documentation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Byrne. 2006. Technical Translation: Usability Strategies for Translating Technical Documentation. Springer Netherlands.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Building wordnet for russian language from ru.wiktionary",
"authors": [
{
"first": "Yuliya",
"middle": [],
"last": "Chernobay",
"suffix": ""
}
],
"year": 2018,
"venue": "Artificial Intelligence and Natural Language",
"volume": "",
"issue": "",
"pages": "113--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuliya Chernobay. 2018. Building wordnet for russian language from ru.wiktionary. In Artificial Intelli- gence and Natural Language, pages 113-120, Cham. Springer International Publishing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Termight: Identifying and translating technical terminology",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1994,
"venue": "Fourth Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "34--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan and Kenneth Church. 1994. Termight: Iden- tifying and translating technical terminology. In Fourth Conference on Applied Natural Language Processing, pages 34-40.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 1947,
"venue": "Proceedings of the 2019 Conference 8 We used McNemar's significance test",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference 8 We used McNemar's significance test (McNemar, 1947).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The significant difference is defined as < 0.05. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The significant difference is defined as < 0.05. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Training neural machine translation to apply terminology constraints",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Prashant",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3063--3068",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1294"
]
},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural ma- chine translation to apply terminology constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063-3068, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving NMT quality using terminology injection",
"authors": [
{
"first": "Duane",
"middle": [
"K"
],
"last": "Dougal",
"suffix": ""
},
{
"first": "Deryle",
"middle": [],
"last": "Lonsdale",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4820--4827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duane K. Dougal and Deryle Lonsdale. 2020. Im- proving NMT quality using terminology injection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4820-4827, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A simple, fast, and effective reparameterization of IBM model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameter- ization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 644-648, At- lanta, Georgia. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sockeye: A toolkit for neural machine translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.05690"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. arXiv preprint arXiv:1712.05690.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "GlossBERT: BERT for word sense disambiguation with gloss knowledge",
"authors": [
{
"first": "Luyao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3509--3514",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1355"
]
},
"num": null,
"urls": [],
"raw_text": "Luyao Huang, Chi Sun, Xipeng Qiu, and Xuanjing Huang. 2019. GlossBERT: BERT for word sense disambiguation with gloss knowledge. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 3509-3514, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic validation of terminology translation consistency with statistical method. Proceedings of MT summit XI",
"authors": [
{
"first": "Masaki",
"middle": [],
"last": "Itagaki",
"suffix": ""
},
{
"first": "Takako",
"middle": [],
"last": "Aikawa",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "269--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masaki Itagaki, Takako Aikawa, and Xiaodong He. 2007. Automatic validation of terminology transla- tion consistency with statistical method. Proceed- ings of MT summit XI, pages 269-274.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hallucinations in neural machine translation",
"authors": [
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Fannjiang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sussillo",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fan- njiang, and David Sussillo. 2018. Hallucinations in neural machine translation.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Handling homographs in neural machine translation",
"authors": [
{
"first": "Frederick",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1336--1345",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1121"
]
},
"num": null,
"urls": [],
"raw_text": "Frederick Liu, Han Lu, and Graham Neubig. 2018. Handling homographs in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1336-1345, New Or- leans, Louisiana. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Cross-domain feature selection for language identification",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "553--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Pro- ceedings of 5th International Joint Conference on Natural Language Processing, pages 553-561, Chi- ang Mai, Thailand. Asian Federation of Natural Lan- guage Processing.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Note on the sampling error of the difference between correlated proportions or percentages",
"authors": [
{
"first": "Quinn",
"middle": [],
"last": "Mcnemar",
"suffix": ""
}
],
"year": 1947,
"venue": "Psychometrika",
"volume": "12",
"issue": "2",
"pages": "153--157",
"other_ids": {
"DOI": [
"10.1007/bf02295996"
]
},
"num": null,
"urls": [],
"raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or per- centages. Psychometrika, 12(2):153-157.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Consistency and variation in technical translation: A study of translators' attitudes",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Merkel",
"suffix": ""
}
],
"year": 1998,
"venue": "Unity in diversity",
"volume": "",
"issue": "",
"pages": "137--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magnus Merkel. 1998. Consistency and variation in technical translation: A study of translators' atti- tudes. Unity in diversity, pages 137-149.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Integrating domain terminology into neural machine translation",
"authors": [
{
"first": "Elise",
"middle": [],
"last": "Michon",
"suffix": ""
},
{
"first": "Josep",
"middle": [],
"last": "Crego",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3925--3937",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elise Michon, Josep Crego, and Jean Senellart. 2020. Integrating domain terminology into neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3925-3937, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fast lexically constrained decoding with dynamic beam allocation for neural machine translation",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1314--1324",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1119"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post and David Vilar. 2018. Fast lexically con- strained decoding with dynamic beam allocation for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 1314-1324, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Inconsistency in technical terminology: A problem for standardization in arabic",
"authors": [
{
"first": "Muhammad",
"middle": [
"A"
],
"last": "Saraireh",
"suffix": ""
}
],
"year": 2001,
"venue": "Babel",
"volume": "47",
"issue": "1",
"pages": "10--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad A Saraireh. 2001. Inconsistency in tech- nical terminology: A problem for standardization in arabic. Babel, 47(1):10-21.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Code-switching for enhancing NMT with pre-specified translation",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "449--459",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1044"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, and Min Zhang. 2019. Code-switching for enhancing NMT with pre-specified translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 449-459, Minneapolis, Minnesota. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Industrial machine translation system for automotive domain",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Sukhareva",
"suffix": ""
},
{
"first": "Olgierd",
"middle": [],
"last": "Grodzki",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pflugfelder",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the LREC2020 Industry Track",
"volume": "",
"issue": "",
"pages": "31--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Sukhareva, Olgierd Grodzki, and Bernhard Pflugfelder. 2020. Industrial machine translation system for automotive domain. In Proceedings of the LREC2020 Industry Track, pages 31-35, Mar- seille, France. European Language Resources Asso- ciation (ELRA).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Lexically constrained neural machine translation with Levenshtein transformer",
"authors": [
{
"first": "Raymond Hendy",
"middle": [],
"last": "Susanto",
"suffix": ""
},
{
"first": "Shamil",
"middle": [],
"last": "Chollampatt",
"suffix": ""
},
{
"first": "Liling",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3536--3543",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.325"
]
},
"num": null,
"urls": [],
"raw_text": "Raymond Hendy Susanto, Shamil Chollampatt, and Liling Tan. 2020. Lexically constrained neural machine translation with Levenshtein transformer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3536-3543, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Runtime comparison of(Post and Vilar, 2018) and multi-choice lexical constraints (MLC) as a function of wordform choices per constraints (average runtime per sentences with 2 constraint groups and similar sentence length) where k is beam size. src. The rest of the people will rest until the end of the year. tr. \u041e\u0441\u0442\u0430\u043b\u044c\u043d\u044b\u0435 \u043b\u044e\u0434\u0438 \u0431\u0443\u0434\u0443\u0442 \u043e\u0442\u0434\u044b\u0445\u0430\u0442\u044c \u0434\u043e \u043a\u043e\u043d\u0446\u0430 \u0433\u043e\u0434\u0430. ref. \u041e\u0441\u0442\u0430\u043b\u044c\u043d\u044b\u0435 \u043b\u044e\u0434\u0438 \u043e\u0442\u0434\u043e\u0445\u043d\u0443\u0442 \u0434\u043e \u043a\u043e\u043d\u0446\u0430 \u0433\u043e\u0434\u0430.",
"type_str": "figure",
"num": null
},
"TABREF1": {
"num": null,
"text": "",
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"text": "Terminology entry constraint for the related source phrase, whereas all the most frequent options can be incorporated by our multi-choice lexical constraint approach. In order to extract Russian wordform candidates, we created a list of Russian wordforms most frequently aligned to a single inflected English wordform. As English is a morphologically poor language, we would end up with a list of Russian wordforms that would frequently contain five or more entries.",
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"num": null,
"text": "An example of extracted wordform options depends on the inflections in the source language.",
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"num": null,
"text": "An example sentence pair for terminology usage evaluation.",
"html": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF8": {
"num": null,
"text": "Terminology usage and BLEU scores of baseline, source factoring by append (SF), lexical constraints (LC) and multi-choice lexical constraints (MLC) (ours) models.",
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}