{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:11:07.170847Z"
},
"title": "Autocorrect in the Process of Translation -Multi-task Learning Improves Dialogue Machine Translation",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chengqi",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic translation of dialogue texts is a much needed demand in many real life scenarios. However, current neural machine translation systems usually deliver unsatisfying translation results of dialogue texts. In this paper, we conduct a deep analysis of a dialogue corpus and summarize three major issues on dialogue translation, including pronoun dropping (ProDrop), punctuation dropping (PunDrop), and typos (DialTypo). In response to these challenges, we propose a joint learning method to identify omission and typo in the process of translating, and utilize context to translate dialogue utterances. To properly evaluate the performance, we propose a manually annotated dataset with 1,931 Chinese-English parallel utterances from 300 dialogues as a benchmark testbed for dialogue translation. Our experiments show that the proposed method improves translation quality by 3.2 BLEU over the baselines. It also elevates the recovery rate of omitted pronouns from 26.09% to 47.16%. The code and dataset are publicly available at https://github.com/rgwt123/DialogueMT.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic translation of dialogue texts is a much needed demand in many real life scenarios. However, current neural machine translation systems usually deliver unsatisfying translation results of dialogue texts. In this paper, we conduct a deep analysis of a dialogue corpus and summarize three major issues on dialogue translation, including pronoun dropping (ProDrop), punctuation dropping (PunDrop), and typos (DialTypo). In response to these challenges, we propose a joint learning method to identify omission and typo in the process of translating, and utilize context to translate dialogue utterances. To properly evaluate the performance, we propose a manually annotated dataset with 1,931 Chinese-English parallel utterances from 300 dialogues as a benchmark testbed for dialogue translation. Our experiments show that the proposed method improves translation quality by 3.2 BLEU over the baselines. It also elevates the recovery rate of omitted pronouns from 26.09% to 47.16%. The code and dataset are publicly available at https://github.com/rgwt123/DialogueMT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Remarkable progress has been made in Neural Machine Translation (NMT) (Bahdanau et al., 2015; Wu et al., 2016; Lin et al., 2020; Liu et al., 2020) in recent years, which has been widely applied in everyday life. A typical scenario for such application is translating dialogue texts, in particular the record of group chats or movie subtitles, which helps people of different languages understand cross-language chat and improve their comprehension capabilities.",
"cite_spans": [
{
"start": 70,
"end": 93,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 94,
"end": 110,
"text": "Wu et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 111,
"end": 128,
"text": "Lin et al., 2020;",
"ref_id": "BIBREF14"
},
{
"start": 129,
"end": 146,
"text": "Liu et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, traditional NMT models translate texts in a sentence-by-sentence manner and focus on the formal text input, such as WMT news translation * Corresponding author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) (Barrault et al., 2020) , while the translation of dialogue must take the meaning of context and the input noise into account. Table 1 shows examples of dialogue fragment in Chinese and their translation in English. Example (1) demonstrates that the omission in traditional translation (e.g., dropped pronouns in Chinese) leads to inaccurate translation results.",
"cite_spans": [
{
"start": 4,
"end": 27,
"text": "(Barrault et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 131,
"end": 138,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite its vast potential application, efforts of exploration into dialogue translation are far from enough. Existing works (Wang et al., 2016; Maruf et al., 2018) focus on either extracting dialogues from parallel corpora, such as OpenSubtitles (Lison et al., 2019) , or leveraging speaker information for integrating dialogue context into neural models. Also, the lack of both training data and benchmark test set makes current dialogue translation models far from satisfying and need to be further improved.",
"cite_spans": [
{
"start": 125,
"end": 144,
"text": "(Wang et al., 2016;",
"ref_id": "BIBREF26"
},
{
"start": 145,
"end": 164,
"text": "Maruf et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 247,
"end": 267,
"text": "(Lison et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we try to alleviate the aforementioned challenges in dialogue translation. We first analyze a fraction of a dialogue corpus and summarize three critical issues in dialogue translation, including ProDrop, PunDrop, and DialTypo. Then we design a Multi-Task Learning (MTLDIAL) approach that learns to self-correct sentences in the process of translating. The model's encoder part automatically learns how to de-noise the noise input via explicit supervisory signals provided by additional contextual labeling. We also propose three strong baselines for dialogue translation, including repair (REPAIRDIAL) and robust (ROBUSTDIAL) model. To alleviate the challenges arising from the scarcity of dialogue data, we use sub-documents in the bilingual parallel corpus to enable the model to learn from crosssentence context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Additionally as for evaluation, the most commonly used BLEU metric (Papineni et al., 2001) for NMT is not good enough to provide a deep look into the translation quality in such a scenario. Thus, we build a Chinese-English test set containing sentences with the issues in ProDrop, PunDrop and DialTypo, attached with the human translation and annotation. Finally, we get a test set of 300 dialogues with 1,931 parallel sentences.",
"cite_spans": [
{
"start": 67,
"end": 90,
"text": "(Papineni et al., 2001)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of this paper are as follows: a) We analyze three challenges ProDrop, PunDrop and DialTypo, which greatly impact the understanding and translation of a dialogue. b) We propose a contextual multi-task learning method to tackle the analyzed challenges. c) We create a Chinese-English test set specifically containing those problems and conduct experiments to evaluate proposed method on this test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There were already some manual analyses of translation errors, especially in the field of discourse translation. Voita et al. (2019) study English-Russian translation and find three main challenges for discourse translation: deixis, ellipsis, and lexical cohesion. For Chinese-English translation, tense consistency, connective mismatch, and content-heavy sentences are the most common issues (Li et al., 2014) .",
"cite_spans": [
{
"start": 113,
"end": 132,
"text": "Voita et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 393,
"end": 410,
"text": "(Li et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on Dialogue Translation",
"sec_num": "2"
},
{
"text": "Different from previous works, we mainly analyze the specific phenomena in dialogue translation. We begin with a study on a bilingual dialogue corpus (Wang et al., 2018) . 1 We translate source sentences into the target language at sentence level and compare translation results with reference at dialogue level. Around 1,000 dialogues are evaluated, and the results are reported in Table 2 . From the statistic, we observe two persistent dialogue translation problems: pronoun dropping (ProDrop), punctuation drop- ping (PunDrop) . The phenomenon is consistent with the issue we collect in practical Instant Messaging (IM) chat scenarios, except for typos since the analyzed dialogue corpus has been proofread to remove typos.",
"cite_spans": [
{
"start": 150,
"end": 169,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 172,
"end": 173,
"text": "1",
"ref_id": null
},
{
"start": 521,
"end": 530,
"text": "(PunDrop)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 383,
"end": 390,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Analysis on Dialogue Translation",
"sec_num": "2"
},
{
"text": "Pronouns are frequently omitted in pro-drop languages (Huang, 1989) , such as Chinese, Japanese, Korean, Vietnamese, and Slavic languages. Such phenomenon are more frequent in dialogue, where the interlocutors are both aware of what's omitted in the context. However, when translating a pro-drop language into a non-pro-drop language (e.g., English) 2 , it is hard to translate those omitted pronouns, resulting in grammatical errors or semantic inaccuracies in the target language. The first conversation in Table 1 is an example.",
"cite_spans": [
{
"start": 54,
"end": 67,
"text": "(Huang, 1989)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 509,
"end": 516,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Pronoun Dropping",
"sec_num": "2.1"
},
{
"text": "In dialogue scenarios, such as IM software, punctuation is often omitted and users tend to segment sentences with spaces. The problem becomes much serious in languages with no spaces, such as Chinese, Japanese, Korean, and Thai. Table 1 shows this phenomenon in Example (2).",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Punctuation Dropping",
"sec_num": "2.2"
},
{
"text": "Typo repairing is another fundamental but very challenging practical problem. In dialogue translation, typos or misspellings are very common, which dramatically undermine the quality of translation output produced by machine translation. Table 1 shows this phenomenon in Example (3).",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dialogue Typos",
"sec_num": "2.3"
},
{
"text": "This section aims to propose a unified framework that facilitates NMT to correct noisy inputs in dialogue neural machine translation (NMTDIAL ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach to NMTDIAL",
"sec_num": "3"
},
{
"text": "The most challenging problem for NMTDIAL is the data distribution gap between training and inference stage, where the training data are clean sentence-level pairs while the test data are noisy dialogue-level conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Perturbation Example Generation",
"sec_num": "3.1"
},
{
"text": "To bridge the distribution gap, the first step is to generate perturbation examples based on training instances. The data generation mainly consists of two steps. The first step is to obtain sub-documents with cross-sentence context, and the second step is to generate examples with word perturbations within sub-documents. Figure 1a shows a complete process. Cross-sentence Context It is difficult to acquire dialog-level parallel training data. As an alternative approach, we use parallel document data to catch dependencies across sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 333,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Contextual Perturbation Example Generation",
"sec_num": "3.1"
},
{
"text": "Formally, let x^d = {x^(1), x^(2), ..., x^(M)} be a source-language document containing M source sentences, and let y^d = {y^(1), y^(2), ..., y^(M)} be the corresponding target-language document containing the same number of sentences. To get more context information, we randomly sample consecutive sub-document pairs (x^d, y^d) of N sentences (i.e., snippet pairs from aligned documents). We set N \u2208 [1, 10] in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Perturbation Example Generation",
"sec_num": "3.1"
},
{
"text": "We use a special token <sep> as the separator to concatenate sentences into a parallel sub-document (x^d, y^d). For ProDrop and PunDrop, we traverse the source sentences of x^d, discard pronouns/punctuation in these sentences with a probability of 30%, and record the deletion positions with corresponding labels (see details below); to construct a typo, we choose a word with a probability of 1%, of which 80% are replaced with one of their homophones according to T_DialTypo and 20% are replaced with another random word. We determine these percentages by inspecting the generated perturbation data. For the annotation labels, we tag correct words with 0, DialTypo words with 1, ProDrop words with 2 and PunDrop words with 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Perturbation Example Generation",
"sec_num": "3.1"
},
{
"text": "Finally, we obtain the original sub-document x^d, its perturbed version x'^d, and their corresponding label sequences l^d and l'^d, where l^d, the label sequence of the unperturbed x^d, is a sequence of all 0s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Perturbation Example Generation",
"sec_num": "3.1"
},
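{
"text": "To make the data generation step concrete, the following is a minimal Python sketch (an illustrative assumption, not the authors' released code; PRONOUNS, PUNCTUATION, HOMOPHONES and VOCAB are placeholder resources standing in for the real pronoun list, punctuation set, homophone table T_DialTypo and vocabulary):

import random

PRONOUNS = {'我', '你', '他', '她'}           # placeholder pro-drop candidates
PUNCTUATION = {'，', '。', '？', '！'}        # placeholder punctuation set
HOMOPHONES = {'啊': ['阿'], '在': ['再']}     # placeholder homophone table (T_DialTypo)
VOCAB = ['今天', '明天', '哭', '笑']          # placeholder vocabulary for random typos
OK, TYPO, PRODROP, PUNDROP = 0, 1, 2, 3      # annotation labels from Section 3.1

def sample_subdocument(src_sents, tgt_sents):
    '''Sample N consecutive sentence pairs (N in [1, 10]) from one aligned document
    and join them with <sep>. src_sents / tgt_sents are lists of token lists.'''
    n = random.randint(1, 10)
    start = random.randrange(max(1, len(src_sents) - n + 1))
    x_d = [t for s in src_sents[start:start + n] for t in s + ['<sep>']][:-1]
    y_d = [t for s in tgt_sents[start:start + n] for t in s + ['<sep>']][:-1]
    return x_d, y_d

def perturb(tokens):
    '''Return a noisy copy of the tokenized source plus one label per surviving token.
    A dropped token leaves no output position, so here its label is attached to the
    next surviving token (one simple way to record the deletion position).'''
    noisy, labels, pending = [], [], OK
    for tok in tokens:
        if tok in PRONOUNS and random.random() < 0.30:
            pending = PRODROP
            continue
        if tok in PUNCTUATION and random.random() < 0.30:
            pending = PUNDROP
            continue
        label, pending = pending, OK
        if label == OK and tok != '<sep>' and random.random() < 0.01:
            if tok in HOMOPHONES and random.random() < 0.80:
                tok = random.choice(HOMOPHONES[tok])   # homophone typo
            else:
                tok = random.choice(VOCAB)             # random-word typo
            label = TYPO
        noisy.append(tok)
        labels.append(label)
    return noisy, labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Perturbation Example Generation",
"sec_num": "3.1"
},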
{
"text": "With the created training data, we first introduce two methods for NMTDIAL as our strong baselines, which will be elaborated here for model comparison. REPAIRDIAL A natural way for NMTDIAL is to train a dialog repair model to transform dialogue inputs into forms that an ordinary NMT system can deal with. REPAIRDIAL involves training a repair model to transform x d to x d and a clean translation model that translates x d to y d . As a pipeline method, REPAIRDIAL may suffer from error propagation. ROBUSTDIAL We extend the robust NMT (Cheng et al., 2018) to dialogue-level translation. Specifically, we take both the original (x d , y d ) and the perturbated (x d , y d ) bilingual pairs as training instances. So the model is more resilient on dialogue translation. During the inference stage, the robust model directly translates raw inputs into the target language.",
"cite_spans": [
{
"start": 537,
"end": 557,
"text": "(Cheng et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMTDIAL Base Models",
"sec_num": "3.2"
},
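{
"text": "The contrast between the two baselines can be sketched as follows (a purely illustrative sketch, not the released code; repair_model and nmt_model stand for any trained sequence-to-sequence models):

def repairdial_translate(noisy_doc, repair_model, nmt_model):
    '''REPAIRDIAL: a two-step pipeline that first denoises, then translates.
    Errors made by the repair step propagate into the translation step.'''
    cleaned_doc = repair_model(noisy_doc)
    return nmt_model(cleaned_doc)

def robustdial_training_data(clean_pairs, perturbed_pairs):
    '''ROBUSTDIAL: a single NMT model is trained on the union of the original
    (x^d, y^d) pairs and the perturbed (x'^d, y^d) pairs, and then translates
    raw noisy input directly at inference time.'''
    return clean_pairs + perturbed_pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMTDIAL Base Models",
"sec_num": "3.2"
},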
{
"text": "ROBUSTDIAL has the potential to handle translation problems caused by noisy dialogue inputs. However, the internal mechanism is rather implicit and in a black box. Therefore, the improvement is limited, and it is not easy to analyze the improvement. To address this issue, we introduce a contextaware multi-task learning method MTLDIAL for NMTDIAL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "As shown in \u2462 of Figure 1b , the only difference is that we have a contextual labeling module based on the encoder. We denote the final layer output of the Transformer encoder as H. For each token h i in H = (h 1 , h 2 , ..., h m ), the probability of contextual labeling is defined as:",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 26,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "P(p_i = j|X) = softmax(W \u2022 h_i + b)[j] (1), where X = (x_1, x_2, ..., x_m) is the input sequence and P(p_i = j|X) is the conditional probability that token x_i is labeled as j (j \u2208 {0, 1, 2, 3} as defined above).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "Here we make the labeling module as simple as possible, so that the Transformer encoder can behave like BERT (Devlin et al., 2019) , learning more information related to perturbation and guiding the decoder to find desirable translations.",
"cite_spans": [
{
"start": 109,
"end": 130,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
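{
"text": "As a rough sketch of what Eq. (1) amounts to (an assumption about the implementation, not the released code), the contextual labeling module is a single linear projection of the final encoder states followed by a softmax over the four labels:

import torch
import torch.nn as nn

class ContextualLabeler(nn.Module):
    '''Token-level labeling head on top of the Transformer encoder (Eq. 1).'''
    def __init__(self, d_model=1024, num_labels=4):
        super().__init__()
        self.proj = nn.Linear(d_model, num_labels)    # W and b of Eq. (1)

    def forward(self, encoder_states):
        # encoder_states: (batch, seq_len, d_model) final-layer outputs H
        logits = self.proj(encoder_states)            # (batch, seq_len, num_labels)
        return torch.log_softmax(logits, dim=-1)      # log P(p_i = j | X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},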
{
"text": "During the training phase, the model takes (x^d, x'^d, l^d, l'^d, y^d) as the training data. The learning process is driven by optimizing two objectives in a multi-task learning framework: sequence labeling as an auxiliary loss (L_SL) and machine translation as the primary loss (L_MT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "L SL = \u2212log(P ( x |x d ) + P ( x |x d )) (2) L M T = \u2212log(P (y d |x d ) + P (y d |x d )) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "The two objective are linearly combined as the overall objective in learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = L M T + \u03bb \u2022 L SL",
"eq_num": "(4)"
}
],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "\u03bb is coefficient. During experiments, we set as follows according the best practice:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb = max(1.0 \u2212 update_num 10 5 , 0.2)",
"eq_num": "(5)"
}
],
"section": "MTLDIAL",
"sec_num": "3.3"
},
{
"text": "where update_num is the number of updating steps during training. We introduce multi-task learning for two reasons: 1) The labeling performance reflects the model's understanding of sentences containing the mentioned phenomena. 2) Contextual Labeling can be seen as a pre-training process based on the BERTlike model, and explicit guidance can enable the encoder to learn more about the information we annotate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},
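{
"text": "A compact sketch of how the two losses and the schedule of Eqs. (2)-(5) could be combined in one training step (illustrative only; label_log_probs, gold_labels, translation_loss and update_num are assumed placeholders, with label_log_probs coming from a labeling head like the one sketched earlier and translation_loss from the usual decoder cross-entropy):

import torch.nn.functional as F

def mtl_loss(label_log_probs, gold_labels, translation_loss, update_num):
    '''Combine the auxiliary labeling loss (Eq. 2) with the translation loss (Eq. 3)
    using the linearly decaying coefficient of Eq. (5).'''
    # label_log_probs: (batch, seq_len, 4); gold_labels: (batch, seq_len) with tags 0-3
    sl_loss = F.nll_loss(label_log_probs.transpose(1, 2), gold_labels)
    lam = max(1.0 - update_num / 1e5, 0.2)            # Eq. (5)
    return translation_loss + lam * sl_loss           # Eq. (4): L = L_MT + lambda * L_SL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MTLDIAL",
"sec_num": "3.3"
},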
{
"text": "The modes for exploring dialogue context during decoding can be divided into offline and online. For the offline setting, all sentences in a dialogue are concatenated one by one with <sep>. The concatenated sequence is translated, and the target translation for each sentence can be easily detected according to the separator <sep>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Dialogue Context",
"sec_num": "3.4"
},
{
"text": "The offline mode can be used for dialogue translation where the entire source dialogue has already been available before translation (e.g., movie subtitles). However, we continuously get new source sentences for online chat and need to generate corresponding translations immediately. We refer to this mode as the online setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Dialogue Context",
"sec_num": "3.4"
},
{
"text": "We experiment with two online methods. One is online-cut where the current sentence is concatenated to the previous context with the separator <sep>. The trained NMTDIAL model then translates the concatenated sequence and the last target segment is used as the translation for the current source sentence. The other is online-fd. Online-fd is a force decoding method. It forces the decoder to use translated history and continues decoding instead of re-translating the entire concatenated sequence. Online-fd brings more consistent translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Dialogue Context",
"sec_num": "3.4"
},
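{
"text": "The offline and online-cut procedures can be sketched as follows (translate stands in for decoding with any trained NMTDIAL model and is a placeholder, not a real API; the max_context value is an arbitrary choice for illustration):

def translate(source_text):
    '''Placeholder for decoding with a trained NMTDIAL model.'''
    raise NotImplementedError

def offline_translate(dialogue_sentences):
    '''Offline: translate the whole dialogue at once and split on <sep>.'''
    source = ' <sep> '.join(dialogue_sentences)
    target = translate(source)
    return [seg.strip() for seg in target.split('<sep>')]

def online_cut(previous_sentences, new_sentence, max_context=3):
    '''Online-cut: prepend a few previous source sentences, translate the
    concatenation, and keep only the last target segment.'''
    context = previous_sentences[-max_context:]
    source = ' <sep> '.join(context + [new_sentence])
    target = translate(source)
    return target.split('<sep>')[-1].strip()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling Dialogue Context",
"sec_num": "3.4"
},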
{
"text": "For better evaluation of NMTDIAL, we create a Chinese-English test set covering all issues discussed above based on the corpus we analyze in the second section. Statistics on the built test set are displayed in Table 3 . Building such a test set is hard and time-consuming as we need to perform manual selection, translation and annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 211,
"end": 218,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Test Set",
"sec_num": "4.1"
},
{
"text": "As for translation quality evaluation, we use other metrics in addition to BLEU. For PunDrop and DialTypo, we evaluate BLEU scores on sen- tences containing missing punctuation or typos according to the annotation information. As for ProDrop, we evaluate the translation quality by the percentage of correctly recovering and translating the dropped pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set",
"sec_num": "4.1"
},
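{
"text": "As a purely illustrative approximation (the paper does not spell out the exact matching procedure), the ProDrop recovery rate can be estimated by checking whether the annotated target-side pronoun appears in the system output for the corresponding sentence:

def prodrop_recovery_rate(system_outputs, pronoun_annotations):
    '''system_outputs: dict mapping sentence id to the MT output string.
    pronoun_annotations: list of (sentence_id, expected_english_pronoun) pairs.'''
    recovered = sum(
        1 for sent_id, pronoun in pronoun_annotations
        if pronoun.lower() in system_outputs[sent_id].lower().split()
    )
    return 100.0 * recovered / max(1, len(pronoun_annotations))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set",
"sec_num": "4.1"
},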
{
"text": "We adopt the Chinese-English corpus from WMT2020 3 , with about 48M sentence pairs, as our bilingual training data D. We select newstest2019 as the development set. After splicing, we get D doc with 1.2M pairs and corresponding perturbated dataset D and D doc with 48M and 1.2M pairs respectively. We use byte pair encoding compression algorithm (BPE) (Sennrich et al., 2016) to process all these data and limit the number of merge operations to a maximum of 30K. In our studies, all translation models are Transformer-big, including 6 layers for both encoders and decoders, 1024 dimensions for model, 4096 dimensions for FFN layers and 16 heads for attention.",
"cite_spans": [
{
"start": 352,
"end": 375,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},
{
"text": "During training, we use label smoothing = 0.1 (Szegedy et al., 2016) , attention dropout = 0.1 and dropout (Hinton et al., 2012) with a rate of 0.3 for all other layers. We use Adam (Kingma and Ba, 2015) to train the NMT models. \u03b21 and \u03b22 of Adam are set to 0.9 and 0.98, the learning rate is set to 0.0005, and gradient norm 5. The models are trained with a batch size of 32,000 tokens on 8 Tesla V100 GPUs during training. During decoding, we employ beam search algorithm and set the beam size to 5. We use sacrebleu (Post, 2018) to calculate uncased BLEU-4 (Papineni et al., 2001 ).",
"cite_spans": [
{
"start": 46,
"end": 68,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 107,
"end": 128,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 519,
"end": 531,
"text": "(Post, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 560,
"end": 582,
"text": "(Papineni et al., 2001",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},
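{
"text": "For quick reference, the training configuration described above can be collected into a single dictionary (a plain summary of the reported values, not a runnable toolkit configuration):

TRANSFORMER_BIG_CONFIG = {
    'encoder_layers': 6,
    'decoder_layers': 6,
    'model_dim': 1024,
    'ffn_dim': 4096,
    'attention_heads': 16,
    'bpe_merge_operations': 30000,
    'label_smoothing': 0.1,
    'attention_dropout': 0.1,
    'dropout': 0.3,
    'optimizer': 'Adam',
    'adam_betas': (0.9, 0.98),
    'learning_rate': 0.0005,
    'gradient_clip_norm': 5,
    'batch_size_tokens': 32000,
    'gpus': 8,
    'beam_size': 5,
}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.2"
},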
{
"text": "The offline mode aims at using the entire source dialogue for translation. We experiment with all the methods in the offline setting, and the results are shown in Table 4 . BASE is a Transformer-big model trained with D and D doc . GOLD+BASE represents the oracle result on this test set. We can see that MTLDIAL has achieved the best results, reducing the gap between test wrong and test gold from 4.1 to 0.9. Compared with ROBUSTDIAL and MTLDIAL, REPAIRDIAL performs relatively poorly. We believe that this is due to the error propagation caused by the pipeline. From the specific indicators, we can draw the following conclusions: 1) DialTypo has a very obvious impact on BLEU, and the gap between BASE and GOLD+BASE is more than 12 points;",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Results of Offline Setting",
"sec_num": "4.3"
},
{
"text": "2) The recovery of ProDrop is a relatively difficult task. Although compared with BASE, the current best result of 47.16% has been greatly improved, but is still far away from the golden result 97.32%; 3) PunDrop seems to be a relatively easy task for each method to address.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of Offline Setting",
"sec_num": "4.3"
},
{
"text": "The online mode only makes use of previous context during translation. An extreme situation of online setting is that there is no context, that is, sentence-level translation. We show the results of all the methods on the test set at the sentence level in Table 5 . Despite the lack of context, our approaches can still bring general benefits. We find that ProDrop relies heavily on context, especially for MTLDIAL, where the absence of context results in a 12.38% drop in performance. This is in line with our expectations, as in many cases machine translation system heavily depends on context to fulfill the dropped pronouns. We further experiment on how context lengths can affect NMTDIAL. The results are shown in Figure 2. In the online-cut setting, we can see that using previous few sentences as context may improve overall BLEU score, but continuously adding more preceding texts will lead to a continuous decline. Online-fd performs well because using historical translation records to continue decoding can bring more consistent translation results. For the recovery accuracy of ProDrop, online-cut is better than online-fd in contrast, because forced decoding may cause wrong pronoun transmission.",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 5",
"ref_id": "TABREF10"
},
{
"start": 719,
"end": 725,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results of Online Setting",
"sec_num": "4.4"
},
{
"text": "To better understand how our proposed MTLDIAL make sense, we calculate the labeling performance on both validation and test set. Table 6 shows the overall performance. The validation set follows the same processing progress of training data, while the test set is the real dialogue data set built manually.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 6",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Labeling Performance",
"sec_num": "5.1"
},
{
"text": "The proposed model obtains a 54.3% F1 score on the validation set for ProDrop, 70.9% for PunDrop, and 73.2% for DialTypo. [Figure 3: ProDrop recovery performance of BASE and contextual MTLDIAL. Total means the total number of occurrences of the corresponding pronoun in the test set; pronouns occurring fewer than 5 times are ignored.] When testing on the real test data, the performance on ProDrop declines considerably because of the difference between the synthetic training/validation data and the real test data. Especially noteworthy is that the F1 score of DialTypo drops the most, reaching 26%, because of its low recall. This may be due to the considerable difference between the typos generated by our automatic method and the actual typo distribution.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Labeling Performance",
"sec_num": "5.1"
},
{
"text": "We further explore the auto-correction of specific pronouns. As shown in Figure 3 , we can find that pronouns such as I/you, which occur mostly in the corpus, generally have a higher recovery success rate. We believe this is due to the data imbalance. Compared with BASE, MTLDIAL has a much better performance. While ProDrop recovery accuracy has been improved, it still has not achieved 50%. The most common error is that the model does not capture any context or captures previous inappropriate context. We summarize frequentlyoccurring recovery errors in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 3",
"ref_id": null
},
{
"start": 558,
"end": 565,
"text": "Table 7",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Effects of Pronoun Correcting",
"sec_num": "5.2"
},
{
"text": "Our work is related with both dialogue translation and robust training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "There has been some work on building bilingual dialogue data sets for the translation task in recent years. Wang et al. (2016) propose a novel approach to automatically construct parallel discourse corpus for dialogue machine translation and release around 100K parallel discourse data with manual speaker and dialogue boundary annotation. Maruf et al. (2018) propose the task of translating Bilingual Multi-Speaker Conversations. They introduce datasets extracted from Europarl and Opensubtitles and explore how to exploit both source and targetside conversation histories. Bawden et al. (2019) present a new English-French test set for evaluating of Machine Translation (MT) for informal, written bilingual dialogue. Recently WMT2020 has also proposed a new shared task -machine translation for chats, 4 focusing on bilingual customer support chats (Farajian et al., 2020) .",
"cite_spans": [
{
"start": 108,
"end": 126,
"text": "Wang et al. (2016)",
"ref_id": "BIBREF26"
},
{
"start": 340,
"end": 359,
"text": "Maruf et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 575,
"end": 595,
"text": "Bawden et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 851,
"end": 874,
"text": "(Farajian et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialogue Translation",
"sec_num": null
},
{
"text": "Neural models have been usually affected by noisy issues. Many efforts (Li et al., 2017; Sperber et al., 2017; Vaibhav et al., 2019; Yang et al., 2020) focus on data augmentation to alleviate the problem by adding synthetic noise to the training set. However, generating noise has always been a challenge, as natural noise is always more diversified than artificially constructed noise (Belinkov and Bisk, 2018; Anastasopoulos, 2019; .",
"cite_spans": [
{
"start": 71,
"end": 88,
"text": "(Li et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 89,
"end": 110,
"text": "Sperber et al., 2017;",
"ref_id": "BIBREF21"
},
{
"start": 111,
"end": 132,
"text": "Vaibhav et al., 2019;",
"ref_id": "BIBREF23"
},
{
"start": 133,
"end": 151,
"text": "Yang et al., 2020)",
"ref_id": "BIBREF28"
},
{
"start": 386,
"end": 411,
"text": "(Belinkov and Bisk, 2018;",
"ref_id": "BIBREF5"
},
{
"start": 412,
"end": 433,
"text": "Anastasopoulos, 2019;",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robust Training",
"sec_num": null
},
{
"text": "In this paper, we manually analyze challenges in dialogue translation and detect three main problems. In order to tackle these issues, we propose a multitask learning method with contextual labeling. For deep evaluation, we construct dialogues with translation and detailed annotations as a benchmark test set. Our proposed model achieves substantial improvements over the baselines. What is more, we further analyze the performance of contextual labeling and pronoun recovery errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "https://en.wikipedia.org/wiki/Pro-drop_language",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This corpus includes News Commentary, Wiki Titles, UN Parallel Corpus, CCMT Corpus, WikiMatrix and Backtranslated news.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/wmt20/chat-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the bilingual speakers for test set construction, and the anonymous reviewers for suggestions. Deyi Xiong is partially supported by the Natural Science Foundation of Tianjin (Grant No. 19JCZDJC31400) and the Royal Society (London) (NAF\\R1\\180122).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An analysis of sourceside grammatical errors in nmt",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "213--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos. 2019. An analysis of source- side grammatical errors in nmt. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 213-223.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation of text from non-native speakers",
"authors": [
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Toan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonios Anastasopoulos, Alison Lui, Toan Q Nguyen, and David Chiang. 2019. Neural machine transla- tion of text from non-native speakers. In NAACL- HLT (1).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Findings of the 2020 conference on machine translation (wmt20)",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Biesialska",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Joanis",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "1--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, et al. 2020. Find- ings of the 2020 conference on machine translation (wmt20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Diabla: A corpus of bilingual spontaneous written dialogues for machine translation",
"authors": [
{
"first": "Rachel",
"middle": [],
"last": "Bawden",
"suffix": ""
},
{
"first": "Sophie",
"middle": [],
"last": "Rosset",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lavergne",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Bilinski",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.13354"
]
},
"num": null,
"urls": [],
"raw_text": "Rachel Bawden, Sophie Rosset, Thomas Lavergne, and Eric Bilinski. 2019. Diabla: A corpus of bilingual spontaneous written dialogues for machine transla- tion. arXiv preprint arXiv:1905.13354.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Synthetic and natural noise both break neural machine translation",
"authors": [
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bisk",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine transla- tion. In International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards robust neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1756--1766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, pages 1756- 1766. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT (1).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Andr\u00e9 FT Martins, Sameen Maruf, and Gholamreza Haffari. 2020. Findings of the wmt 2020 shared task on chat translation",
"authors": [
{
"first": "",
"middle": [],
"last": "M Amin Farajian",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ant\u00f3nio",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lopes",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "65--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Amin Farajian, Ant\u00f3nio V Lopes, Andr\u00e9 FT Mar- tins, Sameen Maruf, and Gholamreza Haffari. 2020. Findings of the wmt 2020 shared task on chat trans- lation. In Proceedings of the Fifth Conference on Machine Translation, pages 65-75.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving neural networks by preventing co-adaptation of feature detectors",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhut- dinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Pro-drop in chinese: A generalized control theory",
"authors": [
{
"first": "James",
"middle": [],
"last": "Ct",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 1989,
"venue": "The null subject parameter",
"volume": "",
"issue": "",
"pages": "185--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CT James Huang. 1989. Pro-drop in chinese: A gener- alized control theory. In The null subject parameter, pages 185-214. Springer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Assessing the discourse factors that influence the quality of machine translation",
"authors": [
{
"first": "Jessy",
"middle": [],
"last": "Junyi",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "283--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyi Jessy Li, Marine Carpuat, and Ani Nenkova. 2014. Assessing the discourse factors that influ- ence the quality of machine translation. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 283-288.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Robust training under linguistic adversity",
"authors": [
{
"first": "Yitong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "21--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yitong Li, Trevor Cohn, and Timothy Baldwin. 2017. Robust training under linguistic adversity. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, pages 21-27.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pretraining multilingual neural machine translation by leveraging alignment information",
"authors": [
{
"first": "Zehui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Jiangtao",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.03142"
]
},
"num": null,
"urls": [],
"raw_text": "Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and Lei Li. 2020. Pre- training multilingual neural machine translation by leveraging alignment information. arXiv preprint arXiv:2010.03142.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Open subtitles 2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Milen",
"middle": [],
"last": "Kouylekov",
"suffix": ""
}
],
"year": 2019,
"venue": "LREC 2018, Eleventh International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison, J\u00f6rg Tiedemann, Milen Kouylekov, et al. 2019. Open subtitles 2018: Statistical rescoring of sentence alignments in large, noisy parallel corpora. In LREC 2018, Eleventh International Conference on Language Resources and Evaluation. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transac- tions of the Association for Computational Linguis- tics, 8:726-742.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Contextual neural model for translating bilingual multi-speaker conversations",
"authors": [
{
"first": "Sameen",
"middle": [],
"last": "Maruf",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameen Maruf, Andr\u00e9 FT Martins, and Gholamreza Haffari. 2018. Contextual neural model for trans- lating bilingual multi-speaker conversations. WMT 2018, page 101.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2001,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. Bleu: a method for automatic eval- uation of machine translation. In ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), volume 1, pages 1715-1725.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Toward robust neural machine translation for noisy input sequences",
"authors": [
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2017,
"venue": "International Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward robust neural machine translation for noisy input sequences. In International Workshop on Spo- ken Language Translation (IWSLT).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Rethinking the inception architecture for computer vision",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Zbigniew",
"middle": [],
"last": "Wojna",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "2818--2826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 2818-2826.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving robustness of machine translation with synthetic noise",
"authors": [
{
"first": "Vaibhav",
"middle": [],
"last": "Vaibhav",
"suffix": ""
},
{
"first": "Sumeet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1916--1920",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaibhav Vaibhav, Sumeet Singh, Craig Stewart, and Graham Neubig. 2019. Improving robustness of ma- chine translation with synthetic noise. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1916-1920.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1198--1212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Voita, Rico Sennrich, and Ivan Titov. 2019. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1198-1212.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Translating pro-drop languages with reconstruction models",
"authors": [
{
"first": "Longyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Thirty-Second AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longyue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, and Qun Liu. 2018. Trans- lating pro-drop languages with reconstruction mod- els. In Thirty-Second AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Automatic construction of discourse corpora for dialogue translation",
"authors": [
{
"first": "Longyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longyue Wang, Xiaojun Zhang, Zhaopeng Tu, Andy Way, and Qun Liu. 2016. Automatic construction of discourse corpora for dialogue translation.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Google's neural machine translation system",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "Bridging the gap between human and machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between hu- man and machine translation. arXiv preprint arXiv:1609.08144.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Towards making the most of bert in neural machine translation",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chengqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "34",
"issue": "",
"pages": "9378--9385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Weinan Zhang, Yong Yu, and Lei Li. 2020. Towards making the most of bert in neural machine translation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9378- 9385.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Overall diagram of NMTDIAL. (a) demonstrates the process of data generation, and (b) displays the three proposed methods. \u2460/\u2461/\u2462 represent REPAIRDIAL, ROBUSTDIAL and MTLDIAL respectively.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Overall BLEU and ProDrop recovery performance (Accuracy) of MTLDIAL with different context length. Dash lines are the offline results.",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "Examples of ProDrop (1), PunDrop (2) and DialTypo (3). MT is translation results from Google Translate while REF is references.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF3": {
"text": "Manual evaluation of dialogue samples.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF4": {
"text": "Nancy \u600e\u4e48 \u4e86 \uff1f<sep> \u5979 \u662f\u4e0d\u662f \u54ed \u4e86 \u554a <eos> Nancy \u600e\u4e48 \u4e86 <sep> \u662f\u4e0d\u662f \u54ed \u4e86 \u963f <eos>",
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td>Encoder</td><td>Decoder</td></tr><tr><td colspan=\"2\">Nancy \u600e\u4e48 \u4e86 \uff1f</td><td/><td colspan=\"3\">\u5979 \u662f\u4e0d\u662f \u54ed \u4e86 \u554a</td><td>\u2460</td><td>Decoder</td><td>Encoder</td></tr><tr><td/><td>PunDrop</td><td/><td colspan=\"2\">ProDrop</td><td>DialTypo</td><td>\u2461</td><td>Encoder</td><td>Decoder</td></tr><tr><td>0</td><td>0 0</td><td>3</td><td>2</td><td colspan=\"2\">0 0 1 0</td><td/><td>Encoder</td><td>Decoder</td></tr><tr><td colspan=\"6\">What happened to Nancy ? &lt;sep&gt; Did she cry ? &lt;eos&gt;</td><td>\u2462</td></tr><tr><td colspan=\"3\">What happened to Nancy ?</td><td/><td/><td>Did she cry ?</td><td/></tr><tr><td/><td/><td>(a)</td><td/><td/><td/><td/><td>(b)</td></tr></table>",
"html": null
},
"TABREF5": {
"text": "as shown in Figure 1a. Contextual Perturbation We then consider generating perturbation example x d from x d with respect to sub-document context. For ProDrop, PunDrop and DialTypo, we build a Chinese pronoun table T ProDrop , a common punctuation table T PunDrop and a Chinese homophone table T DialTypo respectively.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF7": {
"text": "Statistics on the test set. \"/\" denote numbers in Chinese and English separately.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF9": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">: Experiment results on our constructed di-</td></tr><tr><td colspan=\"4\">alogue translation test set in offline setting. The</td></tr><tr><td colspan=\"4\">GOLD+BASE represents translations of completely</td></tr><tr><td colspan=\"4\">correct inputs (without ProDrop, PunDrop or</td></tr><tr><td colspan=\"4\">DialTypo) using BASE model, which is used to</td></tr><tr><td colspan=\"4\">show the oracle results with Transformer on the test</td></tr><tr><td>set.</td><td/><td/><td/></tr><tr><td>Methods</td><td>Overall BLEU</td><td>ProDrop</td><td>Details PunDrop DialTypo</td></tr><tr><td>BASE</td><td colspan=\"2\">32.8(+0.1) 19.06%(-7.03%)</td><td>28.1(-0.1) 22.3(-1.7)</td></tr><tr><td>REPAIRDIAL</td><td colspan=\"2\">33.8(-0.2) 24.75%(-5.02%)</td><td>32.0(+0.8) 28.3(+0.9)</td></tr><tr><td colspan=\"4\">ROBUSTDIAL 34.2(+0.1) 36.79%(-8.69%) 32.7(-0.3) 28.9(-0.5)</td></tr><tr><td>MTLDIAL</td><td colspan=\"3\">35.3(-0.6) 34.78%(-12.38%) 34.3(-0.0) 28.6(-0.1)</td></tr><tr><td colspan=\"4\">GOLD+BASE 37.1(+0.3) 96.66%(-0.66%) 35.3(+0.7) 35.9(-0.9)</td></tr></table>",
"html": null
},
"TABREF10": {
"text": "Results on our constructed dialogue translation test set in online setting at the sentence level.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF12": {
"text": "Labeling performance on the validation/test set.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF13": {
"text": "She is no longer in my law firm. What, why are you/is she going? I open my/She opens her own law firm.(2) \u743c\u65af (\u6211/I)\u95ee\u4f60\u4ef6\u4e8b Jones asked/, I want to ask you something.He helped me out in private last time. I/He nearly lost my/his job.",
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>zh</td><td>en</td></tr><tr><td/><td>\u827e\u4e3d\u6700\u8fd1\u600e\u4e48\u6837\u4e86</td><td>What's going on with Ellie?</td></tr><tr><td>(1)</td><td>\u5979\u5df2\u7ecf\u4e0d\u5728\u6211\u7684\u5f8b\u6240\u4e86 \u4ec0\u4e48 (\u5979/she)\u4e3a\u4ec0\u4e48\u8d70\u4e86</td></tr><tr><td/><td>(\u5979/she)\u5f00\u4e86\u81ea\u5df1\u7684\u5f8b\u6240</td></tr><tr><td>(3)</td><td>\u4ed6\u4e0a\u6b21\u5e2e\u6211\u79c1\u4e0b\u641e (\u4ed6/He)\u5dee\u70b9\u5de5\u4f5c\u90fd\u4e22\u4e86</td></tr></table>",
"html": null
},
"TABREF14": {
"text": "Examples of ProDrop recovery errors.",
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td>80</td><td/><td/><td/><td/><td/><td>total DIALMTL BASE</td></tr><tr><td/><td>60</td><td/><td/><td/><td/><td/></tr><tr><td>number</td><td>40</td><td/><td/><td/><td/><td/></tr><tr><td/><td>20</td><td/><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>i</td><td>you</td><td>he</td><td>she</td><td>we</td><td>they</td></tr></table>",
"html": null
}
}
}
}