{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:34:05.014539Z"
},
"title": "Goku's Participation in WAT 2020",
"authors": [
{
"first": "Dongzhe",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rakuten Institute of Technology Rakuten, Inc",
"location": {}
},
"email": ""
},
{
"first": "Ohnmar",
"middle": [],
"last": "Htun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rakuten Institute of Technology Rakuten, Inc",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper introduces our neural machine translation systems' participation in the WAT 2020 (team ID: goku20). We participated in the (i) Patent, (ii) Business Scene Dialogue (BSD) document-level translation, (iii) Mixeddomain tasks. Regardless of simplicity, standard Transformer models have been proven to be very effective in many machine translation systems. Recently, some advanced pretraining generative models have been proposed on the basis of encoder-decoder framework. Our main focus of this work is to explore how robust Transformer models perform in translation from sentence-level to document-level, from resource-rich to low-resource languages. Additionally, we also investigated the improvement that fine-tuning on the top of pre-trained transformer-based models can achieve on various tasks.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper introduces our neural machine translation systems' participation in the WAT 2020 (team ID: goku20). We participated in the (i) Patent, (ii) Business Scene Dialogue (BSD) document-level translation, (iii) Mixeddomain tasks. Regardless of simplicity, standard Transformer models have been proven to be very effective in many machine translation systems. Recently, some advanced pretraining generative models have been proposed on the basis of encoder-decoder framework. Our main focus of this work is to explore how robust Transformer models perform in translation from sentence-level to document-level, from resource-rich to low-resource languages. Additionally, we also investigated the improvement that fine-tuning on the top of pre-trained transformer-based models can achieve on various tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper introduces our neural machine translation (NMT) systems' participation in the 7th Workshop on Asian Translation (WAT-2020) shared translation task (Nakazawa et al., 2020) . We participated in the (i) JPO Patent, (ii) Documentlevel Business Scene Dialogue (BSD) translation, and (iii) Mixed-domain tasks. In particular, the document-level translation tasks are newly introduced for WAT 2020 as traditional translation tasks such as ASPEC usually focus on sentence-level translation, whose quality tends to saturation.",
"cite_spans": [
{
"start": 158,
"end": 181,
"text": "(Nakazawa et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We built our NMT systems based on the standard Transformer (Vaswani et al., 2017) for the JPO Patent and Mixed-domain tasks. In addition to standard Transformer, a pre-training auto-encoder model mBART has been explored in the JPO patent task. In terms of the documentlevel translation task, we evaluated on the BSD corpus using the hierarchical Transformer mod-els (Miculicich et al., 2018 ) and compared the results with our fine-tuned mBART models, which were initially built to deal with the document-level translation as a downstream task.",
"cite_spans": [
{
"start": 59,
"end": 81,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 366,
"end": 390,
"text": "(Miculicich et al., 2018",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The NMT systems for the JPO patent task have been trained in a constrained manner, which means no other resources were used except training corpus provided by the shared task organizers, and achieved remarkable performance. On the other hand, we leveraged other data resources when only limited number of data provided for model training. For instance, we included the Japanese-English Subtitle Corpus (JESC) (Pryzant et al., 2018) and Myth Corpus (Susanto et al., 2019) as auxiliary training data for the document-level and mixeddomain translation tasks, respectively. Our main findings for each task are summarized in the following:",
"cite_spans": [
{
"start": 409,
"end": 431,
"text": "(Pryzant et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 448,
"end": 470,
"text": "(Susanto et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Patent task: We built several Transformerbased systems with and without pre-training approach and compared the performance for the sentence-level translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Document-level translation task: We applied two document-level NMT systems and found that the mBART model pre-trained on the large-scale corpora greatly outperformed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Mixed-domain task: We designed constrastive experiments with different data combinations for Myanmar\u2194English translation, and validated the effectiveness of data augmentation for low-resource translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the patent translation task, we conducted the experiments on the JPO Patent Corpus (JPC) version 4.3 that is constructed by the Japan Patent Office (JPO). Same as the previous tasks in WAT 2019 (Nakazawa et al., 2019) , it consists of patent description translation sub-tasks for Chinese-Japanese, Korean-Japanese, and English-Japanese. Each language pair's training set contains 1M parallel sentences individually, which cover four patent sections: Chemistry, Electricity, Mechanical engineering, and Physics, based on International Patent Classification (IPC). Using the official training, develop, and test split provided by the organizer without other resources, we trained individual unidirectional Transformer models for each language pair. In addition, pre-training approach for sentencelevel translation has been explored in this task.",
"cite_spans": [
{
"start": 197,
"end": 220,
"text": "(Nakazawa et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2.1"
},
{
"text": "As the baseline NMT systems data preparation suggested 1 , we pre-tokenized the data with the following tools: Juman version 7.01 2 for Japanese; Stanford Word Segmenter version 4.0.0 3 for Chinese; Mecab-ko 4 for Korean, and Moses tokenizer for English. For the byte-pair encoding (BPE)-based Senten-cePiece model (Kudo and Richardson, 2018) training, we set the vocabulary size to 100,000 and threshold of occurrence to 10 times for subword units (Sennrich et al., 2016) removal from the vocabulary, following same data preparation by BPE for the baseline NMT system released by the organizer 5 . Moreover, we merged the source and target sentences and trained a joint vocabulary for the NMT systems. For the text input to mBART fine-tuning, we used the same 250,000 vocabulary as in the pre-trained mBART model across the 25 languages, which was also tokenized with a Sen-tencePiece model based on BPE method. Note that the aforementioned pre-tokenization was not applicable to the fine-tuning approach.",
"cite_spans": [
{
"start": 315,
"end": 342,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 449,
"end": 472,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Processing",
"sec_num": "2.2"
},
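For reference, below is a minimal sketch of how a joint BPE SentencePiece model of this size could be trained with the sentencepiece Python package. The file name and character-coverage value are illustrative assumptions, and the occurrence-threshold filtering described above is a separate step not shown here.

```python
# Hypothetical sketch, not the authors' exact pipeline: train a joint
# 100,000-subword BPE SentencePiece model on the merged, pre-tokenized
# source+target text described above.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train.merged.pretok.txt",   # hypothetical merged source+target file
    model_prefix="jpc_joint_bpe",
    model_type="bpe",
    vocab_size=100000,
    character_coverage=0.9995,         # illustrative value for CJK-heavy text
)

sp = spm.SentencePieceProcessor()
sp.load("jpc_joint_bpe.model")
print(sp.encode_as_pieces("a pre-tokenized patent sentence"))
```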
{
"text": "Firstly, we built models based on the standard Transformer (Vaswani et al., 2017) with the implementation in the Fairseq toolkit (Ott et al., 2019 Intuitively, we tied the input embedding layers of encoder and decoder together with the decoder output embedding layers (Press and Wolf, 2017) for the tokenized input as well as the detokenized output. As a result, a large amount of parameters were automatically saved without depressing the performance. The model was optimized with Adam (Kingma and Ba, 2015) using \u03b21 = 0.9, \u03b22 = 0.98, and = 1e \u22128 . Same as (Susanto et al., 2019) , we used the learning rate schedule of 0.001 and maximum 4000 tokens in a batch, where the parameters were updated after every 2 epochs.",
"cite_spans": [
{
"start": 59,
"end": 81,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 129,
"end": 146,
"text": "(Ott et al., 2019",
"ref_id": "BIBREF12"
},
{
"start": 268,
"end": 290,
"text": "(Press and Wolf, 2017)",
"ref_id": "BIBREF13"
},
{
"start": 558,
"end": 580,
"text": "(Susanto et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "2.3"
},
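For concreteness, here is a hedged PyTorch sketch of the optimizer settings and embedding tying described above; it is not the authors' Fairseq invocation, and the model and dimensions are placeholders.

```python
# Hedged PyTorch sketch (placeholder model): Adam with the reported
# hyper-parameters, plus input/output embedding tying in the style of
# Press and Wolf (2017).
import torch
import torch.nn as nn

vocab_size, d_model = 100_000, 512                 # illustrative sizes
embed = nn.Embedding(vocab_size, d_model)          # shared input embedding
out_proj = nn.Linear(d_model, vocab_size, bias=False)
out_proj.weight = embed.weight                     # tie output projection to input embedding

model = nn.Transformer(d_model=d_model, nhead=8)   # stand-in encoder-decoder
params = list(model.parameters()) + list(embed.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3, betas=(0.9, 0.98), eps=1e-8)
```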
{
"text": "Secondly, we fine-tuned on the JPO patent corpus using the mBART auto-encoder model , which has been pre-trained on largescale monolingual CommonCrawl (CC) corpus in 25 languages using the BART objective . Specifically, we used the mBART models in a teacher-forcing manner, where the pretrained mBART weights 6 (\u223c 680M parameters) were loaded. Then, our student models were utterly built upon the bi-text data, which fed the source language and target language into the pre-trained encoder and decoder for fine-tuning. We experimented our mBART and standard Transformer with the hyper-parameters summarized in Table 1 on 4 Nvidia V100 GPUs.",
"cite_spans": [],
"ref_spans": [
{
"start": 610,
"end": 617,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2.3"
},
{
"text": "Finally, the best performing models on the validation sets was selected and applied for decoding the test sets. Furthermore, we trained three independent models with different random seeds in order to perform ensemble decoding. Table 2 : JPO task results. \"XFMR\" is short for Transformer and HUMAN refers to the final results provided by the task organizers. Readers may refer to the task overview for the detailed breakdown for each test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "2.3"
},
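As an illustration of the ensembling step, the sketch below averages the next-token distributions of several independently trained models at each decoding step. The actual submissions used the toolkit's built-in ensembling, and the model call signature here is an assumption.

```python
# Illustrative sketch of ensemble decoding with models trained from
# different random seeds: average their next-token probabilities and
# pick the argmax (greedy decoding shown for brevity).
import torch

@torch.no_grad()
def ensemble_next_token(models, src_tokens, prev_output_tokens):
    """Return the argmax over the averaged next-token distribution."""
    avg_probs = None
    for model in models:
        logits = model(src_tokens, prev_output_tokens)    # assumed: (batch, tgt_len, vocab)
        probs = torch.softmax(logits[:, -1, :], dim=-1)   # distribution for the next position
        avg_probs = probs if avg_probs is None else avg_probs + probs
    return (avg_probs / len(models)).argmax(dim=-1)
```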
{
"text": "As shown in Table 2 , our model performance for the patent task has been split into four parts for standard Transformer and mBART approaches, with respect to the single and ensemble models. Note that only the results of the test-N 7 set and the Expression Pattern task (JPCEP) for were reported in the table for brevity. Here, we present the results based on the automatic metrics scores, as well as the human evaluation results 8 . In general, the Transformers' single model decoding results lagged behind that of the ensemble decoding in all directions. Without using any other resources, our best submissions of Transformer models obtain the first place on the WAT leaderboard 9 for ja-zh, and ja-en.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "2.4"
},
{
"text": "In terms of the fine-tuning results, we observed that the mBART single models outperformed the Transformer single models in 5 out of 7 language pairs, where the maximum margins can reach as much as 1.3 BLEU points (i.e., ja-zh and ja-ko). However, the ensemble model decoding of the mBART models could hardly boost the gains as we expected, which indicates that the advantages of Transformer-based pre-training approach can not be reflected in the JPO patent tasks when the training data size is sufficient (e.g., 1M).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "2.4"
},
{
"text": "3 Document-Level Translation Task",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "2.4"
},
{
"text": "In this year, WAT workshop introduced a new document-level translation task with sub-tasks from the perspective of two different domains: scientific paper and business conversation. In particular, we participate in the business conversation sub-task in WAT 2020. We followed the instruction of the shared-task organizer, using the Business Scene Dialogue (BSD) corpus for the dataset including training, development and test data. The BSD corpus consist of 20,000 training, 2,051 development and 2,120 test sentences from 670, 69, 69 documents, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3.1"
},
{
"text": "Considering the limited document-level parallel data (<1k) in BSD training and development sets, we supposed that auxiliary document-level resources would be necessarily important. Therefore, we performed constrastive experiments with and without additional resources for this task. In particular, we appended the Japanese-English Subtitle Corpus (JESC) training set to the original BSD corpus, which brings in about 2.8M ja\u2194en sentences. We trained a context-aware hierarchical attention network (HAN) from scratch and fine-tuned on the BSD corpus using the mBART models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3.1"
},
{
"text": "For the document-level NMT tasks, we utilized the contextual information of 3 sentences instead of the entire documents in the dataset for both the HAN and mBART models. Similar to the data preprocessing illustrated in Section 2.2, we ran the Juman analyzer to segment the Japanese characters but did nothing on the English documents for the HAN models. After pre-tokenization, we fed the Japanese and English documents into separate SentencePiece models (SPM) to train BPE subword units. The subword vocabulary size is 32,000 with 100% character coverage. On the other hand, we tokenized for the fine-tuning model with the pre-trained mBART multilingual vocabulary with 250,000 subword tokens. None of additional preprocessing was required in this implementation. For both two experimental settings, all empty lines and sentences exceeding 512 subword tokens have been removed from the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Processing",
"sec_num": "3.2"
},
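The sketch below illustrates how such tri-sentence inputs could be built, under stated assumptions: documents are split into non-overlapping three-sentence segments (cf. footnote 10), and empty lines and over-long segments are dropped. The `tokenize` callable stands in for the SentencePiece or mBART subword tokenizer and is not the authors' exact script.

```python
# Hedged sketch: build non-overlapping tri-sentence segments from one
# document and filter out empty lines and segments over 512 subwords.
from typing import Callable, List

def make_tri_sentence_segments(
    sentences: List[str],
    tokenize: Callable[[str], List[str]],
    context_size: int = 3,
    max_subwords: int = 512,
) -> List[str]:
    sentences = [s for s in sentences if s.strip()]          # remove empty lines
    segments = []
    for i in range(0, len(sentences), context_size):
        segment = " ".join(sentences[i : i + context_size])  # up to 3 consecutive sentences
        if len(tokenize(segment)) <= max_subwords:
            segments.append(segment)
    return segments
```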
{
"text": "Firstly, we explored the context-aware based HAN models on the BSD corpus with the OpenNMT toolkit (Klein et al., 2017) , where the document context of 3 previous sentences were integrated for global context encoding and decoding of the source and target languages, respectively. Intuitively, we trained the HANbase+ models as baselines, which were essentially sentence-level Transformer-based models. Then, a multi-encoder and multi-decoder Transformer were learned based on sentence-level models. Finally, we built HANjoint+ models upon the multi-encoder and multi-decoder models. Besides the HAN models, we fine-tuned on the BSD corpus using the mBART auto-encoder pretrained model via the Fairseq toolkit, as mentioned in Section 2.3. Since the pre-traind mBART model initially can handle more than one sentences, it owns very good compatibility of the document- level machine translation tasks. In this case, we considered the tri-sentence segments 10 as documents of the training sets, and fed them into the pretrained model to learn dependencies between sentences. We trained the HANjoint+ and mBART models on 4 V100 GPUs, whose model parameters have been shown in Table 3 .",
"cite_spans": [
{
"start": 99,
"end": 119,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1172,
"end": 1179,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3.3"
},
{
"text": "We show the best BLEU scores that the HAN and mBART models can achieve in Table 4 . Under single model decoding, we observed that the mBARTdoc+ models could lead far ahead the HANjoint+ models by 5.7 and 4.3 BLEU scores in the BSD en-ja and ja-en tasks, respectively. It indicates that the advantages of pre-trainining are substantial in the BSD translation tasks. Moreover, our best submissions of the mBARTdoc+ models with ensemble model decoding achieved the first place on the WAT leaderboard in human evaluation scores for both directions. To investigate how important the document-level translation is and how much gains can be achieved by using other resources, we performed the ablative studies upon several mBART settings, where the results are shown in Table 5 . On one hand, HANbase+ sentence-level models performed worst among all the listed models. However, mBARTsen models incredibly outperformed the baselines due to the pre-training manner, even without additional resources. On the other hand, we observed that the mBARTdoc could hardly overwhelm the mBARTsen until additional JESC corpus was leveraged, where over 1 BLEU gains were obtained for both directions. Furthermore, we found that the mBARTsen+ and mBARTdoc+ models have achieved remarkable improvements by adding the Table 5 : Ablative study on the mBART in the BSD task. \"sen\" means using the mBART pre-training for the sentence-level translation evaluation, and the BLEU score of it calculated on the concatenation of all translated sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 81,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 763,
"end": 770,
"text": "Table 5",
"ref_id": null
},
{
"start": 1294,
"end": 1301,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "JESC corpus for training, which explicitly reflects that data hungry effect of the BSD corpus remains a challenge. Some examples whose translation quality was improved by considering context in BSD tasks have been illustrated in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 236,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "4 Mixed-domain Task",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.4"
},
{
"text": "Despite the Myanmar-English mixed-domain tasks were excluded in the final evaluation this year, our experimental task is described in this section. We trained the models on both the University of Computer Studies, Yangon (UCSY) corpus only (Ding et al., 2018 ) and evaluated the model with a portion of the Asian Language Treebank (ALT) corpora . The UCSY corpus consists of approximately 200,000 sentences, while the ALT validation and test sets include 1,000 sentences respectively. Due to the low resource nature of the Myanmar-English language pair and the added difficulty of domain adaptation, we trained additional models that compiled with Myth Corpus 11 as other resources for the task participation, and compared them with the models using training data provided by the shared task only.",
"cite_spans": [
{
"start": 240,
"end": 258,
"text": "(Ding et al., 2018",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "4.1"
},
{
"text": "For the mix-domain task, some noisy double quotes from training data were cleaned first. Then we tok-11 Available at https://github.com/alvations/ myth enized it using Pyidaungsu Myanmar Tokenizer 12 in syllable and word level tokenization for Myanmar sentences, and English sentences were fed directly to the SentencePiece model to produce subword units. Accordingly, we augmented the Myanmar data by three types (i) original, (ii) syllable, and (iii) word, where the training datasets could be built upon different combinations of these three types of Myanmar data, e.g., my (original+word)en, my (original+syllable+word)-en, etc. In practice, we simply replicated the English sentences accordingly to match the number of sentences for the augmented Myanmar data during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Processing",
"sec_num": "4.2"
},
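A hedged sketch of this augmentation scheme follows: concatenate the chosen Myanmar tokenization variants and replicate the English side so the two sides stay parallel. The variant names and data handling are illustrative, not the authors' exact scripts.

```python
# Hedged sketch: build an augmented parallel corpus from selected Myanmar
# tokenization variants (original / syllable / word), replicating the
# English side once per variant so alignment is preserved.
from typing import Dict, List, Sequence, Tuple

def build_augmented_corpus(
    my_variants: Dict[str, List[str]],      # e.g. {"original": [...], "word": [...], "syllable": [...]}
    en_sentences: List[str],
    combo: Sequence[str],                   # e.g. ("original", "word") for the WORD setting
) -> Tuple[List[str], List[str]]:
    my_side, en_side = [], []
    for name in combo:
        variant = my_variants[name]
        assert len(variant) == len(en_sentences), "variants must stay sentence-aligned"
        my_side.extend(variant)
        en_side.extend(en_sentences)        # replicate English once per variant
    return my_side, en_side
```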
{
"text": "We experimented with several Transformer models using Marian 13 toolkit (Junczys-Dowmunt et al., 2018) for my-en and en-my, respectively. We separately trained four models for both direction with the hyper-parameter setting shown in Table 7 , each of which corresponds to one combination of training data as mentioned in Section 4.3. Therefore, we had eight models to be trained in total, which can be denoted as: (i) my (original)\u2194en (BASE), (ii) my (original+word)\u2194en (WORD), (iii) my (original+syllable+word)\u2194en (ALL), and (iv) my (original+word)\u2194en with Myth corpus (WORD+). All experimental models in this task were trained on 3 GP104 machines with 4 GeForce GTX 1080 GPUs in each, and the experimental results will be shown and analyzed in the following section. Table 8 presents the results of our experiments on the given ALT test dataset evaluation for two directions. The baseline model BASE performed the poorest in the en\u2194my translation models solely trained on the original dataset. By using data augmentation, however, we observed significant improvements in the BLEU scores in en-my and my-en models that trained together with Myanmar word and syllable data. Interestingly, we also found that the BLEU score dropped down by 4.7 when syllable data was added during en-my model training (ALL vs. WORD), yet the similar performance decay did not appear in the my-en models. On the other hand, the models trained with additional Myth corpus (WORD+) outperformed the other three models for both directions because it could help on Source \u666f\u6c17\u306f\u3069\u3046\u3067\u3059 \u304a\u304b\u3052\u3055\u307e\u3067\u3001\u9806 \u9806 \u9806\u8abf \u8abf \u8abf\u3067 \u3067 \u3067\u3059 \u3059 \u3059\u3002 \u6700\u8fd1\u3001\u65b0\u3057\u3044\u65bd\u8a2d\u304c\u7a3c\u50cd\u958b\u59cb\u3057\u307e\u3057\u3066\u3001\u305d\u306e\u7ba1 \u7406\u3067\u5fd9\u3057\u304f\u3066\u3002\u3042\u3042\u3001\u305d\u308c\u3001\u5fa1\u793e\u306e\u30b5\u30a4\u30c8\u3067\u8aad\u307f\u307e\u3057\u305f\u3088\u3002\u304a \u304a \u304a\u3081 \u3081 \u3081\u3067 \u3067 \u3067\u3068 \u3068 \u3068\u3046 \u3046 \u3046\u3054 \u3054 \u3054\u3056 \u3056 \u3056\u3044 \u3044 \u3044\u307e \u307e \u307e\u3059 \u3059 \u3059\u3002 Reference How's business lately? It's been good. We recently commissioned a new facility so I've been busy managing that. I read about that on your company website. Congratulations.",
"cite_spans": [
{
"start": 72,
"end": 102,
"text": "(Junczys-Dowmunt et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 233,
"end": 240,
"text": "Table 7",
"ref_id": "TABREF9"
},
{
"start": 769,
"end": 776,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "4.3"
},
{
"text": "How's the economy? Thank you, I'm good. I've been busy with that management since the new facility started recently. Oh, I read that on your website. Congratulations. HANjoint+ How's the economy? Thank you, it's fine. There's been a new facility running recently, and I've been busy managing it. Oh, I read it on your website. Thank you. mBARTdoc+ How's the economy going? It's going well thanks to you. We recently opened a new facility and I've been busy managing it. Oh, I read that on your website. Congratulations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HANbase+",
"sec_num": null
},
{
"text": "\u3057\u304b\u3057\u3001\u3069 \u3069 \u3069\u306e \u306e \u306e\u3088 \u3088 \u3088\u3046 \u3046 \u3046\u306a \u306a \u306a\u5546 \u5546 \u5546\u54c1 \u54c1 \u54c1\u306e \u306e \u306e\u53d6 \u53d6 \u53d6\u5f15 \u5f15 \u5f15\u3067 \u3067 \u3067\u3042 \u3042 \u3042\u3063 \u3063 \u3063\u3066 \u3066 \u3066\u3082 \u3082 \u3082\u3001\u4e00\u822c\u7684\u306b\u8f38\u51fa\u5165\u306e\u624b\u9806\u306f \u306f \u306f\u540c \u540c \u540c\u3058\u3067\u3059\u3002\u3042\u306a\u305f\u306e \u8077\u52d9\u306f\u4e3b\u306b\u3001\u5317\u7c73\u304b\u3089\u30a2\u30b8\u30a2\u3078\u306e\u8f38\u51fa\u54c1\u306b\u95a2\u3059\u308b\u8f38\u51fa\u66f8\u985e\u3092\u7528\u610f\u3059\u308b\u3053\u3068\u306b\u306a\u308a\u307e\u3059\u3002 \u3042\u3068\u3067\u3001\u5f53\u90e8 \u90e8 \u90e8\u7f72 \u7f72 \u7f72\u306e \u306e \u306e\u30a8 \u30a8 \u30a8\u30ec \u30ec \u30ec\u30a4 \u30a4 \u30a4\u30f3 \u30f3 \u30f3\u3055\u3093\u306b\u3084\u308a\u65b9\u3092\u8aac\u660e\u3057\u3066\u3082\u3089\u3044\u307e\u3059\u3002",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "But regardless of the product traded, the procedures for exporting or importing are generally the same. Your task will mainly be preparing export documents for products from North America going to Asia. Elaine in our department will teach you how it's done later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
},
{
"text": "However, even if it's a commodity exchange, it's the same procedure as export procedures. You will mainly prepare export documents for exports from North America. I will explain how Elaine will do it later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HANbase+",
"sec_num": null
},
{
"text": "But any product transaction is commonly the export process. You will mainly prepare export documents for exports from North America to Asia. I'm going to need you to explain how you do it later on in the department. mBARTdoc+ But regardless of the product deal, the standard export procedure is the same. You will be required to prepare export documents on exports from North America to Asia, mainly. I will have Elaine from our department explain how to do it later. the data hunger nature of low resource languages. Furthermore, our best BLEU results were achieved by the two WORD+ models, which already or nearly surpassed the shared task organizer's baseline results on the WAT learderboard. Our approach in this way of amplifying training data size gave the improvement of BLEU score while using a single Marian NMT model. We need further discovery by turning model hyper-parameters and/or different modeling approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HANjoint+",
"sec_num": null
},
{
"text": "Model BLEU ALT2 my-en BASE 6.9 ALT2 my-en WORD 11.3 ALT2 my-en ALL 12.9 ALT2 my-en WORD+ 14.2 ALT2 en-my BASE 14.9 ALT2 en-my WORD 22.1 ALT2 en-my ALL 17.4 ALT2 en-my WORD+ 24.4 Table 8 : Mixed-domain Task Results. \"+\" means the model was trained with additional Myth corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 185,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task",
"sec_num": null
},
{
"text": "We presented our submissions (team ID: goku20) to the WAT 2020 shared translation tasks in this paper. We trained Transformer-based NMT systems across different tasks. We found that additional training datasets from other resources could lead to substantial performance gains on smaller data sets. We also validated the capability of Transformers with pre-training in dealing with the sentencelevel and document-level tasks, especially when the data hungry problem appeared. Finally, we attempted data augmentation approaches on the lowresource language translation tasks and achieved outperforming experimental results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://lotus.kuee.kyoto-u.ac.jp/WAT/ WAT2020/baseline/dataPreparationJEp.html 2 http://nlp.ist.i.kyoto-u.ac.jp/EN/ index.php?JUMAN3 https://nlp.stanford.edu/software/ segmenter.shtml 4 https://bitbucket.org/eunjeon/ mecab-ko/ 5 http://lotus.kuee.kyoto-u.ac.jp/WAT/ WAT2020/baseline/dataPreparationBPE.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://dl.fbaipublicfiles.com/ fairseq/models/mbart/mbart.CC25.tar.gz",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "is a union of JPCN{1,2,3} subsets 8 Human evaluation results of the JCPEP tasks are not yet visible as the time of this writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://lotus.kuee.kyoto-u.ac.jp/WAT/ evaluation/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The BSD training and JESC corpus have been expanded into 6,927 and 959,399 tri-sentence segments, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/kaunghtetsan275/ pyidaungsu 13 https://marian-nmt.github.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards Burmese (Myanmar) morphological analysis: Syllable-based tokenization and part-of-speech tagging",
"authors": [
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Hnin",
"middle": [],
"last": "Thu Zar Aye",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Khin",
"middle": [
"Thandar"
],
"last": "Nwet",
"suffix": ""
},
{
"first": "Khin",
"middle": [],
"last": "Mar Soe",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP)",
"volume": "19",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenchen Ding, Hnin Thu Zar Aye, Win Pa Pa, Khin Thandar Nwet, Khin Mar Soe, Masao Utiyama, and Eiichiro Sumita. 2019. Towards Burmese (Myan- mar) morphological analysis: Syllable-based tok- enization and part-of-speech tagging. ACM Trans- actions on Asian and Low-Resource Language Infor- mation Processing (TALLIP), 19(1):5.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Burmese (Myanmar) treebank: Guildline and analysis",
"authors": [
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Sann",
"middle": [],
"last": "Su Su Yee",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Khin",
"middle": [],
"last": "Mar Soe",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TAL-LIP)",
"volume": "19",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenchen Ding, Sann Su Su Yee, Win Pa Pa, Khin Mar Soe, Masao Utiyama, and Eiichiro Sumita. 2020. A Burmese (Myanmar) treebank: Guildline and analysis. ACM Transactions on Asian and Low- Resource Language Information Processing (TAL- LIP), 19(3):40.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "NOVA: A feasible and flexible annotation system for joint tokenization and part-of-speech tagging",
"authors": [
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Transactions on Asian and Low-Resource Language Information Processing (TAL-LIP)",
"volume": "18",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenchen Ding, Masao Utiyama, and Eiichiro Sumita. 2018. NOVA: A feasible and flexible annotation system for joint tokenization and part-of-speech tagging. ACM Transactions on Asian and Low- Resource Language Information Processing (TAL- LIP), 18(2):17.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Marian: Fast neural machine translation in C++",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Neckermann",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Alham",
"middle": [],
"last": "Fikri Aji",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
},
{
"first": "F",
"middle": [
"T"
],
"last": "Andr\u00e9",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr\u00e9 F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116- 121, Melbourne, Australia. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederick",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "OpenNMT: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-4012"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal ; Abdelrahman Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.08210"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Document-level neural machine translation with hierarchical attention networks",
"authors": [
{
"first": "Lesly",
"middle": [],
"last": "Miculicich",
"suffix": ""
},
{
"first": "Dhananjay",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2947--2954",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neu- ral machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 2947-2954, Brussels, Belgium. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the 6th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Nobushige",
"middle": [],
"last": "Doi",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Asian Translation, Hong Kong. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Chenchen Ding, Raj Dabre, Hideya Mino, Isao Goto, Win Pa Pa, Nobushige Doi, Yusuke Oda, Anoop Kunchukuttan, Shan- tipriya Parida, Ond\u0159ej Bojar, and Sadao Kurohashi. 2019. Overview of the 6th workshop on Asian trans- lation. In Proceedings of the 6th Workshop on Asian Translation, Hong Kong. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Overview of the 7th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ond\u0159ej Bojar, and Sadao Kurohashi. 2020. Overview of the 7th workshop on Asian transla- tion. In Proceedings of the 7th Workshop on Asian Translation, Suzhou, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using the output embedding to improve language models",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "157--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ofir Press and Lior Wolf. 2017. Using the output em- bedding to improve language models. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 157-163, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "JESC: Japanese-English Subtitle Corpus. Language Resources and Evaluation Conference (LREC)",
"authors": [
{
"first": "R",
"middle": [],
"last": "Pryzant",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Britz",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Pryzant, Y. Chung, D. Jurafsky, and D. Britz. 2018. JESC: Japanese-English Subtitle Corpus. Language Resources and Evaluation Conference (LREC).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sarah's participation in wat 2019",
"authors": [
{
"first": "Raymond Hendy",
"middle": [],
"last": "Susanto",
"suffix": ""
},
{
"first": "Ohnmar",
"middle": [],
"last": "Htun",
"suffix": ""
},
{
"first": "Liling",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "152--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond Hendy Susanto, Ohnmar Htun, and Liling Tan. 2019. Sarah's participation in wat 2019. In Pro- ceedings of the 6th Workshop on Asian Translation, pages 152-158.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"num": null,
"text": "JPO models settings comparison.",
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"html": null,
"num": null,
"text": "Comparison of models settings on the BSD tasks.",
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Comparisons of HAN and mBART best models results in the BSD task. The results shown with + used JESC auxiliary corpus during training.",
"type_str": "table",
"content": "<table/>"
},
"TABREF8": {
"html": null,
"num": null,
"text": "Translation examples: Comparison of the HAN and mBART models for BSD ja-en task. All the results shown here are obtained from single model decoding.",
"type_str": "table",
"content": "<table><tr><td>Vocabulary size</td><td>380k</td></tr><tr><td>Embedding dim.</td><td>1024</td></tr><tr><td>Tied embeddings</td><td>Yes</td></tr><tr><td>Transformer FFN dim.</td><td>4096</td></tr><tr><td>Attention heads</td><td>8</td></tr><tr><td>En/Decoder layers</td><td>4</td></tr><tr><td>Label smoothing</td><td>0.1</td></tr><tr><td>Dropout</td><td>0.1</td></tr><tr><td>Batch size</td><td>12</td></tr><tr><td colspan=\"2\">Attention weight dropout 0.1</td></tr><tr><td colspan=\"2\">Transformer FFN dropout 0.1</td></tr><tr><td>Learning rate</td><td>1e \u22124</td></tr></table>"
},
"TABREF9": {
"html": null,
"num": null,
"text": "Mixed-domain model parameter settings",
"type_str": "table",
"content": "<table/>"
}
}
}
}