{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:33:51.776601Z"
},
"title": "Bering Lab's Submissions on WAT 2021 Shared Task",
"authors": [
{
"first": "Heesoo",
"middle": [],
"last": "Park",
"suffix": "",
"affiliation": {
"laboratory": "Bering Lab",
"institution": "",
"location": {
"country": "South Korea"
}
},
"email": "[email protected]"
},
{
"first": "Dongjun",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "Bering Lab",
"institution": "",
"location": {
"country": "South Korea"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the Bering Lab's submission to the shared tasks of the 8th Workshop on Asian Translation (WAT 2021) on JPC2 and NICT-SAP. We participated in all tasks on JPC2 and IT domain tasks on NICT-SAP. Our approach for all tasks mainly focused on building NMT systems in domain-specific corpora. We crawled patent document pairs for English-Japanese, Chinese-Japanese, and Korean-Japanese. After cleaning noisy data, we built parallel corpus by aligning those sentences with the sentence-level similarity scores. Also, for SAP test data, we collected the OPUS dataset including three IT domain corpora. We then trained transformer on the collected dataset. Our submission ranked 1 st in eight out of fourteen tasks, achieving up to an improvement of 2.87 for JPC2 and 8.79 for NICT-SAP in BLEU score .",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the Bering Lab's submission to the shared tasks of the 8th Workshop on Asian Translation (WAT 2021) on JPC2 and NICT-SAP. We participated in all tasks on JPC2 and IT domain tasks on NICT-SAP. Our approach for all tasks mainly focused on building NMT systems in domain-specific corpora. We crawled patent document pairs for English-Japanese, Chinese-Japanese, and Korean-Japanese. After cleaning noisy data, we built parallel corpus by aligning those sentences with the sentence-level similarity scores. Also, for SAP test data, we collected the OPUS dataset including three IT domain corpora. We then trained transformer on the collected dataset. Our submission ranked 1 st in eight out of fourteen tasks, achieving up to an improvement of 2.87 for JPC2 and 8.79 for NICT-SAP in BLEU score .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The WAT 2021 Shared Task (Nakazawa et al., 2021) 1 focuses a comprehensive set of machine translations on Asian languages. They gather and share the resources and knowledge about Asian language translation through a variety of tasks on the broad topics such as documentlevel translation, multi-modal translation, and domain adaptation. Among those tasks, we participated on two tasks: (1) JPO Patent Corpus (JPC2), a translation task on patent corpus of Japanese \u2194 English/Korean/Chinese, and (2) NICT-SAP IT domain, a translation task on software documentation corpus of English \u2194 Hindi/Indonesian/Malaysian/Thai.",
"cite_spans": [
{
"start": 25,
"end": 48,
"text": "(Nakazawa et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "According to the Table 1 , both two corpora mostly consist of technical terms. Specifically, jargon such as \"acrylic acid\" from JPC2 is not com-",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 24,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "JP \u305d\u306e\u4e2d\u3067\u3082\u3001\u30a2\u30af\u30ea\u30eb\u9178\u3092\u597d\u9069\u306b\u4f7f \u7528\u3059\u308b\u3053\u3068\u304c\u3067\u304d\u308b\u3002 EN Among them, an acrylic acid can be preferably used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JPC2",
"sec_num": null
},
{
"text": "NICT-SAP IT domain ID Spesifikasi Antarmuka Pemindaian Virus (NW-VSI) EN Virus Scan Interface (NW-VSI) Specification Table 1 : Sample sentences of JPC2 and NICT-SAP.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "JPC2",
"sec_num": null
},
{
"text": "monly used in everyday life. Similarly, terminology \"Virus Scan Interface\" from NICT-SAP cannot be easily found on the general corpus. Therefore, we focused on domain adaptation for both tasks. Our approach begins with collecting rich and clean sentence pairs from web and public dataset. For JPC2, we crawled the patent documents from web for each language pairs then built parallel corpus by pairing each sentence with the similarity scores between source and target sentence representation vectors. For NICT-SAP IT domain, we collected public dataset, OPUS (Tiedemann, 2012) , and weighted the IT corpus among those corpus while training. In addition to the rich and clean additional corpus, we chose transformer (Vaswani et al., 2017) , broadly recognized as a strong machine translation system.",
"cite_spans": [
{
"start": 560,
"end": 577,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF13"
},
{
"start": 716,
"end": 738,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "JPC2",
"sec_num": null
},
{
"text": "Our method obtained the new state-of-the-art results on four out of six JPC2 tasks, especially amounting to 2.87 absolute improvement on BLEU scores for Japanese to Korean translation. To validate the effect of the additional data, we conducted the ablation study on Korean \u2192 Japanese data. Furthermore, our models ranked first place on four out of eight NICT-SAP IT domain tasks, achieving 8.79 improvement for Indonesian to English. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JPC2",
"sec_num": null
},
{
"text": "We participate JPO Patent Corpus (JPC2) and SAP's IT translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "2"
},
{
"text": "JPO Patent Corpus JPC2 consists of Chinese-Japanese, Korean-Japanese, and English-Japanese patent description parallel corpus (Nakazawa et al., 2021) . Each corpus consists of 1M parallel sentences with four sections (chemistry, electricity, mechanical engineering, and physics).",
"cite_spans": [
{
"start": 126,
"end": 149,
"text": "(Nakazawa et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Corpus",
"sec_num": "2.1"
},
{
"text": "SAP's IT Corpus SAP software documentation corpus (Buschbeck and Exel, 2020) is designed to test the performance of multilingual NMT systems in extremely low-resource conditions (Nakazawa et al., 2021) . The dataset consists of Hindi(Hi) / Thai(Th) / Malay(Ms) / Indonesian(Id) \u2194 English software documentation parallel corpus. The number of parallel sentences of each corpus is described in Table 2 . Table 3 : Statistics of additional parallel sentences. \"Avg. Len\" represents the average of the number of characters per Japanese sentence.",
"cite_spans": [
{
"start": 50,
"end": 76,
"text": "(Buschbeck and Exel, 2020)",
"ref_id": "BIBREF2"
},
{
"start": 178,
"end": 201,
"text": "(Nakazawa et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 392,
"end": 399,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 402,
"end": 409,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parallel Corpus",
"sec_num": "2.1"
},
{
"text": "The official evaluation metrics are BLEU (Papineni et al., 2002) , RIBES (Isozaki et al., 2010) , and AMFM (Banchs et al., 2015) .",
"cite_spans": [
{
"start": 41,
"end": 64,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF11"
},
{
"start": 73,
"end": 95,
"text": "(Isozaki et al., 2010)",
"ref_id": "BIBREF5"
},
{
"start": 107,
"end": 128,
"text": "(Banchs et al., 2015)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metric",
"sec_num": "2.2"
},
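As a concrete reference for the primary metric, the following is a minimal sketch of computing corpus-level BLEU with the sacrebleu Python package. sacrebleu and the example sentences are assumptions of this illustration; official shared-task scores come from the organizers' evaluation pipeline.

```python
# Minimal sketch: corpus-level BLEU with the sacrebleu package (an assumed
# tool; official WAT scores come from the organizers' evaluation pipeline).
import sacrebleu

hypotheses = ["Among them, an acrylic acid can be preferably used."]
# `references` is a list of reference streams, each aligned with `hypotheses`.
references = [["Among them, the acrylic acid is preferably used."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```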
{
"text": "In this section we introduce our approach for two tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "3"
},
{
"text": "For JPC2 tasks, we trained the models on combination of the given train dataset (Table 2) and web-crawled dataset (Table 3) . For NICT-SAP tasks, we trained the models on OPUS dataset with IT domain corpus weighted (Table 4) . For both tasks, the models were evaluated on the given test dataset (Table 2) . Patent crawling data Additional data for JPC2 was obtained from WIPO 2 through website crawling. The JPC2 data (including the evaluation data) consists only of description section in each document. Since our approach is to collect the data which is very close to the task domain, we filtered out all sections but the description section to avoid the redundant noise while training the model.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 89,
"text": "(Table 2)",
"ref_id": "TABREF1"
},
{
"start": 114,
"end": 123,
"text": "(Table 3)",
"ref_id": null
},
{
"start": 215,
"end": 224,
"text": "(Table 4)",
"ref_id": "TABREF3"
},
{
"start": 295,
"end": 304,
"text": "(Table 2)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data crawling and preprocessing",
"sec_num": "3.1"
},
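A minimal sketch of the section filtering step just described. The document structure (a mapping from section name to text) and the naive sentence splitter are assumptions; the paper does not describe the crawler's output format.

```python
# Sketch: keep only the description section of a crawled patent document and
# split it into sentences. The `doc` mapping and the full-stop splitter are
# assumptions, not the actual crawler output format.
from typing import Dict, List

def description_sentences(doc: Dict[str, str]) -> List[str]:
    """Drop every section except the description, then split into sentences."""
    description = doc.get("description", "")
    # Naive splitter on the Japanese full stop; a language-specific sentence
    # splitter would be used for the other languages.
    return [s.strip() + "。" for s in description.split("。") if s.strip()]
```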
{
"text": "To pair each sentence, we first split the whole description into sentences and encoded each sentence to a representation vector. As a sentence encoder, we used LASER 3 for Ko-Ja and Universal Sentence Encoder 4 (Cer et al., 2018) for the other pairs. We then measured the cosine similarity between each sentence pair and filtered out the pairs whose score was under threshold.",
"cite_spans": [
{
"start": 211,
"end": 229,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data crawling and preprocessing",
"sec_num": "3.1"
},
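A minimal sketch of the similarity-based pairing step, assuming a placeholder `encode` function that stands in for LASER or the Universal Sentence Encoder and an illustrative threshold (the paper does not report the value used). One plausible reading, followed here, is that each source sentence is paired with its highest-scoring target sentence before thresholding.

```python
# Sketch of similarity-based sentence pairing. `encode` is a placeholder for
# LASER / Universal Sentence Encoder; THRESHOLD is an illustrative value.
from typing import Callable, List, Tuple
import numpy as np

THRESHOLD = 0.8  # assumed; the paper does not report its threshold

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def align_sentences(
    src_sents: List[str],
    tgt_sents: List[str],
    encode: Callable[[List[str]], np.ndarray],
) -> List[Tuple[str, str, float]]:
    """Pair each source sentence with its most similar target sentence and
    keep only pairs whose cosine similarity clears the threshold."""
    src_vecs = encode(src_sents)  # shape: (n_src, dim)
    tgt_vecs = encode(tgt_sents)  # shape: (n_tgt, dim)
    pairs = []
    for i, sv in enumerate(src_vecs):
        scores = [cosine(sv, tv) for tv in tgt_vecs]
        j = int(np.argmax(scores))
        if scores[j] >= THRESHOLD:
            pairs.append((src_sents[i], tgt_sents[j], scores[j]))
    return pairs
```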
{
"text": "OPUS data (Tiedemann, 2012) Since the NICT-SAP IT domain translation task does not provide the train dataset, we collected it from public dataset including GNOME, KDE4, Ubuntu, En-X GNOME KDE4 Ubuntu ELRC TANZIL Opensubtitles tico-19 QED Tatoeba HI 145,706 97,227 11,309 245 187,080 93,016 3,071 11,314 10,900 ID 47,234 14,782 96,456 2,679 . 9,268,181 3,071 274,581 9,967 MS 299,601 87,122 120,016 1,697 . 1,928,345 3,071 79,697 . TH 78 70,634 3,785 . . 3,281,533 . 264,677 1,162 Tateoba, Tanzil, QED (Abdelali et al., 2014) , tico-19, OpenSubtitles, ELRC. We downloaded all the dataset from OPUS site. Table 4 shows the statistics of the data obtained from the site.",
"cite_spans": [
{
"start": 10,
"end": 27,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF13"
},
{
"start": 526,
"end": 549,
"text": "(Abdelali et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 628,
"end": 635,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data crawling and preprocessing",
"sec_num": "3.1"
},
{
"text": "For the NMT system, we used OpenNMT-py (Klein et al., 2017) 5 to train Transformer (Vaswani et al., 2017) architecture models with several different parameter configurations for each task. Our models have 6 encoder layers, 6 decoder layers, a sequence length of 512 for both source and target side, 8 attention heads with an attention dropout of 0.1. Each model was trained on Nvidia RTX 3090 Ti (24GB). We used an effective batch size of 2048 tokens. We chose Adam (Kingma and Ba, 2014) optimizer with a learning rate of 1, warm-up steps 8000, label smoothing 0.1 and token-level layer normalization. We set the data type to the floating point 32 and applied relative positional encoding (Shaw et al., 2018) to consider the pairwise relationships between the input elements. We changed the hidden layer size from 512 to 2048 and the feed forward networks from 2048 to 4096 for finding the model to perform best. We saved the checkpoint every 20,000 steps and choose the model which performed best on the validation set. We used google sentencepiece library 6 to train separate SentencePiece models (Kudo and Richardson, 2018) on the source and target sides, for each language. We trained a regularized unigram model (Kudo, 2018) . For JPC2, we set a vocabulary size of 32,000 for Japanese and Chinese and 16,000 for Korean and English. We set a character coverage to 0.995. For NICT-SAP, we set a vocabulary size of 8,000 for English and Malaysian and 16,000 for Hindi, Indonesian and Thai. We set a character coverage to 0.995. While training sentence piece models, we used only given train dataset and only IT domain (Ubuntu, GNOME, 5 https://github.com/OpenNMT/OpenNMT-py 6 https://github.com/google/ sentencepiece ",
"cite_spans": [
{
"start": 39,
"end": 59,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 83,
"end": 105,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 689,
"end": 708,
"text": "(Shaw et al., 2018)",
"ref_id": "BIBREF12"
},
{
"start": 1099,
"end": 1126,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 1217,
"end": 1229,
"text": "(Kudo, 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model configuration",
"sec_num": "3.2"
},
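The subword segmentation setup above maps directly onto the sentencepiece Python API. Below is a minimal sketch using the reported JPC2 Japanese settings (unigram model, vocabulary 32,000, character coverage 0.995); the file paths and the sampling hyperparameters `alpha` and `nbest_size` are illustrative assumptions, as the paper does not report them.

```python
# Sketch: train a regularized unigram SentencePiece model with the reported
# JPC2 Japanese settings. "train.ja" and "spm_ja" are placeholder paths.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train.ja",          # one sentence per line
    model_prefix="spm_ja",
    vocab_size=32000,          # 16,000 was used for Korean and English
    character_coverage=0.995,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="spm_ja.model")
# Subword regularization (Kudo, 2018): sample among segmentations at training
# time; alpha and nbest_size here are illustrative values.
pieces = sp.encode("アクリル酸を好適に使用することができる。",
                   out_type=str, enable_sampling=True, alpha=0.1, nbest_size=-1)
print(pieces)
```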
{
"text": "We participated in JPC2 and NICT-SAP (IT domain) tasks. JPC2 consists of English-Japanese (En-Ja), Chinese-Japanese (Zh-Ja) and Korean-Japanese (Ko-Ja). NICT-SAP consists of English-Hindi (En-Hi), English-Indonesian (En-Id), English-Malaysian (En-Ms) and English-Thai (En-Th). Table 5 shows overall results on JPC2 dataset. Our models ranked first in all the tasks whose input is Japanese. Across overall process, we weighted the given dataset to the crawled dataset by oversampling. English -Japanese We collected the additional Table 7 : Ablation studies for JPC2 Ja \u2192 Ko subtask.\"w\" and \"wo\" represents the BLEU score of the model trained with and without the additional dataset, repectively. \"Avg. Len\" represents the average of the number of characters per Japanese sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 284,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 530,
"end": 537,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result",
"sec_num": "4"
},
{
"text": "data 20 times more than the given training dataset. We noticed that the average of the sentence length in the collected dataset is much longer than the given dataset. This represents that the collected dataset is quite different from original data. Therefore, we weighted the given train dataset five times for Ja \u2192 En and two times for En \u2192 Ja task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JPC2 patent translation task",
"sec_num": "4.1"
},
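The weighting described above amounts to repeating one corpus relative to the other before shuffling. A minimal sketch of that interpretation follows; the toy pairs are placeholders for real (source, target) sentence pairs.

```python
# Sketch: corpus weighting by oversampling. The given training pairs are
# repeated `weight` times and concatenated with the crawled pairs.
import random

def weighted_mix(given_pairs, crawled_pairs, weight=5, seed=0):
    """Repeat the given corpus `weight` times, append the crawled corpus,
    and shuffle (weight=5 corresponds to the Ja -> En setting above)."""
    mixed = given_pairs * weight + crawled_pairs
    random.Random(seed).shuffle(mixed)
    return mixed

# Toy usage; real inputs are (source, target) sentence pairs.
train_set = weighted_mix([("src_a", "tgt_a")], [("src_b", "tgt_b")], weight=5)
```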
{
"text": "In the inference time, we used the seven independent models ensemble for Ja \u2192 En and the six independent models for En \u2192 Ja task. We selected each model's checkpoint which performed best in the validation data. We set the beam size to 7. The model ensemble method led to a performance improvement by 1.25 and 0.85 of the BLEU score for Ja \u2192 En and En \u2192 Ja, respectively. The best performance of our model was a BLEU score of 47.44 in the En \u2192 Ja and 45.13 in the Ja \u2192 En task. Korean -Japanese Our collected data 13 times more than the given one. Similar to En \u2194 Ja, we weighted the original dataset three times for both Ja \u2192 Ko and Ko \u2192 Ja. In the inference time, we used the five independent models ensemble for both Ja \u2192 Ko and for Ko \u2192 Ja. We set the beam size to 7. The best performance of our model was a BLEU score of 75.82 for the Ko \u2192 Ja task and 76.68 for the Ja \u2192 Ko task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "JPC2 patent translation task",
"sec_num": "4.1"
},
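Checkpoint selection for these ensembles can be scripted: translate the validation set with each saved checkpoint and keep the highest-BLEU ones. In the sketch below, `translate` is a placeholder wrapper around the decoder (for example, a subprocess call to OpenNMT-py's onmt_translate, which performs ensemble decoding when several checkpoints are passed to its -model option), and sacrebleu is an assumed scoring tool.

```python
# Sketch: rank saved checkpoints by validation BLEU and keep the best k for
# ensembling. `translate(ckpt, sentences)` is a placeholder decoder wrapper.
import sacrebleu

def best_checkpoints(checkpoints, val_src, val_ref, translate, k=7):
    """Return the k checkpoints with the highest validation BLEU."""
    scored = []
    for ckpt in checkpoints:
        hyps = translate(ckpt, val_src)  # list of translated sentences
        bleu = sacrebleu.corpus_bleu(hyps, [val_ref]).score
        scored.append((bleu, ckpt))
    scored.sort(reverse=True)
    return [ckpt for _, ckpt in scored[:k]]
```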
{
"text": "To validate the effect of additional data, we conducted an ablation studies on the Ja \u2192 Ko task. Table 7 shows the sub-tasks in the JPC2 dataset. Each test data in JPC2 can be split according to the publish year and the way they were collected. Test-n1 consists of the patent documents published between 2011 and 2013. Test-n2 and test-n3 consist of patent documents between 2016 and 2017, but test-n3 are manually created by translating source sentences. While the model trained with additional data outperforms the other model in test-n1 and test-n2, it shows poor performance on test-n3 which consists of manual translations. Chinese -Japanese Similar to En \u2194 Ja and Ko \u2194 Ja, we weighted the original dataset two times for both Ja \u2192 Zh and three times for Zh \u2192 Ja. In the inference time, we used the five independent models ensemble for Ja \u2192 Zh and seven models for Zh \u2192 Ja. We set the beam size to 7. The best performance of our model was a BLEU score of 51.28 in the Zh \u2192 Ja dataset and 42.92 in the Ja \u2192 Zh dataset. Table 6 shows the overall results on NICT-SAP IT domain. While we trained transformer on OPUS dataset from scratch, most of the high-ranked models used the pre-trained mBART (Chipman et al., 2021) and finetuned it. Therefore, others got benefit from the multilingualism and gigantic additional corpus. Even though we used relatively small data, we achieved the state-of-the-art scores on the four out of eight tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 7",
"ref_id": null
},
{
"start": 1022,
"end": 1029,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "JPC2 patent translation task",
"sec_num": "4.1"
},
{
"text": "For all language pairs, we weighted IT dataset (Ubuntu, GNOME, KDE4) 2.5 times to the general one. We saved the checkpoint at every 20000 step, then submitted the models which showed the best performance for validation set. Except for Thai, our models ranked first on the sub-tasks whose input is English. Furthermore, our models outperformed competitors on En \u2194 Id, achieving an improvement of 7.83 for En \u2192 Id and 8.79 for Id \u2192 En dataset. We used relatively rich amount of dataset in this subtask. In contrast, on the En \u2194 Th sub-task, our model performed relatively poor since we used small amount of data to train it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NICT-SAP IT domain translation task",
"sec_num": "4.2"
},
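The fractional 2.5x weighting can be read as two full copies of the IT domain data plus a random half sample; the sketch below implements that reading, which is itself an assumption since the paper does not say how the fraction was realized.

```python
# Sketch: fractional oversampling; factor=2.5 yields two full copies of the
# IT-domain pairs plus a random half sample. This reading of the paper's
# 2.5x weighting is an assumption.
import random

def oversample(pairs, factor=2.5, seed=0):
    copies = pairs * int(factor)
    remainder = factor - int(factor)
    extra = random.Random(seed).sample(pairs, int(len(pairs) * remainder))
    return copies + extra
```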
{
"text": "In this work, we described the Bering Lab's submission to the WAT 2021 shared tasks. We collected the in-domain dataset for both JPC2 and NICT-SAP tasks and built transformer-based MT systems on those corpora. which were trained on given train dataset and additional crawled patent data. Our models ranked first place in eight out of fourteen tasks, amounting a high improvements for both tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://patentscope.wipo.int/search/ en/search.jsf 3 https://github.com/facebookresearch/ LASER 4 https://tfhub.dev/google/ universal-sentence-encoder/3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The amara corpus: Building parallel language resources for the educational domain",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Abdelali",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzman",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "14",
"issue": "",
"pages": "1044--1054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Abdelali, Francisco Guzman, Hassan Sajjad, and Stephan Vogel. 2014. The amara corpus: Build- ing parallel language resources for the educational domain. In LREC, volume 14, pages 1044-1054.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Adequacy-fluency metrics: Evaluating mt in the continuous space model framework",
"authors": [
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"F"
],
"last": "D'Haro",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing",
"volume": "23",
"issue": "3",
"pages": "472--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rafael E Banchs, Luis F D'Haro, and Haizhou Li. 2015. Adequacy-fluency metrics: Evaluating mt in the continuous space model framework. IEEE/ACM Transactions on Audio, Speech, and Language Pro- cessing, 23(3):472-482.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A parallel evaluation data set of software documentation with document structure annotation",
"authors": [
{
"first": "Bianka",
"middle": [],
"last": "Buschbeck",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Exel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.04550"
]
},
"num": null,
"urls": [],
"raw_text": "Bianka Buschbeck and Miriam Exel. 2020. A paral- lel evaluation data set of software documentation with document structure annotation. arXiv preprint arXiv:2008.04550.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Universal sentence encoder",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St John",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Guajardo-C\u00e9spedes",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.11175"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-C\u00e9spedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "mbart: Multidimensional monotone bart",
"authors": [
{
"first": "Hugh",
"middle": [
"A"
],
"last": "Chipman",
"suffix": ""
},
{
"first": "Edward",
"middle": [
"I"
],
"last": "George",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "McCulloch",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"S"
],
"last": "Shively",
"suffix": ""
}
],
"year": 2021,
"venue": "Bayesian Analysis",
"volume": "1",
"issue": "1",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugh A Chipman, Edward I George, Robert E McCul- loch, and Thomas S Shively. 2021. mbart: Mul- tidimensional monotone bart. Bayesian Analysis, 1(1):1-30.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic evaluation of translation quality for distant language pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic eval- uation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Process- ing, pages 944-952.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "OpenNMT: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstra- tions, pages 67-72.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Subword regularization: Improving neural network translation models with multiple subword candidates",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.10959"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple sub- word candidates. arXiv preprint arXiv:1804.10959.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.06226"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the 8th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Higashiyama",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kaori",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 8th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukut- tan, Shantipriya Parida, Ond\u0159ej Bojar, Chenhui Chu, Akiko Eriguchi, Kaori Abe, and Sadao Oda, Yusuke Kurohashi. 2021. Overview of the 8th work- shop on Asian translation. In Proceedings of the 8th Workshop on Asian Translation, Bangkok, Thailand. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Self-attention with relative position representations",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.02155"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. arXiv preprint arXiv:1803.02155.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Parallel data, tools and interfaces in opus",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Lrec",
"volume": "2012",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Lrec, volume 2012, pages 2214- 2218.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03762"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Data statistics."
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Statistics of additional parallel sentences."
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">Sub-task BLEU AMFM Rank</td></tr><tr><td colspan=\"2\">En \u2192 Hi 37.23</td><td>0.81</td><td>1 of 9</td></tr><tr><td colspan=\"2\">Hi \u2192 En 34.48</td><td>0.80</td><td>4 of 9</td></tr><tr><td>En \u2192 Id</td><td>53.22</td><td>0.85</td><td>1 of 9</td></tr><tr><td>Id \u2192 En</td><td>53.49</td><td>0.85</td><td>1 of 9</td></tr><tr><td colspan=\"2\">En \u2192 Ms 45.96</td><td>0.86</td><td>1 of 9</td></tr><tr><td colspan=\"2\">Ms \u2192 En 38.42</td><td>0.81</td><td>2 of 9</td></tr><tr><td colspan=\"2\">En \u2192 Th 34.52</td><td>0.70</td><td>5 of 9</td></tr><tr><td colspan=\"2\">Th \u2192 En 25.07</td><td>0.73</td><td>2 of 9</td></tr></table>",
"num": null,
"html": null,
"text": "Official rank and BLEU scores for JPC2 tasks on Test-n dataset."
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Rank and BLEU/AMFM scores for NICT-SAP IT tasks on leader-board. The rank is scored by BLEU score."
}
}
}
}