|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:34:06.486622Z" |
|
}, |
|
"title": "Korean-to-Japanese Neural Machine Translation System using Hanja Information", |
|
"authors": [ |
|
{ |
|
"first": "Hwichan", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tokyo Metropolitan University", |
|
"location": { |
|
"addrLine": "6-6 Asahigaoka", |
|
"postCode": "191-0065", |
|
"settlement": "Hino", |
|
"region": "Tokyo", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tosho", |
|
"middle": [], |
|
"last": "Hirasawa", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tokyo Metropolitan University", |
|
"location": { |
|
"addrLine": "6-6 Asahigaoka", |
|
"postCode": "191-0065", |
|
"settlement": "Hino", |
|
"region": "Tokyo", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mamoru", |
|
"middle": [], |
|
"last": "Komachi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tokyo Metropolitan University", |
|
"location": { |
|
"addrLine": "6-6 Asahigaoka", |
|
"postCode": "191-0065", |
|
"settlement": "Hino", |
|
"region": "Tokyo", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we describe our TMU neural machine translation (NMT) system submitted for the Patent task (Korean\u2192Japanese) of the 7th Workshop on Asian Translation (WAT 2020, Nakazawa et al., 2020). We propose a novel method to train a Korean-to-Japanese translation model. Specifically, we focus on the vocabulary overlap of Korean Hanja words and Japanese Kanji words, and propose strategies to leverage Hanja information. Our experiment shows that Hanja information is effective within a specific domain, leading to an improvement in the BLEU scores by +1.09 points compared to the baseline.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we describe our TMU neural machine translation (NMT) system submitted for the Patent task (Korean\u2192Japanese) of the 7th Workshop on Asian Translation (WAT 2020, Nakazawa et al., 2020). We propose a novel method to train a Korean-to-Japanese translation model. Specifically, we focus on the vocabulary overlap of Korean Hanja words and Japanese Kanji words, and propose strategies to leverage Hanja information. Our experiment shows that Hanja information is effective within a specific domain, leading to an improvement in the BLEU scores by +1.09 points compared to the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The Japanese and Korean languages have a strong connection with Chinese owing to cultural and historical reasons (Lee and Ramsey, 2011) . Many words in Japanese are composed of Chinese characters called Kanji. By contrast, Korean uses the Korean alphabet called Hangul to write sentences in almost all cases. However, Sino-Korean 1 (SK) words, which can be converted into Hanja words, account for 65 percent of the Korean lexicon (Sohn, 2006) . Table 1 presents an example of conversions of SK words into Hanja, which are compatible with Japanese Kanji words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 135, |
|
"text": "(Lee and Ramsey, 2011)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 442, |
|
"text": "(Sohn, 2006)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 452, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In addition, several studies have suggested that overlapping tokens between the source and target languages can improve the translation accuracy (Sennrich et al., 2016; Zhang and Komachi, 2019) . Park and Zhao (2019) trained a Korean-to-Chinese translation model by converting Korean SK words from Hangul into Hanja to increase the vocabulary overlap.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 168, |
|
"text": "(Sennrich et al., 2016;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 193, |
|
"text": "Zhang and Komachi, 2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 216, |
|
"text": "Park and Zhao (2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In other words, the meaning of a vocabulary overlap on NMT is that each corresponding word's 1 Sino-Korean (SK) refers to Korean words of Chinese origin. embeddings are the same. Conneau et al. (2018) and Lample and Conneau (2019) improved translation accuracy by making embeddings between source words and their corresponding target words closer. From this fact, we hypothesize that if the embeddings of each corresponding word are closer, the translation accuracy will improve. Based on this hypothesis, we propose two approaches to train a translation model. First, we follow Park and Zhao (2019) 's method to increase the vocabulary overlap to improve the Korean-to-Japanese translation accuracy. Therefore, we perform Hangul to Hanja conversion pre-processing before training the translation model. Second, we propose another approach to obtain Korean and Japanese embeddings that are closer to Korean SK words and their corresponding Japanese Kanji words. SK words written in Hangul and their counterparts in Japanese Kanji are superficially different, but we make both embeddings close by using a loss function when training the translation model.", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 179, |
|
"end": 200, |
|
"text": "Conneau et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 230, |
|
"text": "Lample and Conneau (2019)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 599, |
|
"text": "Park and Zhao (2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In addition, in this study, we used the Japan Patent Office Patent Corpus 2.0, which consists of four domains, namely chemistry (Ch), electricity (El), mechanical engineering (Me), and physics (Ph), whose training, development, and test-n1 2 data have domain information. Our methods are more effective when the terms are derived from Chinese characters; therefore, we expect that the effect will be different per domain. This is because Korean (Hangul) \uadf8\ub798\uc11c N\uc758 \ud568\uc720\ub7c9\uc740 0.01 %\uc774\ud558\ub85c \ud55c\uc815\ud55c\ub2e4. Korean (Hanja) \uadf8\ub798\uc11c N\uc758 \u542b\u6709\uf97e\uc740 0.01 %\u4ee5\u4e0b\ub85c \u9650\u5b9a\ud55c\ub2e4. Japanese \u305d\u306e\u305f\u3081 N\u306e \u542b\u6709\uf97e\u306f 0.01 %\u4ee5\u4e0b\u306b \u9650\u5b9a\u3059\u308b\u3002 English Therefore, N content 0.01 % or less limit. there are domains in which there are many terms derived from Chinese characters. Therefore, to examine which Hanja information is the most useful in each domain, we perform a domain adaptation by fine-tuning the model pre-trained by all training data using domain-specific data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this study, we examine the effect of Hanja information and domain adaptation in a Koreanto-Japanese translation. The main contributions of this study are as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We demonstrate that Hanja information is effective for Korean to Japanese translations within a specific domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 In addition, our experiment shows that the translation model using Hanja information tends to translate literally.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several studies have been conducted on Korean-Japanese neural machine translation (NMT). Park et al. (2019) trained a Korean-to-Japanese translation model using a transformer-based NMT system with relative positioning, back-translation, and multi-source methods. There have been other attempts that combine statistical machine translation (SMT) and NMT (Ehara, 2018; Hyoung-Gyu Lee and Lee, 2015) . Previous studies on Korean-Japanese NMT did not use Hanja information, whereas we train a Korean-to-Japanese translation model using data in which SK words were converted into Hanja words. Sennrich et al. (2016) proposed byte-pair encoding (BPE), i.e., a sub-word segmentation method, and suggested that overlapping tokens by joint BPE is more effective for training the translation model between European language pairs. Zhang and Komachi (2019) increased the overlap of tokens between Japanese and Chinese by decomposing Japanese Kanji and Chinese characters into an ideograph or stroke level to improve the accuracy of Chinese-Japanese unsupervised NMT. Following previous studies, we convert Korean SK words from Hangul into Hanja to increase the vocabulary overlap. Conneau et al. (2018) proposed a method to build a bilingual dictionary by aligning the source and target word embedding spaces using a rotation matrix. They showed that word-by-word translation using the bilingual dictionary can improve the translation accuracy in low-resource language pairs. Lample and Conneau (2019) improved the translation accuracy by fine-tuning a pre-trained cross-lingual language model (XLM). The authors observed that the bilingual word embeddings in XLM are similar. Based on these facts, we hypothesize that if the embeddings of each corresponding word become closer, the translation accuracy will improve. In this study, we use Hanja information to make the embeddings of each corresponding word closer to each other.", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 366, |
|
"text": "(Ehara, 2018;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 396, |
|
"text": "Hyoung-Gyu Lee and Lee, 2015)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 610, |
|
"text": "Sennrich et al. (2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 821, |
|
"end": 845, |
|
"text": "Zhang and Komachi (2019)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1170, |
|
"end": 1191, |
|
"text": "Conneau et al. (2018)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1465, |
|
"end": 1490, |
|
"text": "Lample and Conneau (2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Some studies have focused on exploiting Hanja information. Park and Zhao (2019) focused on the linguistic connection between Korean and Chinese. They used the parallel data in which the Korean sentences of SK words were converted into Hanja to train a translation model and demonstrated that their method improves the translation quality. In addition, they showed that conversion into Hanja helped translate the homophone of SK words. Yoo et al. 2019proposed an approach of training Korean word representations using the data in which SK words were converted into Hanja words. They demonstrated the effectiveness of the representation learning method on several downstream tasks, such as a news headline generation and sentiment analysis. To train the Korean-to-Japanese translation model, we combine these two approaches using Hanja information for training the translation model and word embeddings to improve the translation quality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 79, |
|
"text": "Park and Zhao (2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In domain adaptation approaches for NMT, Bapna and Firat (2019); Gu et al. 2019trained an NMT model pre-trained with massive parallel data and retrained it with small parallel data within the target domain. In addition, Hu et al. 2019 proposed an unsupervised adaptation method that retrains a pre-trained NMT model using pseudoin-domain data. In this study, we perform domain adaptation by fine-tuning a pre-trained NMT model with domain-specific data to examine whether Hanja information is useful, and if so, in which domains.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our model is based on the transformer architecture (Vaswani et al., 2017) , and we share the embedding weights between the encoder input, decoder input, and output to make better use of the vocabulary overlap between Korean and Japanese. We do not use language embedding (Lample and Conneau, 2019) to distinguish the source and target languages. We propose two models using the Hanja information described below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 73, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 271, |
|
"end": 297, |
|
"text": "(Lample and Conneau, 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMT with Hanja Information", |
|
"sec_num": "3" |
|
}, |
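
{

"text": "As an illustration of the embedding sharing described above, the following is a minimal PyTorch sketch, not the authors' actual fairseq configuration: the class and variable names are ours, and positional encodings and attention masks are omitted for brevity. It shows a single embedding matrix being reused for the encoder input, the decoder input, and the output projection over a joint Korean-Japanese vocabulary.\n\nimport torch\nimport torch.nn as nn\n\nclass TinySharedEmbeddingNMT(nn.Module):\n    def __init__(self, vocab_size=30000, d_model=512):\n        super().__init__()\n        # One embedding table for the joint (shared BPE) vocabulary.\n        self.shared_embed = nn.Embedding(vocab_size, d_model)\n        self.transformer = nn.Transformer(d_model=d_model, nhead=8,\n                                          num_encoder_layers=6,\n                                          num_decoder_layers=6,\n                                          batch_first=True)\n        self.output_proj = nn.Linear(d_model, vocab_size, bias=False)\n        # Tie the output projection to the same weights, so the encoder input,\n        # the decoder input, and the output all share one matrix.\n        self.output_proj.weight = self.shared_embed.weight\n\n    def forward(self, src_tokens, tgt_tokens):\n        src = self.shared_embed(src_tokens)   # encoder input embeddings\n        tgt = self.shared_embed(tgt_tokens)   # decoder input embeddings\n        hidden = self.transformer(src, tgt)\n        return self.output_proj(hidden)       # logits over the joint vocabulary",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "NMT with Hanja Information",

"sec_num": "3"

},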
|
{ |
|
"text": "We expect that the translation accuracy will improve by converting Korean SK words into Hanja to increase the source and target vocabulary overlap. In the Hanja-conversion model, we converted SK words written in Hangul into Hanja via preprocessing. Table 2 presents an example of the Hanja conversion. This conversion can increase the number of superficially matching words with Japanese sentences. We trained the translation model after the conversion.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 256, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "NMT with Hanja Conversion", |
|
"sec_num": "3.1" |
|
}, |
|
{

"text": "In the Hanja-loss model, we make the embeddings of the SK word and its corresponding Japanese Kanji word closer to each other. We use a loss function to achieve this goal as follows:",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "NMT with Hanja Loss",

"sec_num": "3.2"

},
|
{ |
|
"text": "L = L T + N n=1 (1 \u2212 Sim(E(S n ), E(K n )) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMT with Hanja Loss", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where L is the loss function per-batch and L T is the loss of the transformer architecture. In addition, S and K are the lists of SK words and its corresponding Japanese Kanji words in the batch, respectively, and N is the length of the S and K lists (e.g., when the sentence is the example of Table 2 and the batch size is one, S = (\ud568\uc720\ub7c9, \uc774 \ud558, \ud55c\uc815), K = (\u542b\u6709\uf97e, \u4ee5\u4e0b, \u9650\u5b9a) and N = 3.). Here, E is a function that converts words into embedding, and Sim is a cosine similarity function. Therefore, the Hanja-loss function (Equation 1) decreases when the SK word and Japanese Kanji word vectors become more similar. We extract Kanji words in Japanese sentences to obtain K, and then normalize Kanji into traditional Chinese and convert it into Hangul using a Chinese character into Hangul conversion tool. If the conversion tool cannot convert normalized Kanji into Hangul, we remove the Kanji word from K. To obtain S, we search for the same Hangul words from the parallel Korean sentence. The reason for using the Kanji-to-Hangul conversion is the ambiguity of Hangul-to-Hanja conversion. Conversion of Kanji into Hangul is mostly unique 3 . For example, the SK word \"\uc0b0\" can be converted into \"\u5c71 (mountain)\" or \"\u9178 (acid)\" in Hanja and Kanji, respectively, and the SK word into Hanja word conversion has certain ambiguity. By contrast, the Kanji word \"\u5c71\" can be converted uniquely into the SK word \"\uc0b0.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "NMT with Hanja Loss", |
|
"sec_num": "3.2" |
|
}, |
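
{

"text": "To make Equation 1 concrete, the following is a minimal PyTorch sketch of the per-batch Hanja loss; it is not the authors' extended fairseq code, and it assumes that the aligned SK and Kanji vocabulary indices (sk_ids and kanji_ids) have already been extracted with the conversion procedure described above and that embed is the shared embedding module. The similarity term is simply added to the usual transformer loss, so word pairs whose embeddings are already close contribute little to the total.\n\nimport torch.nn.functional as F\n\ndef hanja_loss(transformer_loss, embed, sk_ids, kanji_ids):\n    # sk_ids, kanji_ids: LongTensors of equal length N holding the vocabulary\n    # indices of the aligned SK / Kanji word pairs found in the current batch.\n    sk_vecs = embed(sk_ids)          # (N, d_model) embeddings of SK words\n    kanji_vecs = embed(kanji_ids)    # (N, d_model) embeddings of Kanji words\n    sim = F.cosine_similarity(sk_vecs, kanji_vecs, dim=-1)  # (N,)\n    # Equation 1: L = L_T + sum_n (1 - Sim(E(S_n), E(K_n)))\n    return transformer_loss + (1.0 - sim).sum()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "NMT with Hanja Loss",

"sec_num": "3.2"

},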
|
{ |
|
"text": "We examine the effect of domain adaptation, which uses domain-specific data for retraining the pretrained model trained by all training data. We translate the test data for each domain using a domainspecific translation model. For training and validation, we split the training and development data into four domains: chemistry, electricity, mechanical engineering, and physics using domain information. We use these data to build domain-specific translation models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For testing, we use the domain information annotated with the test-n1 data. However, the test-n2 and test-n3 data do not have domain information. Therefore, we train a domain prediction model by fine-tuning Korean or Japanese BERT (Devlin et al., 2019) using the labeled training data of the Japan Patent Office Patent Corpus 2.0 to predict the domain information of test-n2 and test-n3 data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 252, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation", |
|
"sec_num": "4" |
|
}, |
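
{

"text": "As a rough illustration of the domain prediction step, the following sketch loads a Japanese BERT checkpoint as a four-way sequence classifier using the Hugging Face transformers API; the choice of this API and the helper name predict_domain are our assumptions for illustration, the checkpoint shown is the publicly released bert-base-japanese-whole-word-masking model, and the fine-tuning loop on the domain-labeled patent data is omitted.\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\nDOMAINS = ['Ch', 'El', 'Me', 'Ph']  # chemistry, electricity, mechanical engineering, physics\n\n# The classifier is assumed to be fine-tuned beforehand on the domain-labeled\n# training data of the Japan Patent Office Patent Corpus 2.0.\ntokenizer = AutoTokenizer.from_pretrained('cl-tohoku/bert-base-japanese-whole-word-masking')\nmodel = AutoModelForSequenceClassification.from_pretrained(\n    'cl-tohoku/bert-base-japanese-whole-word-masking', num_labels=len(DOMAINS))\n\ndef predict_domain(sentence):\n    # Tokenize one sentence and return the most probable domain label.\n    inputs = tokenizer(sentence, return_tensors='pt', truncation=True, max_length=512)\n    with torch.no_grad():\n        logits = model(**inputs).logits\n    return DOMAINS[int(logits.argmax(dim=-1))]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Domain Adaptation",

"sec_num": "4"

},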
|
{ |
|
"text": "We use the fairseq 4 implementation of the transformer architecture for the baseline model and the Hanja-conversion model and extend the implementation for the Hanja-loss model. Table 7 presents some specific hyperparameters that are used in all models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 185, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To train the domain prediction model for domain adaptation (Section 4), we used the Bidi-rectionalWordPiece tokenizer, character model of KR-BERT 5 (Lee et al., 2020) as the Korean BERT and the bert-base-japanese-whole-word-masking 4 https://github.com/pytorch/fairseq 5 https://github.com/snunlp/KR-BERT model 6 as the Japanese BERT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 166, |
|
"text": "(Lee et al., 2020)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "To train the Korean-to-Japanese translation model, we used the Korean\u2194Japanese dataset of the Japan Patent Office Patent Corpus 2.0, which consists of training, development, test-n, test-n1, test-n2, and test-n3 data. We apply the following preprocess for each model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We tokenize Korean sentences using MeCab 7 with the mecab-ko dictionary 8 , and Japanese sentences with the IPA dictionary. After tokenization, we delete sentences with more than 200 words from the training data and apply shared byte-pair encoding (BPE, Sennrich et al., 2016) with a 30k merge operation size. Table 3 presents the statistics of the pre-processed data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 276, |
|
"text": "(BPE, Sennrich et al., 2016)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 317, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline model", |
|
"sec_num": null |
|
}, |
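
{

"text": "The tokenization and length filtering described above can be sketched as follows; this is not the authors' exact script, the mecab-ko dictionary path is an assumed install location, and treating the 200-word limit as applying to either side of a pair is our reading. The -Owakati option makes MeCab return space-separated surface tokens, and the shared 30k-merge BPE is applied afterwards.\n\nimport MeCab\n\n# Korean tagger with the mecab-ko dictionary (path is an assumption),\n# Japanese tagger with the default IPA dictionary.\nko_tagger = MeCab.Tagger('-Owakati -d /usr/local/lib/mecab/dic/mecab-ko-dic')\nja_tagger = MeCab.Tagger('-Owakati')\n\ndef tokenize(tagger, sentence):\n    # Return the sentence as a list of surface tokens.\n    return tagger.parse(sentence).strip().split()\n\ndef keep_pair(ko_tokens, ja_tokens, max_len=200):\n    # Drop training pairs in which either side exceeds 200 words.\n    return len(ko_tokens) <= max_len and len(ja_tokens) <= max_len",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Baseline model",

"sec_num": null

},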
|
{ |
|
"text": "NMT with Hanja Conversion To train the Hanja-conversion model, we convert South Korean words into Hanja using a Hanja-tagger 9 and normalize Hanja and Kanji in parallel sentences into traditional Chinese using OpenCC 10 . After conversion, we apply the same pre-processing with the baseline model. Table 3 also presents the statistics of pre-processed data 11 for the Hanja-conversion model. In addition, Table 4 presents the statistics on the overlap of tokens between Korean and Japanese per-domain training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 298, |
|
"end": 305, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 412, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NMT with Hanja Loss We use the same preprocessed data as the baseline model. To extract the set of SK words and Kanji (Section 3.2) ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 131, |
|
"text": "(Section 3.2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We split the training, development, and test-n1 data using domain information, where the distribution of each domain is equal. We use the domain prediction model to split the test-n2 and test-n3 data. After splitting the data, we apply the same pre-processing as the baseline model. In addition, we use the same BPE model as the baseline model. Table 5 presents the test-n2 and test-3 data sizes of each domain. Table 6 presents the BLEU scores of a single model. We indicate the best scores in bold. In a single model, the Hanja-loss model achieves the highest scores for the test-n, test-n1, and test-n2 data. The test-n data reveals an improvement of +0.12 points from the baseline model. The test-n2 data indicate that the Hanjaconversion model cannot translate well on test-n2 data. The reason is that all words in the Korean sentences of test-n2 data are written without any segmentation, which causes many errors in Hanja conversion. Table 8 presents the BLEU and RIBES scores of the ensemble of four models and the domain adaptation ensemble models. When we use Japanese BERT to predict the domain, there is a slight improvement in the test-n and test-n2 data when compared with the baseline model 13 . In addition, Table 8 reveals no difference between the baseline and Hanja-loss models in the ensemble models. Table 9 presents the per-domain dev and test-n1 BLEU scores of each ensemble model. The Hanjaloss model is not better than the baseline model for all data, and there is no difference between the baseline and Hanja-loss models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 352, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 419, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 941, |
|
"end": 948, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 1224, |
|
"end": 1232, |
|
"text": "Table 8", |
|
"ref_id": "TABREF11" |
|
}, |
|
{ |
|
"start": 1322, |
|
"end": 1329, |
|
"text": "Table 9", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Domain Adaptation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Hanja-loss Model", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hanja-conversion Model versus", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "In this section, we compare the Hanja-conversion model and the Hanja-loss model. Table 6 indicates that the Hanja-loss model is better than the Hanjaconversion model in terms of the BLEU scores. The reason for this result is that Hangul into Hanja word conversion errors reduce the translation accuracy in the Hanja-conversion model.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 88, |
|
"text": "Table 6", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hanja-conversion Model versus", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "In this section, we compare the baseline model and the Hanja-loss model. Tables 6, 8 and 9 indicate no difference between the baseline and Hanja-loss models in terms of BLEU and RIBES scores. by WAT 2020 organizers. In human evaluation, the baseline model is better than the Hanja-loss model. However, the improvement in scores is less than +0.01 points. Our experiment using all the training data reveals that there is little difference between the baseline model and the Hanja-loss model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline Model versus Hanja-loss Model", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "In this section, we examine the effect of the Hanjaloss model in the domain-specific data. Table 9 presents the BLEU scores of each model trained by domain-specific data. The Hanja-loss model achieves the highest scores in the Me and Ph domains. Specifically, the test data in the field of physics reveals an improvement of +1.09 points for the baseline model. By contrast, in the Ch domain, there are no improvements in either the domain adaptation model or the model trained by per-domain training data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 98, |
|
"text": "Table 9", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effect of Domain in the Hanja-loss Model", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "In the domain adaptation model of the Hanja-loss model, the test data of Ch indicates a deteriora-tion of -1.25 points for the baseline model. As the reason for this result, we consider that Hanja information is not necessary for the Ch domain because there is more vocabulary overlap than the other domains even without Hanja conversion (Table 4) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 338, |
|
"end": 347, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLEU Scores", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Outputs Table 11 presents a successful output example of the Hanja-loss model. The baseline model cannot translate the word \"\ub2e8\uc0ac \uc12c\ub3c4,\" which means \"single yarn fiber,\" in the source sentence well, but the Hanja-loss model can translate it correctly. We also found that the Hanja-loss model tends to translate literally. In the output of Table 11 , the baseline model translates the word \"\uc0ac\uc6a9, \"which means \"use,\" into \"\u7528\u3044,\" whereas the Hanja-loss model translates it into \"\u4f7f\u7528.\" The word \"\uc0ac\uc6a9\" can translate into both \"\u7528\u3044\" and \"\u4f7f\u7528,\" but \"\u4f7f\u7528\" is the Hanja form of the SK word \"\uc0ac\uc6a9.\"", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 16, |
|
"text": "Table 11", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 343, |
|
"text": "Table 11", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLEU Scores", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Table 12 's output, the Hanja-loss model translates the word \"\uac1c,\" which means \"piece\" into \"\u500b,\" which is the Hanja form of the SK word of \"\uac1c,\" but the translated word in the reference is \"\u3064.\" Therefore, in the Hanja-loss model, if the reference sentence is not a literal translation, the BLEU scores are low.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 11, |
|
"text": "Table 12", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BLEU Scores", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we described our NMT system submitted to the Patent task (Korean\u2192Japanese) of the", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and Future Work", |
|
"sec_num": "7" |
|
}, |
|
|
{ |
|
"text": "In this study, we focused on vocabulary overlap between Korean Hanja words and Japanese Kanji words. In addition, many Hanja and Kanji words are of Chinese origin. Therefore, in the future, we will attempt to develop a translation method that takes advantage of the vocabulary overlap among Korean, Japanese, and Chinese.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Source", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here, test-n, test-n1, test-n2, and test-n3 data, and test-n consist of test-n1, test-n2, and test-n3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Kanji-to-Hangul conversion has certain ambiguity owing to the initial sound rule in the Korean language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/cl-tohoku/bert-japanese 7 http://taku910.github.io/mecab/ 8 https://bitbucket.org/eunjeon/mecab-ko-dic/src/master/ 9 https://github.com/kaniblu/hanja-tagger 10 https://github.com/BYVoid/OpenCC 11 The number of tokens differs from the baseline because we apply the tokenization after Hanja conversion.12 https://pypi.org/project/Hanja/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "However, in Korean-to-Japanese translation, we cannot use Japanese BERT to predict the domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work has been partly supported by the programs of the Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (JSPS KAKENHI) Grant Number 19K12099.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Simple, scalable adaptation for neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Bapna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Orhan", |
|
"middle": [], |
|
"last": "Firat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Word translation without parallel data", |
|
"authors": [ |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc'aurelio", |
|
"middle": [], |
|
"last": "Ranzato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ludovic", |
|
"middle": [], |
|
"last": "Denoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Herv\u00e9", |
|
"middle": [], |
|
"last": "J\u00e9gou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In Interna- tional Conference on Learning Representations.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "SMT reranked NMT(2)", |
|
"authors": [ |
|
{ |
|
"first": "Terumasa", |
|
"middle": [], |
|
"last": "Ehara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 5th Workshop on Asian Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Terumasa Ehara. 2018. SMT reranked NMT(2). In Proceedings of the 5th Workshop on Asian Transla- tion.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Improving domain adaptation translation with domain invariant and specific information", |
|
"authors": [ |
|
{ |
|
"first": "Shuhao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shuhao Gu, Yang Feng, and Qun Liu. 2019. Improving domain adaptation translation with domain invariant and specific information. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Domain adaptation of neural machine translation by lexicon induction", |
|
"authors": [ |
|
{ |
|
"first": "Junjie", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mengzhou", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graham", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Junjie Hu, Mengzhou Xia, Graham Neubig, and Jaime Carbonell. 2019. Domain adaptation of neural ma- chine translation by lexicon induction. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "NAVER machine translation system for WAT", |
|
"authors": [ |
|
{ |
|
"first": "Jun-Seok Kim Hyoung-Gyu", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaesong", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chang-Ki", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2nd Workshop on Asian Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun-Seok Kim Hyoung-Gyu Lee, Jaesong Lee and Chang-Ki Lee. 2015. NAVER machine translation system for WAT 2015. In Proceedings of the 2nd Workshop on Asian Translation.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Crosslingual language model pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "Lample", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexis", |
|
"middle": [], |
|
"last": "Conneau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. Advances in Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A history of the Korean language", |
|
"authors": [ |
|
{ |
|
"first": "Ki-Moon", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S Robert", |
|
"middle": [], |
|
"last": "Ramsey", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ki-Moon Lee and S Robert Ramsey. 2011. A history of the Korean language. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "KR-BERT: A smallscale Korean-specific language model", |
|
"authors": [ |
|
{ |
|
"first": "Sangah", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hansol", |
|
"middle": [], |
|
"last": "Jang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunmee", |
|
"middle": [], |
|
"last": "Baik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Suzi", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyopil", |
|
"middle": [], |
|
"last": "Shin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sangah Lee, Hansol Jang, Yunmee Baik, Suzi Park, and Hyopil Shin. 2020. KR-BERT: A small- scale Korean-specific language model. ArXiv, abs/2008.03979.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Overview of the 7th workshop on Asian translation", |
|
"authors": [ |
|
{ |
|
"first": "Toshiaki", |
|
"middle": [], |
|
"last": "Nakazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideki", |
|
"middle": [], |
|
"last": "Nakayama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenchen", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raj", |
|
"middle": [], |
|
"last": "Dabre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hideya", |
|
"middle": [], |
|
"last": "Mino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isao", |
|
"middle": [], |
|
"last": "Goto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Win", |
|
"middle": [ |
|
"Pa" |
|
], |
|
"last": "Pa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Kunchukuttan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shantipriya", |
|
"middle": [], |
|
"last": "Parida", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ond\u0159ej", |
|
"middle": [], |
|
"last": "Bojar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 7th Workshop on Asian Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ond\u0159ej Bojar, and Sadao Kurohashi. 2020. Overview of the 7th workshop on Asian translation. In Proceedings of the 7th Workshop on Asian Trans- lation.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "KNU-HYUNDAI's NMT system for scientific paper and patent tasks on WAT 2019", |
|
"authors": [ |
|
{ |
|
"first": "Cheoneum", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Young-Jun", |
|
"middle": [], |
|
"last": "Jung", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kihoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geonyeong", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jae-Won", |
|
"middle": [], |
|
"last": "Jeon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seongmin", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junseok", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Changki", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 6th Workshop on Asian Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cheoneum Park, Young-Jun Jung, Kihoon Kim, Geonyeong Kim, Jae-Won Jeon, Seongmin Lee, Junseok Kim, and Changki Lee. 2019. KNU- HYUNDAI's NMT system for scientific paper and patent tasks on WAT 2019. In Proceedings of the 6th Workshop on Asian Translation.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Korean-to-Chinese machine translation using Chinese character as pivot clue", |
|
"authors": [ |
|
{ |
|
"first": "Jeonghyeok", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hai", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "33rd Pacific Asia Conference on Language, Information and Computation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeonghyeok Park and Hai Zhao. 2019. Korean-to- Chinese machine translation using Chinese charac- ter as pivot clue. In 33rd Pacific Asia Conference on Language, Information and Computation.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Neural machine translation of rare words with subword units", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Korean language in culture and society", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ho-Min Sohn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ho-Min Sohn. 2006. Korean language in culture and society. University of Hawaii Press.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "\u0141ukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Don't just scratch the surface: Enhancing word representations for Korean with Hanja", |
|
"authors": [ |
|
{ |
|
"first": "Taeuk", |
|
"middle": [], |
|
"last": "Kang Min Yoo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sang-Goo", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kang Min Yoo, Taeuk Kim, and Sang-goo Lee. 2019. Don't just scratch the surface: Enhancing word rep- resentations for Korean with Hanja. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Chinese-Japanese unsupervised neural machine translation using sub-character level information", |
|
"authors": [ |
|
{ |
|
"first": "Longtu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mamoru", |
|
"middle": [], |
|
"last": "Komachi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "33rd Pacific Asia Conference on Language, Information and Computation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Longtu Zhang and Mamoru Komachi. 2019. Chinese- Japanese unsupervised neural machine translation using sub-character level information. In 33rd Pa- cific Asia Conference on Language, Information and Computation.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Example of conversion of SK words in Hangul into Hanja and Japanese translation into Kanji.", |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Korean sentence and sentence in which SK words are converted into Hanja, together with their Japanese and English translations.", |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "Statistics of parallel data after each pre-processing.", |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"text": "Statistics on vocabulary overlap between Korean and Japanese per-domain training data. The tokens and types are the numbers of overlap words and their types, and percentage is the percentage of their numbers to all data.", |
|
"content": "<table><tr><td>Model</td><td/><td>Ch</td><td>El</td><td>Me</td><td>Ph</td></tr><tr><td>Baseline</td><td>Tokens / Percentage Types / Percentage</td><td>270,406 / 3.9 5,681 /31.0</td><td>86,262 / 1.1 5,070 / 29.3</td><td>63,837 / 0.6 4,202 / 22.8</td><td>93,516 / 1.1 5,442/ 29.6</td></tr><tr><td>Hanja-conversion</td><td colspan=\"5\">Tokens / Percentage 1,697,758 / 24.2 1,525,433 / 20.1 1,678,554 / 17.7 1,530,997 / 19.0 Types / Percentage 10,803 / 49.5 9,010 / 45.0 8,449 / 39.3 10,165 / 46.8</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"text": "The test-n2 and test-n3 data size of each domain, which are predicted by the domain prediction models (Section 4). Korean BERT and Japanese BERT are the models used to train the domain prediction models.41\u00b10.11 71.84\u00b10.18 72.46\u00b10.09 73.42\u00b10.25 45.12\u00b10.50 Hanja-conversion 67.86\u00b10.14 63.70\u00b10.24 71.96\u00b10.23 56.68\u00b10.43 44.60\u00b10.44 Hanja-loss 68.47\u00b10.07 71.96\u00b10.14 72.60\u00b10.07 73.55\u00b10.22 44.85\u00b10.50", |
|
"content": "<table><tr><td>Model</td><td>dev</td><td>test-n</td><td>test-n1</td><td>test-n2</td><td>test-n3</td></tr><tr><td>Baseline</td><td>68.</td><td/><td/><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"text": "BLEU scores of each single model. These BLEU scores are the average of the four models. The Hanjaloss model achieves the highest scores in the test-n, test-n1, and test-n2 data.", |
|
"content": "<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Embedding dimension</td><td>512</td></tr><tr><td>Attention heads</td><td>8</td></tr><tr><td>Layers</td><td>6</td></tr><tr><td>Optimizer</td><td>Adam</td></tr><tr><td>Adam betas</td><td>0.9, 0.98</td></tr><tr><td>Learning rate</td><td>0.0005</td></tr><tr><td>Dropout</td><td>0.1</td></tr><tr><td>Label smoothing</td><td>0.1</td></tr><tr><td>Max tokens</td><td>4,098</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"text": "Hyperparameters.", |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF10": { |
|
"html": null, |
|
"text": "-73.40 / 0.9504 73.76 / 0.9495 74.70 / 0.9543 53.70 / 0.9066 Hanja-loss 69.40 / -73.40 / 0.9504 73.81 / 0.9495 74.67 / 0.9544 53.73 / 0.9056 Korean BERT Baseline 69.20 / -73.39 / 0.9503 73.85 / 0.9494 74.66 / 0.9540 53.24 / 0.9063 Hanja-loss 69.26 / -73.38 / 0.9505 73.90 / 0.9495 74.61 / 0.9545 53.24 / 0.9060", |
|
"content": "<table><tr><td>presents the results of human evalua-</td></tr><tr><td>tion. These figures are adequacy scores evaluated</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF11": { |
|
"html": null, |
|
"text": "BLEU/RIBES scores of each ensemble of four models. The bottom two rows are the scores of the ensemble models retrained by each domain-specific data. Korean BERT and Japanese BERT represent the experimental results using the domain prediction models.", |
|
"content": "<table><tr><td/><td/><td>Ch</td><td/><td>El</td><td/><td>Me</td><td>Ph</td></tr><tr><td/><td>Model</td><td colspan=\"2\">dev test-n1</td><td colspan=\"2\">dev test-n1</td><td>dev test-n1</td><td>dev test-n1</td></tr><tr><td>Domain adaptation</td><td colspan=\"2\">Baseline Hanja-loss 70.08 69.57</td><td colspan=\"2\">73.92 68.45 72.67 68.26</td><td colspan=\"2\">75.83 69.16 76.15 69.10</td><td>70.17 71.53 70.25 71.41</td><td>76.34 76.45</td></tr><tr><td/><td>Baseline</td><td>63.29</td><td colspan=\"2\">66.67 62.68</td><td colspan=\"2\">69.14 65.23</td><td>65.83 64.53</td><td>68.93</td></tr><tr><td/><td colspan=\"2\">Hanja-loss 63.23</td><td colspan=\"2\">66.78 62.20</td><td colspan=\"2\">69.15 65.29</td><td>66.06 65.27</td><td>70.12</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF12": { |
|
"html": null, |
|
"text": "The per-domain dev and test-n1 BLEU scores of each domain adaptation model and each single model trained by per-domain training data.", |
|
"content": "<table><tr><td>Model</td><td>Adequacy</td></tr><tr><td>Baseline</td><td>4.71</td></tr><tr><td>Hanja-loss</td><td>4.70</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF13": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Human evaluation of each domain adaptation</td></tr><tr><td>ensemble models in test-n data. We use Japanese BERT</td></tr><tr><td>for domain prediction model.</td></tr></table>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF14": { |
|
"html": null, |
|
"text": "A successful output example of the Hanja-loss model.Reference\u4e00\u3064\u306e \u614b\u69d8 \u3067 \u306f \u3001\uff32 \uff11 \u53ca\u3073 \uff32 \uff12 \u306f \uff11\u3064 \u4ee5\u4e0a \u306e \u91cd\u6c34\u7d20 \u539f\u5b50 \u3092 \u542b\u3080 \u3002 EnglishIn one form, R1 and R2 include one or more deuterium atoms. Source\ud558\ub098 \uc758 \uc2e4\uc2dc \uc591\ud0dc \uc5d0\uc11c , R 1 \ubc0f R 2 \ub294 1 \uac1c \uc774\uc0c1 \uc758 \uc911\uc218\uc18c \uc6d0\uc790 \ub97c \ud3ec\ud568 \ud55c\ub2e4 . Source (Hanja) \ud558\ub098 \uc758 \u5b9f\u65bd \u69d8\u614b\uc5d0\uc11c , R 1 \ubc0f R 2 \ub294 1 \u500b \u4ee5\u4e0a \uc758 \u91cd\u6c34\u7d20 \u539f\u5b50 \ub97c \u5305\u542b \ud55c\ub2e4 . Baseline \u4e00 \u5b9f\u65bd \u614b\u69d8 \u3067 \u306f \u3001\uff32 1 \u304a\u3088\u3073 \uff32 2 \u306f \uff11\u3064 \u4ee5\u4e0a \u306e \u91cd\u6c34\u7d20 \u539f\u5b50 \u3092 \u542b\u3080 \u3002 Hanja-loss \u4e00 \u5b9f\u65bd \u614b\u69d8 \u3067 \u306f \u3001\uff32 1 \u304a\u3088\u3073 \uff32 2 \u306f \u3001\uff11 \u500b \u4ee5\u4e0a \u306e \u91cd\u6c34\u7d20 \u539f\u5b50 \u3092 \u542b\u3080 \u3002", |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF15": { |
|
"html": null, |
|
"text": "An unsuccessful output example of the Hanja-loss model.", |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |