{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:13:50.271283Z"
},
"title": "Towards an Efficient Code-Mixed Grapheme-to-Phoneme Conversion in an Agglutinative Language: A Case Study on To-Korean Transliteration",
"authors": [
{
"first": "Ik",
"middle": [],
"last": "Cho",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Seoul National University",
"location": {}
},
"email": ""
},
{
"first": "Min",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Seoul National University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Nam",
"middle": [
"Soo"
],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Seoul National University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Code-mixed grapheme-to-phoneme (G2P) conversion is a crucial issue for modern speech recognition and synthesis task, but has been seldom investigated in sentence-level in literature. In this study, we construct a system that performs precise and efficient multi-stage code-mixed G2P conversion, for a less studied agglutinative language, Korean. The proposed system undertakes a sentence-level transliteration that is effective in the accurate processing of Korean text. We formulate the underlying philosophy that supports our approach and demonstrate how it fits with the contemporary document.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Code-mixed grapheme-to-phoneme (G2P) conversion is a crucial issue for modern speech recognition and synthesis task, but has been seldom investigated in sentence-level in literature. In this study, we construct a system that performs precise and efficient multi-stage code-mixed G2P conversion, for a less studied agglutinative language, Korean. The proposed system undertakes a sentence-level transliteration that is effective in the accurate processing of Korean text. We formulate the underlying philosophy that supports our approach and demonstrate how it fits with the contemporary document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Grapheme-to-phoneme (G2P) conversion is an essential process for speech recognition and synthesis. It converts textual information called grapheme into phonetic information called phoneme. The graphemes, represented by symbols that let the language users pronounce, are not real audio data, nor do not have a necessary correspondence with the genuine sound. For example, 'apple' sounds more like aepl, while 'America' sounds like 9m\u00e9rik9. This process implies that the character 'a' does not have a direct correspondence with the sound 'ae' or '9'; instead, the appropriate symbol to transcribe each pronunciation might have been 'a'. This is influenced by that the English alphabet is a segmental script, but other writing systems do not necessarily guarantee greater correspondence. For example, in the case of logograms such as Chinese characters, there is little relationship between the composition of the character (bushu) and the pronunciation of the symbol (Figure 1, top) . In a little different viewpoint notwithstanding, Hangul representation of Korean is a featural writing system (Daniels and Bright, 1996) in which each sub-characters of morphosyllabic blocks corresponds to a phonetic property ( Figure 1 , bottom) (Kim-Renaud, 1997) . For instance, in a syllable khak placed at the right end of the bottom of Figure 1 , the three clock-wisely arranged characters kh, a, and k, which sound khiukh (among 19 candidates), ah (among 21 candidates), and kiyek (among 27 candidates), refers to the first, the second and the third sound of the given character, respectively (Cho et al., 2019) . This is a unique feature of the Korean writing system, which distinguishes Hangul from Chinese characters that do not have a direct relationship with syllable pronunciations. Also, Hangul is more delicately decomposed compared to mora-level Japanese Kana. Due to the above characteristics, the process of transforming grapheme in Korean to phoneme is widely performed by using the Korean alphabet itself (Jeon et al., 1998; Kim et al., 2002) , that is, the Hangul sub-character Jamo, unlike cases such as Chinese pinyin that borrows the English alphabet (Figure 1, top) . For this reason, even though the widely used Korean G2P sometimes uses English expressions (Cho, 2017) , the full phoneme sequence is primarily written in Hangul Jamo, to reflect the Korean pronunciation system. This property, the grapheme and phoneme set sharing the same symbols, allows Korean G2P a phonological approach within the language itself. Currently, Korean G2P systems in use (Cho, 2017; Park, 2019) follow the pronunciation rules of the National Institute of Korean Language in principle, and we can confirm that the conventional modules perform well on a rulebased basis. However, in this study, we implement a preprocessing module for challenging code-mixed G2P, which regards co-existing Korean and non-Korean expressions (Shim, 1994) , considering the case where the basis cannot be found in the monolingual rule. In specific, we deal with the English alphabet and Chinese characters, and mainly on the former 1 , concerning that environment in which English is mixed with text often exist in modern scripts such as technical reports or scripts (Shim, 1994; Sitaram et al., 2019) .",
"cite_spans": [
{
"start": 1094,
"end": 1120,
"text": "(Daniels and Bright, 1996)",
"ref_id": "BIBREF4"
},
{
"start": 1232,
"end": 1250,
"text": "(Kim-Renaud, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 1585,
"end": 1603,
"text": "(Cho et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 2010,
"end": 2029,
"text": "(Jeon et al., 1998;",
"ref_id": "BIBREF6"
},
{
"start": 2030,
"end": 2047,
"text": "Kim et al., 2002)",
"ref_id": "BIBREF9"
},
{
"start": 2269,
"end": 2280,
"text": "(Cho, 2017)",
"ref_id": "BIBREF3"
},
{
"start": 2567,
"end": 2578,
"text": "(Cho, 2017;",
"ref_id": "BIBREF3"
},
{
"start": 2579,
"end": 2590,
"text": "Park, 2019)",
"ref_id": null
},
{
"start": 2917,
"end": 2929,
"text": "(Shim, 1994)",
"ref_id": "BIBREF16"
},
{
"start": 3241,
"end": 3253,
"text": "(Shim, 1994;",
"ref_id": "BIBREF16"
},
{
"start": 3254,
"end": 3275,
"text": "Sitaram et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 965,
"end": 981,
"text": "(Figure 1, top)",
"ref_id": "FIGREF0"
},
{
"start": 1212,
"end": 1221,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1327,
"end": 1335,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 2160,
"end": 2175,
"text": "(Figure 1, top)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
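The featural make-up of Hangul described above can be made concrete with plain Unicode arithmetic. The sketch below is our own illustration, not part of the paper's released code: it factors a precomposed syllable in the U+AC00..U+D7A3 block into its first, second, and third sounds using the standard 19 x 21 x 28 decomposition.

```python
# Minimal sketch (ours): decompose a Hangul syllable block into its featural
# sub-characters (jamo) via Unicode arithmetic.  A precomposed syllable S
# satisfies S = 0xAC00 + (lead*21 + vowel)*28 + tail, with 19 lead consonants,
# 21 vowels, and 27 tail consonants (+1 for "no tail").

LEADS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")                     # 19 candidates
VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")               # 21 candidates
TAILS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 (+ none)

def decompose(syllable: str):
    """Return the (first, second, third) sounds of one Hangul syllable block."""
    code = ord(syllable) - 0xAC00
    if not 0 <= code < 19 * 21 * 28:
        raise ValueError(f"{syllable!r} is not a precomposed Hangul syllable")
    lead, rest = divmod(code, 21 * 28)
    vowel, tail = divmod(rest, 28)
    return LEADS[lead], VOWELS[vowel], TAILS[tail]

if __name__ == "__main__":
    print(decompose("칵"))  # the syllable khak from Figure 1 -> ('ㅋ', 'ㅏ', 'ㄱ')
```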
{
"text": "Due to the human language being arbitrary, there are limitations in obtaining phoneme sequences using only the rules in some cases. Firstly, because code-mixing is not restricted only to two languages (ko-en), that English letters co-existing with Chinese characters and numbers are also observable. Second, various acronyms with nondeterministic pronunciation exist and are frequently utilized (e.g., word2vec, G2P), usually not spoken in a codeswitched way. Finally, due to the agglutinative property of the Korean language, it is often vague to decide which phrase to transform within a sentence. Accordingly, we decided to fully utilize the information given by existing libraries and dictionaries to implement a sentence-level transliteration for Korean/English code-mixed G2P, taking into account the syntactic property of decomposed tokens. The contributions of this study and demonstration are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 Easily adjustable multi-stage system for a sentencelevel code-mixed Korean G2P; detecting foreign expressions and replacing them with Hangul terms",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 Suggesting morphological and phonological tricks that can handle the pronunciation of cumbersome non-Korean expressions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The system and code is to be publicly available 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Sentence-level transliteration may seem simple, but it is involved in all phonetics, phonology, and morphology. In other words, at least the background in the Korean writing system, morphological analysis, Korean-English codemixed writing is essential for implementing en-ko codeswitching G2P (Kim et al., 2002) . Phonetically, the Korean language is a language pronounced as a sequence of syllables, and the related phonemes locally correspond to graphemes represented by morpho-syllabic blocks (Kim-Renaud, 1997) . The grapheme consists of a block as a single character, and is decomposed to sub-characters of first to third sound; CV(C). They are spoken straightforwardly in singleton cases, but when two or more characters are contiguous, the pronunciation differs from that of the single one (Jeon et al., 1998) . A code-mixed sentence, in this paper, is a Korean utterance (mainly written in text), where the syntax follows the Korean grammar, but some content phrases (non-functional expressions) are replaced by non-Korean terms, including English, Chinese and some numbers ( Figure 2 ) (Shim, 1994) . These expressions are often not promising in pronunciation for users of the same language, and acronyms are often confusing to resolve even when the source language is known (e.g., LREC as el-rec, or AAAI as triple-A-I). Therefore, for G2P, the biggest problem that code-mixed sentences bring is the difficulty of applying a rule for generating a phoneme sequence for speech processing, especially speech synthesis (Chandu et al., 2017) . This, in turn, is directly related to the difficulty of transliteration (Sitaram et al., 2019) . Figure 2 : Given that the modern Korean writing system does not utilize Chinese characters, the sentence above is a code-mixed Korean sentence with Chinese characters (green), English words (blue), and numbers (yellow). The translation is: \"From the point of view of G2P, the biggest problem with code-mixed text is that it is difficult to apply the rules for generating existing phonetic sequences in the use of text for speech processing, especially for speech synthesis.\"",
"cite_spans": [
{
"start": 293,
"end": 311,
"text": "(Kim et al., 2002)",
"ref_id": "BIBREF9"
},
{
"start": 496,
"end": 514,
"text": "(Kim-Renaud, 1997)",
"ref_id": "BIBREF10"
},
{
"start": 797,
"end": 816,
"text": "(Jeon et al., 1998)",
"ref_id": "BIBREF6"
},
{
"start": 1095,
"end": 1107,
"text": "(Shim, 1994)",
"ref_id": "BIBREF16"
},
{
"start": 1525,
"end": 1546,
"text": "(Chandu et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 1621,
"end": 1643,
"text": "(Sitaram et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 1084,
"end": 1092,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1646,
"end": 1654,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2."
},
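As a rough illustration of the script mixture that Figure 2 color-codes, the sketch below tags each character of a toy code-mixed sentence by Unicode range. The sample sentence and the helper are ours, not the paper's data or code.

```python
# Sketch (ours): label each character of a code-mixed sentence by script,
# mirroring the color coding described for Figure 2
# (Hangul, Chinese characters, Latin letters, digits).

def script_of(ch: str) -> str:
    code = ord(ch)
    if 0xAC00 <= code <= 0xD7A3 or 0x1100 <= code <= 0x11FF or 0x3130 <= code <= 0x318F:
        return "hangul"          # precomposed syllables and jamo
    if 0x4E00 <= code <= 0x9FFF:
        return "hanja"           # CJK unified ideographs
    if ch.isascii() and ch.isalpha():
        return "latin"
    if ch.isdigit():
        return "digit"
    return "other"               # punctuation, whitespace, etc.

if __name__ == "__main__":
    sample = "G2P 관점에서 code-mixed 文章은 어렵다."  # toy code-mixed sentence (ours)
    print([(ch, script_of(ch)) for ch in sample if script_of(ch) != "other"])
```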
{
"text": "There has been a lot of work on word-level transliteration process and its evaluation (Kang and Kim, 2000; Oh and Choi, 2002; Oh and Choi, 2005; Oh et al., 2006) , but little on the sentence-level processings. Unlike in English, where each word consists of either source or target language that arbitrary word can be transliterated into the source language, In Korean code-mixed sentences, it is usual that the foreign expressions are augmented with the functional particles, in a truly code-mixed format in morphological level ( Figure 2 ). It looks like a pidgin language, but is fully comprehensible by native readers since the symbols are distinguished. It is assumed that many industrial units are applying various heuristics to handle them, but we could not find an established academic approach for this issue. Built on the preceding discussions on code-mixed sentences and transliteration, we provide a detailed description of our resolution afterward.",
"cite_spans": [
{
"start": 86,
"end": 106,
"text": "(Kang and Kim, 2000;",
"ref_id": "BIBREF7"
},
{
"start": 107,
"end": 125,
"text": "Oh and Choi, 2002;",
"ref_id": "BIBREF12"
},
{
"start": 126,
"end": 144,
"text": "Oh and Choi, 2005;",
"ref_id": "BIBREF13"
},
{
"start": 145,
"end": 161,
"text": "Oh et al., 2006)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 530,
"end": 538,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background",
"sec_num": "2."
},
{
"text": "As the main contribution of this paper, we will implement a sentence-level code-mixed G2P that operates efficiently. For this, we took two methods into account.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
{
"text": "(1) On training a transformation module that maps Korean/non-Korean code-mixed raw text directly to Korean phoneme sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
{
"text": "(2) Multi-stage method of primarily changing non-Korean vocabulary to Korean pronunciation in code-mixed text and applying separate G2P module",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
{
"text": "The method of (1) is very suitable for utilizing neural network-based training and the implementation of end-toend speech recognition/synthesis system, but usually, the number of Korean lexicons is significantly higher than the English vocabulary size. In other words, as long as Korean sentences are used for speech recognition or synthesis, a large amount of artificially made code-mixed sentences are required for reliable learning, of which the effectiveness is not guaranteed. In addition, it does not seem data-efficient in that Korean text dominant in the dataset may deter the enhancement of transliterating arbitrary foreign expressions. These issues can result in the degradation of G2P precision and the performance of recognition/synthesis. On the other hand, in (2), non-Korean expressions are transliterated into Korean primarily, and then rule-based precise Korean G2P is performed. For the latter part of the process, a well-used module already exists (Cho, 2017; Park, 2019) , so we can concentrate on performing the former task, the transliteration to Korean. In this study, we adopt (2), mainly enhancing the transliteration process by detecting English and other non-Korean expressions (including Chinese characters and numbers) in code-mixed sentences and transforming them into Korean pronunciation. The specific procedure using the method of (2) is as Following (Figure 3 ).",
"cite_spans": [
{
"start": 968,
"end": 979,
"text": "(Cho, 2017;",
"ref_id": "BIBREF3"
},
{
"start": 980,
"end": 991,
"text": "Park, 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1385,
"end": 1394,
"text": "(Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
{
"text": "1. Detecting phrases with code-mixed expressions: First of all, in the result of merely splitting a sentence into white space, detect an eojeol (Korean term for a whitespace-split word) containing an English or non-Korean (Chinese characters, numbers) expressions. In this process, Unicode information is exploited. The tokenization is done basically by a morphological analyzer, and each eojeol is considered as a chunk of morphemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
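A minimal sketch of step 1 as we read it (the released system may differ): an eojeol obtained by a naive whitespace split is flagged for transliteration whenever it contains Latin letters, digits, or CJK ideographs.

```python
# Sketch (ours) of step 1: detect eojeols that contain non-Hangul expressions
# and therefore need transliteration before Korean G2P.

def needs_transliteration(eojeol: str) -> bool:
    """True if the whitespace-split word contains Latin letters, digits,
    or CJK ideographs (Chinese characters)."""
    for ch in eojeol:
        if ch.isascii() and (ch.isalpha() or ch.isdigit()):
            return True
        if 0x4E00 <= ord(ch) <= 0x9FFF:
            return True
    return False

def detect_targets(sentence: str):
    """Index the eojeols of interest after a naive whitespace split."""
    return [(i, e) for i, e in enumerate(sentence.split()) if needs_transliteration(e)]

if __name__ == "__main__":
    # toy sentence loosely modeled on the Figure 3 example
    print(detect_targets("NeurIPS 논문을 올해 안에 쓰자"))  # -> [(0, 'NeurIPS')]
```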
{
"text": "2. Separating context: Subsequently, use the eojeols of interest as the target of transformation, except for the functional particles (if present). In this process, the outcome of the morphological analyzer above is adopted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
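A minimal sketch of step 2 under an assumption we make explicit: the morphological analyzer returns (surface, POS) pairs with Sejong-style tags in which functional particles (josa) carry tags beginning with 'J', as a MeCab-ko style analyzer does. The tag names in the example (SL, JKO) follow that convention and are illustrative; the actual analyzer output may differ.

```python
# Sketch (ours) of step 2: peel the trailing functional particles off an
# analyzed eojeol so that only the content part is sent to transliteration.

def split_content_and_particle(morphs):
    """morphs: list of (surface, POS) pairs for one eojeol.
    Returns (content part, trailing particles)."""
    i = len(morphs)
    while i > 1 and morphs[i - 1][1].startswith("J"):   # trailing josa tags
        i -= 1
    content = "".join(m for m, _ in morphs[:i])
    particles = "".join(m for m, _ in morphs[i:])
    return content, particles

if __name__ == "__main__":
    # 'sequence를' analyzed as a foreign noun plus the object particle '를'
    print(split_content_and_particle([("sequence", "SL"), ("를", "JKO")]))
    # -> ('sequence', '를'); only 'sequence' goes to the transliterator
```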
{
"text": "3. Hybrid transliteration: Finally, transliterate the detected English/non-Korean expressions into Korean pronunciation. It is viable to use a dictionary or train a neural network-based model, but we want to mix the two approaches. In more detail, one can collect a variety of English loanwords, and list them with the commonly used (lexicographical) Korean pronunciations, using it as a dictionary. After the primary rule-based transliteration, a trained transliteration system can be used for words that do not fall into the pre-defined categories. In this process, Chinese characters and numbers are all taken into account, along with the context that is present in the rest of eojeol. The tricks used here are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
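A sketch of the dictionary-first, model-second policy of step 3. The tiny dictionary below stands in for the roughly 37K-entry codebook, and `model.translate()` is a placeholder interface for any trained word-level transliterator; neither is the paper's released artifact.

```python
# Sketch (ours) of the hybrid look-up-then-model transliteration policy.

LOANWORD_DICT = {          # tiny stand-in for the ~37K-entry codebook
    "code": "코드",
    "sequence": "시퀀스",
    "rule": "룰",
}

def transliterate_word(word: str, model=None) -> str:
    """Dictionary first; fall back to a trained seq2seq model for OOV words."""
    key = word.lower()
    if key in LOANWORD_DICT:
        return LOANWORD_DICT[key]          # precise look-up output
    if model is not None:
        return model.translate(key)        # hypothetical model interface
    return word                            # last resort: leave it untouched

if __name__ == "__main__":
    print(transliterate_word("sequence"))  # -> '시퀀스'
```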
{
"text": "\u2022 Trick 1. On Chinese characters: All Chinese characters are replaced with corresponding Hangul symbols, since such cases are Sino-Koreans which already have an established pronunciation. Here, a subsequent chunk of Chinese characters is tied and transformed together to reflect the possible change of pronunciation regarding word-initial rules. If the Chinese character and numbers/English alphabet come together, the Chinese characters are transformed first, followed by the transliteration of other parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
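A sketch of Trick 1 (our illustration, not the released code): a contiguous Hanja chunk is converted as one unit so that the Korean word-initial rule can adjust its first syllable. The two tables below are tiny stand-ins for a full Hanja reading resource such as the hanja library.

```python
# Sketch (ours) of Trick 1: convert a Hanja chunk to Hangul readings and
# apply the word-initial (dueum beopchik) adjustment to the first syllable.

HANJA_READINGS = {"論": "론", "文": "문", "理": "리", "學": "학"}   # illustrative excerpt
WORD_INITIAL = {"론": "논", "리": "이"}                             # a few adjustments

def read_hanja_chunk(chunk: str) -> str:
    """Replace each Chinese character with its Sino-Korean reading, then
    adjust the chunk-initial syllable according to the word-initial rule."""
    syllables = [HANJA_READINGS.get(ch, ch) for ch in chunk]
    if syllables:
        syllables[0] = WORD_INITIAL.get(syllables[0], syllables[0])
    return "".join(syllables)

if __name__ == "__main__":
    print(read_hanja_chunk("論文"))  # -> '논문' (not '론문')
    print(read_hanja_chunk("文學"))  # -> '문학'
```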
{
"text": "\u2022 Trick 2. On numbers: For lone case, the pronunciation may follow the corresponding Chinese character as default, and if not alone, the tokens nearby are taken into account. If a number is placed between English words, consider using the result of transliteration of English words into Korean (e.g., 2 = two > thu, 4 = four > pho). Even when a number is between the English alphabet and Chinese/Korean at the same time, the pronunciation may follow English, as in the case of 'number 3 kka-ci (till number 3)'. Otherwise, between Chinese characters, the number is read as in Trick 1. If the number between is followed by Korean Hangul, the cardinality, ordinality, or being Sino-Korean of the number is determined upon a convention, which might change the pronunciation. This follows the conventions of the Korean language, and can be modified based on the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
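A simplified sketch of Trick 2's context rule: a digit flanked by English letters is read as English and transliterated, otherwise it falls back to the Sino-Korean reading. The loanword spellings are illustrative, and the real module also covers ordinals, counters, and native Korean numerals, which are omitted here.

```python
# Sketch (ours) of Trick 2: pick a reading for a single digit from its
# neighboring characters.  Spellings follow common loanword conventions
# and are for illustration only.

SINO_KOREAN = {"0": "영", "1": "일", "2": "이", "3": "삼", "4": "사",
               "5": "오", "6": "육", "7": "칠", "8": "팔", "9": "구"}
ENGLISH_KO = {"0": "제로", "1": "원", "2": "투", "3": "스리", "4": "포",
              "5": "파이브", "6": "식스", "7": "세븐", "8": "에이트", "9": "나인"}

def read_digit(digit: str, prev: str = "", nxt: str = "") -> str:
    """English reading when an adjacent character is a Latin letter,
    otherwise the Sino-Korean reading."""
    english_context = (prev.isascii() and prev.isalpha()) or (nxt.isascii() and nxt.isalpha())
    table = ENGLISH_KO if english_context else SINO_KOREAN
    return table[digit]

if __name__ == "__main__":
    print(read_digit("2", prev="G", nxt="P"))    # 'G2P'          -> '투' (English 'two')
    print(read_digit("3", prev="r", nxt="까"))   # 'number 3 까지' -> '스리' (follows English)
    print(read_digit("3", prev="제", nxt="회"))  # '제3회'         -> '삼' (Sino-Korean)
```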
{
"text": "\u2022 Trick 3. On acronyms: Acronyms are easy to detect if written in capital letters, but people do not necessarily follow such the standard. Therefore, we added some tricks for the ones that are not in the dictionary. If they are all composed of consonants or have separate symbols between characters, each consonant is subsequently pronounced in Korean. However, if there is a corresponding English word, the dictionary output, or the result that is yielded by the trained system is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
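A sketch of Trick 3 for out-of-dictionary acronyms: a token with no vowel, or with separators between letters, is spelled out with the Korean renderings of the English letter names. The heuristic and the spellings below are our illustration.

```python
# Sketch (ours) of Trick 3: spell out an acronym letter by letter using
# Korean renderings of the English letter names.

LETTER_KO = {"a": "에이", "b": "비", "c": "시", "d": "디", "e": "이",
             "f": "에프", "g": "지", "h": "에이치", "i": "아이", "j": "제이",
             "k": "케이", "l": "엘", "m": "엠", "n": "엔", "o": "오",
             "p": "피", "q": "큐", "r": "알", "s": "에스", "t": "티",
             "u": "유", "v": "브이", "w": "더블유", "x": "엑스",
             "y": "와이", "z": "제트"}
VOWELS = set("aeiou")

def looks_like_acronym(token: str) -> bool:
    letters = [c for c in token.lower() if c.isalpha()]
    no_vowel = letters and not any(c in VOWELS for c in letters)
    return bool(letters) and (no_vowel or "." in token or "-" in token)

def spell_out(token: str) -> str:
    return "".join(LETTER_KO[c] for c in token.lower() if c.isalpha())

if __name__ == "__main__":
    for t in ["LSTM", "U.N."]:
        if looks_like_acronym(t):
            print(t, "->", spell_out(t))   # LSTM -> 엘에스티엠, U.N. -> 유엔
```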
{
"text": "In the above process, methods such as a recurrent neural network (RNN) or Transformer (Vaswani et al., 2017) may be used for machine learning approach through training. The method using training means to make seq2seq (Sutskever et al., 2014) model with alphabet input and Hangul output using a parallel corpus of English and transliterated English. However, it is not necessary to take a training-based approach to English words that are already in the dictionary. Thus, words in the codebook 3 produce a precise output in the form of look-up tables, and words not in the codebook are predicted by seq2seq models learned through parallel corpus (here it is the same as the codebook). This allows the model to learn pronunciations for words that are not in the dictionary, and possibly for acronyms, as many previous machine learning-based transliteration modules did (Karimi et al., 2011; Finch et al., 2016) . Translating English into Korean first in this way and then applying rule-based G2P allows the modeling of the entire G2P to be more robust to Korean pronunciation rules. We note here that though the codebook we adopt already incorporates a precise transformation of many words (about 37K), we need to train a system that can pronounce words that are not on the list. That is, we need to observe beyond the rules of how the arrangement of English consonants and vowels has determined Korean pronunciation. Once the Hangul characters are padded sub-character-level, or jamolevel, and compared with the English alphabet, the correspondence between the two is not consistent. Beyond the limitation of symbol representation, what makes this more challenging are 1) the different sound produced by the Korean consonants that come to the first and third sound, and 2) the sound change that takes place when the third sound meets the first sound of the next syllable. Moreover, 3) in English, one needs to observe the vowels where the consonant is located around, the vowel placement within the word, and what unique phonetic properties the various bigram/trigram characters have. Therefore, in the implementation of a non-rule-based transliteration system, the seq2seq approach is carried out to character level in English and sub-character-level in Korean. Moreover, in characterizing Korean, the first sound and the third sound, that are similarly the consonant, can be represented distinctly. Using this, with about 37K pairs of English word-Korean pronunciation pairs, we trained the (attention-based) RNN encoder-decoder (Cho et al., 2014; Luong et al., 2015) , under the consideration that the Transformer would be too large-scale for just a word-level transformation. The implementation detail is to be released along with the model and system.",
"cite_spans": [
{
"start": 86,
"end": 108,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 217,
"end": 241,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF18"
},
{
"start": 867,
"end": 888,
"text": "(Karimi et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 889,
"end": 908,
"text": "Finch et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 2530,
"end": 2548,
"text": "(Cho et al., 2014;",
"ref_id": "BIBREF1"
},
{
"start": 2549,
"end": 2568,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "3."
},
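A sketch of the target-side representation this paragraph describes, as we reconstruct it (not the released training code): the Korean side is broken into jamo, and a consonant receives a different token depending on whether it is the first sound (onset) or the third sound (coda), while the English side stays at the character level.

```python
# Sketch (ours): jamo-level target tokenization in which onset and coda
# consonants get distinct symbols ('ㄱ/O' vs 'ㄱ/C'), as the paragraph suggests.

LEADS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
TAILS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def to_target_tokens(word: str):
    """Emit onset/vowel/coda tokens for each Hangul syllable so a seq2seq
    model can tell the two consonant positions apart."""
    tokens = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if not 0 <= code < 19 * 21 * 28:
            tokens.append(ch)          # keep non-Hangul characters as-is
            continue
        lead, rest = divmod(code, 21 * 28)
        vowel, tail = divmod(rest, 28)
        tokens.append(LEADS[lead] + "/O")
        tokens.append(VOWELS[vowel])
        if TAILS[tail]:
            tokens.append(TAILS[tail] + "/C")
    return tokens

if __name__ == "__main__":
    # training pair sketch: character-level English source, jamo-level Korean target
    print(list("sequence"), to_target_tokens("시퀀스"))
```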
{
"text": "The concept of sentence-level code-mixed Korean G2P has been proposed in the previous section, and we aim to implement a fast and accurate code-mixed G2P that can be used for practical speech recognition/synthesis, that integrates other models in use. However, since standard transliteration studies have sought for character/word-level accuracy, mainly in word-level transformations, referring them might not be suitable for direct comparison with this work. Therefore, in this section, we will demonstrate the flexibility and utility of our approach with a concrete example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4."
},
{
"text": "For an efficient construction that divides and conquers the sub-modules, we leveraged various open-source libraries in our implementation. The sub-modules and corresponding libraries are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1."
},
{
"text": "\u2022 mixed g2p: Transforms a code-mixed sentence to the phoneme sequence. Consists of sentranslit and KoG2P/g2pK. Performs sentence-level transliteration of code-mixed sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1."
},
{
"text": "Consists of align particles, trans eojeol (eojeol-level transliteration), trans number, trans hanja, and trans latin. Undertakes transliteration only if the string contains non-Hangul expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1."
},
{
"text": "hgtk 6 : A software that recognizes, decomposes, and reconstructs Hangul/Jamo sequence. Also detects if the string contains Chinese characters or the Latin alphabet. \u2022 trans eojeol: Controls the operation of trans number, trans hanja, and trans latin, given the result of morphological analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1."
},
{
"text": "-MeCab 7 : A statistic model-based Korean morphological analyzer that performs fast and accurate, which was first developed for the analysis of the Japanese language. Here, we utilize pythonmecab 8 for convenience, which is an easily accessible wrapper. \u2022 trans number: Reads the numbers in Chinese style (Korean pronunciation), in English (en-ko transliteration), or in Korean (ordinal, cardinal, or Sino-Korean).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1."
},
{
"text": "-Bases on the characteristics of the context tokens, incorporating various exceptional cases. \u2022 trans hanja: Reads Chinese characters in Korean pronunciation, considering the word-initial rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1."
},
{
"text": "hanja 9 : A library that translates Chinese characters into Korean syllables, also obeying wordinitial rules. This module is utilized in two parts of the system; at the very first of the sentence analysis and again in eojeol-level, to complement the possible fail of Chinese character recognition. \u2022 trans latin: Performs en-ko transliteration, with rule and learning hybrid approach. Figure 2 . Note that the code-mixed expressions in each eojeol are transliterated based on the scheme and tricks in Section 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 394,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1."
},
{
"text": "transliteration 10 : Our utilized dictionary comes from the pre-built dataset 11 of this library, where the results of learning-based en-ko transliteration was previously published. We train a new system based on this, and this module can be replaced with whatever transliteration module that shows sufficient performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.1."
},
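To show how the sub-modules listed in this section might fit together, the sketch below wires trivially simplified stand-ins under the names of Section 4.1. Every body here is our guess for illustration: the released system's interfaces, and the real align particles / trans eojeol logic, will differ, and korean_g2p stands for a rule-based backend such as KoG2P or g2pK.

```python
# Sketch (ours): hypothetical wiring of the Section 4.1 sub-modules.

def contains_non_hangul(eojeol: str) -> bool:
    return any(not (0xAC00 <= ord(c) <= 0xD7A3) for c in eojeol)

def align_particles(eojeol: str):
    # stand-in: treat a trailing run of Hangul syllables as the functional particle
    i = len(eojeol)
    while i > 0 and 0xAC00 <= ord(eojeol[i - 1]) <= 0xD7A3:
        i -= 1
    return eojeol[:i], eojeol[i:]

def trans_eojeol(content: str) -> str:
    demo = {"NeurIPS": "뉴립스", "ICML": "아이시엠엘"}   # illustrative readings only
    return demo.get(content, content)

def sentranslit(sentence: str) -> str:
    out = []
    for eojeol in sentence.split():
        if contains_non_hangul(eojeol):
            content, particle = align_particles(eojeol)
            out.append(trans_eojeol(content) + particle)
        else:
            out.append(eojeol)
    return " ".join(out)

def mixed_g2p(sentence: str, korean_g2p=lambda s: s) -> str:
    """Code-mixed sentence -> all-Hangul sentence -> phoneme sequence."""
    return korean_g2p(sentranslit(sentence))

if __name__ == "__main__":
    # toy rendering of the Figure 3 example sentence
    print(mixed_g2p("올해는 NeurIPS나 ICML에 논문을 내 볼까?"))
```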
{
"text": "Our demonstration with the sample sentence in Figure 2 is suggested in Figure 4 . Since the G2P conversion is straightforwardly performed, we discuss here the sentence-level transliteration process. There are mainly three points that show how our system works. First, regarding Chinese code-mixed expressions, some in sole words and others mixed with Korean functional particles, our module (trans eojeol) detects the terms and translate them into Korean pronunciation via trans hanja, with the help of hanja library. Next, similar is done for English expressions such as code, mix, sequence and rule, possibly utilizing trans latin, where the dictionary and training are engaged in. Lastly, for a challenging term G2P, which may not be in the dictionary (and is at the first place decomposed by the morphological analyzer), the sub-modules above succeed to split them into G, 2, and P, transliterating each of them to Korean pronunciation ci, thu, and phi, given that g and p should be read as a single alphabet (due to being sole consonant) and also 2 is read in English concerning its surroundings. In this way, our module divides and conquers the challenging task and finally yields the desired output.",
"cite_spans": [],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 2",
"ref_id": null
},
{
"start": 71,
"end": 79,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Demonstration",
"sec_num": "4.2."
},
{
"text": "Though our work is mainly on a code-mixed G2P, the recently released library g2pK partially shares some features with ours; various functions are inserted regarding the pronunciation of English terms and numbers in Korean sentences. We concentrate more on reading numbers and acronyms in a code-mixed context, trying to make a rulelearning hybrid approach for en-ko transliteration. On the other hand, in g2pK, such functions are implemented as a utility, while G2P rules are main and quite thoroughly investigated. We claim that both systems are not mutually exclusive, and rather might be complementary to each other. Again, to be specific on the architecture, each of our submodules can be replaced with whatever the user wants as customization, without losing the additional flexibility of the user-generated dictionary. For instance, as suggested in transliteration library, one can define a new word list and accumulate the wanted result to it. Making up a look-up table can sometimes and inevitably be more efficient and accurate. Also, since MeCab was basically proposed for the analysis of the Japanese language, whose syntax a lot resembles Korean, one who wants to implement a similar module for Japanese code-mixed writings may benefit our system. The above factors support the scalability and generalizability of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3."
},
{
"text": "The code-mixed G2P implemented in this paper can be used for both research and industry. First of all, as emphasized, the application onto speech synthesis is very intuitive. Korean corpus has many sentences that consist only of Hangul characters, of course, but there may also be enough code-mixed expressions in modern text, and especially in chat dialogues, which is close to being synthesized. Therefore, if one can take advantage of this system well, it might be possible to promote plausible code-mixed pronunciation without regulating the generation of the script to only one kind of language. Without a doubt, this does not have a conflict with the option of not doing code-switching. That is, merely preserving the pronunciation of the source language is also recommended, if technically available. The multi-stage approach we present can, of course, generate bottlenecks. However, it is expected to have significant advantages over end-to-end learning, in other words, using code-mixed text for training speech synthesis systems. For instance, the English language, once used with Korean notation, hardly reflects the phonetic traits shared with other Korean alphabets. This is primarily because the structure of CV(C) is not clear in the English writing system as in Hangul. Also, since agglutinative language usually displays functional particles after nouns or verbs, a corpus configuration with insufficient English words does not guarantee the performance of end-to-end architecture. It is also challenging to ensure that doing so yields transliterated pronunciations that we pronounce in real life, nor better than the transliteration modules that concentrate on word-level seq2seq. Therefore, we believe that it is practically advantageous to detect non-Korean expressions first and use hybrid transformation with some tricks. The implementation of G2P for speech synthesis is a typical application, but besides, this algorithm can be exploited in sentence correction, corpus refinement, script construction/pronunciation guidelines, and translation service quality improvement. Also, this methodology is expected to apply not only to Chinese characters, English words, and numbers in Korean sentences, but also to sentences and code-mixed expressions in various agglutinative languages, especially the ones that require morphological analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application",
"sec_num": "5."
},
{
"text": "In this paper, we constructed a stable and efficient g2p by presenting a hybrid transliteration method with rule and training for code-mixed Korean sentences. To this end, we detected words containing non-Korean expressions, separated a grammatical part of the word from the rest content via morphological analysis, and replaced the code-mixed expressions with transliterated ones. In this process, by using a statistical model-based morphological analyzer with fairly high performance, we performed non-Korean expression detection that is suitable for colloquial context, with a less computational burden. Also, by separating the grammatical part from the content part in this process, the actual part that needs to be converted in the code-mixed sentence is detected so that the expressions that contain English/Chinese/number can be smoothly converted into Korean pronunciation. Our subsequent studies aim to improve the accuracy of handling proper nouns in code-mixed text pre-processing by collecting more commercial expressions. Also, we plan to verify common pronunciation patterns through media/reallife examples, exploiting the neural network structure with external memory. As a result, research will be carried out to enable controllable to-Korean transliteration by allowing more user-specified stop-words to be reflected in the training of the systems and conversion process itself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "Depending on the configuration and arrangement of Chinese characters, the duration of the syllable may change or a particular consonant may be inserted, but this is a task to handle in G2P after converting to a Hangul once and not a target here. Also, Japanese Kana is seldom used among Korean text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/warnikchow/translit2k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this paper, we interchangeably utilize codebook and dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/scarletcho/KoG2P 5 https://github.com/Kyubyong/g2pK 6 https://github.com/bluedisk/hangul-toolkit 7 https://bitbucket.org/eunjeon/mecab-ko-dic/src/master/ 8 https://github.com/jeongukjae/python-mecab 9 https://github.com/suminb/hanja",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/muik/transliteration 11 https://github.com/muik/transliteration/tree/master/data/source",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": ". Besides, we thank to the three anonymous reviewers for their helpful comments. After all, the authors appreciate all the contributors of the open source libraries which were essential for our project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Speech synthesis for mixed-language navigation instructions",
"authors": [
{
"first": "K",
"middle": [
"R"
],
"last": "Chandu",
"suffix": ""
},
{
"first": "S",
"middle": [
"K"
],
"last": "Rallabandi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "A",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2017,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "57--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandu, K. R., Rallabandi, S. K., Sitaram, S., and Black, A. W. (2017). Speech synthesis for mixed-language navigation instructions. In INTERSPEECH, pages 57- 61.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Cho, K., Van Merri\u00ebnboer, B., Gulcehre, C., Bah- danau, D., Bougares, F., Schwenk, H., and Ben- gio, Y. (2014). Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Investigating an effective character-level embedding in Korean sentence classification",
"authors": [
{
"first": "W",
"middle": [
"I"
],
"last": "Cho",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Kim",
"suffix": ""
},
{
"first": "N",
"middle": [
"S"
],
"last": "Kim",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1905.13656"
]
},
"num": null,
"urls": [],
"raw_text": "Cho, W. I., Kim, S. M., and Kim, N. S. (2019). Investigating an effective character-level embedding in Korean sentence classification. arXiv preprint arXiv:1905.13656.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Korean grapheme-to-phoneme analyzer (kog2p)",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cho, Y. (2017). Korean grapheme-to-phoneme analyzer (kog2p). https://github.com/scarletcho/ KoG2P.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The world's writing systems",
"authors": [
{
"first": "P",
"middle": [
"T"
],
"last": "Daniels",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Bright",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniels, P. T. and Bright, W. (1996). The world's writing systems. Oxford University Press on Demand.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Target-bidirectional neural models for machine transliteration",
"authors": [
{
"first": "A",
"middle": [],
"last": "Finch",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the sixth named entity workshop",
"volume": "",
"issue": "",
"pages": "78--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finch, A., Liu, L., Wang, X., and Sumita, E. (2016). Target-bidirectional neural models for machine translit- eration. In Proceedings of the sixth named entity work- shop, pages 78-82.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic generation of Korean pronunciation variants by multistage applications of phonological rules",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Cha",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hwang",
"suffix": ""
}
],
"year": 1998,
"venue": "Fifth International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeon, J., Cha, S., Chung, M., Park, J., and Hwang, K. (1998). Automatic generation of Korean pronunciation variants by multistage applications of phonological rules. In Fifth International Conference on Spoken Language Processing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "English-to-Korean transliteration using multiple unbounded overlapping phoneme chunks",
"authors": [
{
"first": "I.-H",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "418--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kang, I.-H. and Kim, G. (2000). English-to-Korean transliteration using multiple unbounded overlapping phoneme chunks. In Proceedings of the 18th conference on Computational linguistics-Volume 1, pages 418-424. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Machine transliteration survey",
"authors": [
{
"first": "S",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Scholer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Turpin",
"suffix": ""
}
],
"year": 2011,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "43",
"issue": "3",
"pages": "1--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karimi, S., Scholer, F., and Turpin, A. (2011). Machine transliteration survey. ACM Computing Surveys (CSUR), 43(3):1-46.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Morphemebased grapheme to phoneme conversion using phonetic patterns and morphophonemic connectivity information",
"authors": [
{
"first": "B",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "G",
"middle": [
"G"
],
"last": "Lee",
"suffix": ""
},
{
"first": "J.-H",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "1",
"issue": "1",
"pages": "65--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, B., Lee, G. G., and Lee, J.-H. (2002). Morpheme- based grapheme to phoneme conversion using phonetic patterns and morphophonemic connectivity information. ACM Transactions on Asian Language Information Pro- cessing (TALIP), 1(1):65-82.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The phonological analysis reflected in the Korean writing system",
"authors": [
{
"first": "Y.-K",
"middle": [],
"last": "Kim-Renaud",
"suffix": ""
}
],
"year": 1997,
"venue": "The Korean alphabet: its history and structure",
"volume": "",
"issue": "",
"pages": "161--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim-Renaud, Y.-K. (1997). The phonological analysis re- flected in the Korean writing system. The Korean alpha- bet: its history and structure, pages 161-192.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "M.-T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Luong, M.-T., Pham, H., and Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An English-Korean transliteration model using pronunciation and contextual rules",
"authors": [
{
"first": "J.-H",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "K.-S",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th international conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oh, J.-H. and Choi, K.-S. (2002). An English-Korean transliteration model using pronunciation and contextual rules. In Proceedings of the 19th international confer- ence on Computational linguistics-Volume 1, pages 1-7. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An ensemble of grapheme and phoneme for machine transliteration",
"authors": [
{
"first": "J.-H",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "K.-S",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2005,
"venue": "International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "450--461",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oh, J.-H. and Choi, K.-S. (2005). An ensemble of grapheme and phoneme for machine transliteration. In International Conference on Natural Language Process- ing, pages 450-461. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A comparison of different machine transliteration models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Oh",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Artificial Intelligence Research",
"volume": "27",
"issue": "",
"pages": "119--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oh, J., Choi, K., and Isahara, H. (2006). A comparison of different machine transliteration models. Journal of Artificial Intelligence Research, 27:119-151.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Englishized Korean: Structure, status, and attitudes",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Shim",
"suffix": ""
}
],
"year": 1994,
"venue": "World Englishes",
"volume": "13",
"issue": "2",
"pages": "225--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shim, R. J. (1994). Englishized Korean: Structure, status, and attitudes. World Englishes, 13(2):225-244.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A survey of code-switched speech and language processing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "Chandu",
"suffix": ""
},
{
"first": "S",
"middle": [
"K"
],
"last": "Rallabandi",
"suffix": ""
},
{
"first": "A",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.00784"
]
},
"num": null,
"urls": [],
"raw_text": "Sitaram, S., Chandu, K. R., Rallabandi, S. K., and Black, A. W. (2019). A survey of code-switched speech and language processing. arXiv preprint arXiv:1904.00784.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "I",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104- 3112.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is all you need",
"authors": [
{
"first": "A",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "A",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Infor- mation Processing Systems, pages 5998-6008.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Comparing the Chinese language written with Hanzi (along with pinyin, top) and the Korean language written with Hangul, the featural writing system (along with Yale romanization, bottom).",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "A brief diagram of the proposed code-mixed transliteration system. The translation is: \"Why don't we write a NeurIPS or ICML paper this year?\", and the non-Korean terms NeurIPS and ICML are identified and transformed.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Demonstration for the sample sentence in",
"type_str": "figure",
"num": null,
"uris": null
}
}
}
}