{
"paper_id": "K19-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:06:27.197303Z"
},
"title": "Code-Switched Language Models Using Neural Based Synthetic Data from Parallel Sentences",
"authors": [
{
"first": "Genta",
"middle": [],
"last": "Indra Winata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong University of Science and Technology",
"location": {
"addrLine": "Clear Water Bay",
"settlement": "Hong Kong"
}
},
"email": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong University of Science and Technology",
"location": {
"addrLine": "Clear Water Bay",
"settlement": "Hong Kong"
}
},
"email": "[email protected]"
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong University of Science and Technology",
"location": {
"addrLine": "Clear Water Bay",
"settlement": "Hong Kong"
}
},
"email": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong University of Science and Technology",
"location": {
"addrLine": "Clear Water Bay",
"settlement": "Hong Kong"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Training code-switched language models is difficult due to lack of data and complexity in the grammatical structure. Linguistic constraint theories have been used for decades to generate artificial code-switching sentences to cope with this issue. However, this require external word alignments or constituency parsers that create erroneous results on distant languages. We propose a sequence-to-sequence model using a copy mechanism to generate code-switching data by leveraging parallel monolingual translations from a limited source of code-switching data. The model learns how to combine words from parallel sentences and identifies when to switch one language to the other. Moreover, it captures code-switching constraints by attending and aligning the words in inputs, without requiring any external knowledge. Based on experimental results, the language model trained with the generated sentences achieves state-of-theart performance and improves end-to-end automatic speech recognition.",
"pdf_parse": {
"paper_id": "K19-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "Training code-switched language models is difficult due to lack of data and complexity in the grammatical structure. Linguistic constraint theories have been used for decades to generate artificial code-switching sentences to cope with this issue. However, this require external word alignments or constituency parsers that create erroneous results on distant languages. We propose a sequence-to-sequence model using a copy mechanism to generate code-switching data by leveraging parallel monolingual translations from a limited source of code-switching data. The model learns how to combine words from parallel sentences and identifies when to switch one language to the other. Moreover, it captures code-switching constraints by attending and aligning the words in inputs, without requiring any external knowledge. Based on experimental results, the language model trained with the generated sentences achieves state-of-theart performance and improves end-to-end automatic speech recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Code-switching is a common linguistic phenomenon in multilingual communities, in which a person begins speaking or writing in one language and then switches to another in the same sentence. 1 It is motivated in response to social factors as a way of communicating in a multicultural society. In its practice, code-switching varies due to the traditions, beliefs, and normative values in the respective communities. Linguists have studied the code-switching phenomenon and proposed a number of linguistic theories (Poplack, 1978; Pfaff, 1979; Poplack, 1980; Belazi et al., 1994) . Code-switching is not produced indiscriminately, but follows syntactic constraints. Many linguists have formulated various constraints to define a general rule for code-switching (Poplack, 1978 (Poplack, , 1980 Belazi et al., 1994) . However, these constraints cannot be postulated as a universal rule for all code-switching scenarios, especially for languages that are syntactically divergent (Berk-Seligson, 1986) , such as English and Mandarin since they have word alignments with an inverted order.",
"cite_spans": [
{
"start": 513,
"end": 528,
"text": "(Poplack, 1978;",
"ref_id": "BIBREF17"
},
{
"start": 529,
"end": 541,
"text": "Pfaff, 1979;",
"ref_id": "BIBREF16"
},
{
"start": 542,
"end": 556,
"text": "Poplack, 1980;",
"ref_id": "BIBREF18"
},
{
"start": 557,
"end": 577,
"text": "Belazi et al., 1994)",
"ref_id": "BIBREF4"
},
{
"start": 759,
"end": 773,
"text": "(Poplack, 1978",
"ref_id": "BIBREF17"
},
{
"start": 774,
"end": 790,
"text": "(Poplack, , 1980",
"ref_id": "BIBREF18"
},
{
"start": 791,
"end": 811,
"text": "Belazi et al., 1994)",
"ref_id": "BIBREF4"
},
{
"start": 974,
"end": 995,
"text": "(Berk-Seligson, 1986)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Building a language model (LM) and an automatic speech recognition (ASR) system that can handle intra-sentential code-switching is known to be a difficult research challenge. The main reason lies in the unpredictability of code-switching points in an utterance and data scarcity. Creating a large-scale code-switching dataset is also very expensive. Therefore, code-switching data generation methods to augment existing datasets are a useful workaround.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing methods that apply equivalence constraint theory to generate code-switching sentences (Li and Fung, 2012; Pratapa et al., 2018) may suffer performance issues as they receive erroneous results from the word aligner and the partof-speech (POS) tagger. Thus, this approach is not reliable and effective. Recently, Garg et al. (2018) proposed a SeqGAN-based model to generate code-switching sentences. Indeed, the model learns how to generate new synthetic sentences. However, the distribution of the generated sentences is very different from real code-switching data, which leads to underperforming results.",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "(Li and Fung, 2012;",
"ref_id": "BIBREF9"
},
{
"start": 115,
"end": 136,
"text": "Pratapa et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 320,
"end": 338,
"text": "Garg et al. (2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To overcome the challenges in the existing works, we introduce a neural-based codeswitching data generator model using pointergenerator networks (Pointer-Gen) (See et al., 2017) to learn code-switching constraints from a limited source of code-switching data and leverage their translations in both languages. Intu-itively, the copy mechanism can be formulated as an end-to-end solution to copy words from parallel monolingual sentences by aligning and reordering the word positions to form a grammatical codeswitching sentence. This method solves the two issues in the existing works by removing the dependence on the aligner or tagger, and generating new sentences with a similar distribution to the original dataset. Interestingly, this method can learn the alignment effectively without a word aligner or tagger. As an additional advantage, we demonstrate its interpretability by showing the attention weights learned by the model that represent the code-switching constraints. Our contributions are summarized as follows:",
"cite_spans": [
{
"start": 159,
"end": 177,
"text": "(See et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose a language-agnostic method to generate code-switching sentences using a pointer-generator network (See et al., 2017) that learns when to switch and copy words from parallel sentences, without using external word alignments or constituency parsers. By using the generated data in the language model training, we achieve the state-of-theart performance in perplexity and also improve the end-to-end ASR on an English-Mandarin code-switching dataset.",
"cite_spans": [
{
"start": 111,
"end": 129,
"text": "(See et al., 2017)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We present an implementation applying the equivalence constraint theory to languages that have significantly different grammar structures, such as English and Mandarin, for sentence generation. We also show the effectiveness of our neural-based approach in generating new code-switching sentences compared to the equivalence constraint and Seq-GAN (Garg et al., 2018) .",
"cite_spans": [
{
"start": 350,
"end": 369,
"text": "(Garg et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We thoroughly analyze our generation results and further examine how our model identifies code-switching points to show its interpretability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we describe our proposed model to generate code-switching sentences using a pointer-generator network. Then, we briefly list the assumptions of the equivalence constraint (EC) theory, and explain our application of EC theory for sentence generation. We call the dominant language the matrix language (L 1 ) and the inserted language the embedded language (L 2 ), following the definitions from Myers-Scotton (2001) . Let us define Q = {Q 1 , ..., Q T } as a set of L 1 sentences and E = {E 1 , ..., E T } as a set of L 2 sentences with T number of sentences, where each Q t = {q 1,t , ..., q m,t } and E t = {e 1,t , ..., e n,t } are sentences with m and n words. E is the corresponding parallel sentences of Q.",
"cite_spans": [
{
"start": 411,
"end": 431,
"text": "Myers-Scotton (2001)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Code-Switching Data",
"sec_num": "2"
},
{
"text": "Initially, Pointer-Gen was proposed to learn when to copy words directly from the input to the output in text summarization, and they have since been successfully applied to other natural language processing tasks, such as comment generation . The Pointer-Gen leverages the information from the input to ensure high-quality generation, especially when the output sequence consists of elements from the input sequence, such as code-switching sequences. We propose to use Pointer-Gen by leveraging parallel monolingual sentences to generate codeswitching sentences. The approach is depicted in Figure 1 . The pointer-generator model is trained from concatenated sequences of parallel sentences (Q,E) to generate code-switching sentences, constrained by code-switching texts. The words of the input are fed into the encoder. We use a bidirectional long short-term memory (LSTM), which, produces hidden state h t in each step t. The decoder is a unidirectional LSTM receiving the word embedding of the previous word. For each decoding step, a generation probability p gen \u2208 [0,1] is calculated, which weights the probability of generating words from the vocabulary, and copying words from the source text. p gen is a soft gating probability to decide whether to generate the next token from the decoder or to copy the word from the input instead. The attention distribution a t is a standard attention with general scoring (Luong et al., 2015) . It considers all encoder hidden states to derive the context vector. The vocabulary distribution P voc (w) is calculated by concatenating the decoder state s t and the context vector h * t :",
"cite_spans": [
{
"start": 1419,
"end": 1439,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 592,
"end": 600,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pointer-Gen",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p gen = \u03c3(w T h * h * t + w T s s t + w T x x t + b ptr ),",
"eq_num": "(1)"
}
],
"section": "Pointer-Gen",
"sec_num": "2.1"
},
{
"text": "where w h * , w s , and w x are trainable parameters and b ptr is the scalar bias. The vocabulary distribution P voc (w) and the attention distribution a t are weighted and summed to obtain the final distribution P (w), which is calculated as follows: We use a beam search to select the N -best codeswitching sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer-Gen",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w) = p gen P voc (w) + (1 \u2212 p gen ) i:w i =w a t i .",
"eq_num": "(2"
}
],
"section": "Pointer-Gen",
"sec_num": "2.1"
},
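As a concrete illustration of Eqs. (1)-(2), here is a minimal NumPy sketch of the copy-versus-generate mixing; the function name, variable names, and shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal NumPy sketch of the Pointer-Gen output distribution (Eqs. 1-2).
# All names and shapes are illustrative, not the authors' implementation.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pointer_gen_distribution(h_star, s_t, x_t, attn, src_ids, p_voc,
                             w_h, w_s, w_x, b_ptr):
    """h_star: context vector, s_t: decoder state, x_t: decoder input embedding,
    attn: attention weights over the source tokens, src_ids: vocabulary ids of
    the source tokens, p_voc: softmax distribution over the vocabulary."""
    # Eq. (1): soft gate between generating from the vocabulary and copying.
    p_gen = sigmoid(w_h @ h_star + w_s @ s_t + w_x @ x_t + b_ptr)
    # Eq. (2): weighted sum of the vocabulary and copy (attention) distributions.
    p_final = p_gen * p_voc
    for i, w in enumerate(src_ids):
        p_final[w] += (1.0 - p_gen) * attn[i]
    return p_final
```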
{
"text": "\u8fd9 \u4e2a \u5176 \u5b9e \u662f \u5c5e \u4e8e \u7b80 \u4f53 \u4e2d \u2f42 this \u662f \u5176 \u5b9e belonged to simplified chinese \u8fd9 \u4e2a \u5176 \u5b9e \u662f belonged to \u7b80 \u4f53 \u4e2d \u2f42 \u8fd9 \u4e2a \u5176 \u5b9e \u662f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer-Gen",
"sec_num": "2.1"
},
{
"text": "Studies on the EC (Poplack, 1980 (Poplack, , 2013 show that code-switching only occurs where it does not violate the syntactic rules of either language. An example of a English-Mandarin mixed-language sentence generation is shown in Figure 2 , where EC theory does not allow the word \"\u5176\u5b9e\" to come after \"\u662f\" in Chinese, or the word \"is\" to come after \"actually\". Pratapa et al. (2018) apply the EC in English-Spanish language modeling with a strong assumption. We are working with English and Mandarin, which have distinctive grammar structures (e.g., part-of-speech tags), so applying a constituency parser would give us erroneous results. Thus, we simplify sentences into a linear structure, and we allow lexical substitution on non-crossing alignments between parallel sentences. Alignments between an L 1 sentence Q t and an L 2 sentence E t comprise a source vector with in-",
"cite_spans": [
{
"start": 18,
"end": 32,
"text": "(Poplack, 1980",
"ref_id": "BIBREF18"
},
{
"start": 33,
"end": 49,
"text": "(Poplack, , 2013",
"ref_id": "BIBREF19"
},
{
"start": 362,
"end": 383,
"text": "Pratapa et al. (2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 233,
"end": 241,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "dices u t = {a 1 , a 2 , ..., a m } \u2208 W m that has a cor- responding target vector v t = {b 1 , b 2 , ..., b m } \u2208 W m ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "where u is a sorted vector of indices in an ascending order. The alignment between a i and b i does not satisfy the constraint if there exists a pair of a j and b j , where (a i < a j , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "b i > b j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "or (a i > a j , and b i < b j ). If the switch occurs at this point, it changes the grammatical order in both languages; thus, this switch is not acceptable. During the generation step, we allow any switches that do not violate the constraint. We propose to generate synthetic code-switching data by the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "1. Align the L 1 sentences Q and L 2 sentences E using fast_align 2 (Dyer et al., 2013) . We use the mapping from the L 1 sentences to the L 2 sentences.",
"cite_spans": [
{
"start": 68,
"end": 87,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "2. Permute alignments from step (1) and use them to generate new sequences by replacing the phrase in the L 1 sentence with the aligned phrase in the L 2 sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
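A minimal sketch of the non-crossing alignment check used in step (3) is shown below; the function name and the toy example are hypothetical, introduced only for illustration.

```python
# Sketch of the equivalence-constraint check on word alignments: a pair of
# alignments "crosses" if the source-side order and the target-side order
# disagree, and switches at crossing points are rejected. Illustrative only.
def violates_ec(u, v):
    """u: source-side indices sorted in ascending order,
    v: the corresponding aligned target-side indices."""
    for i in range(len(u)):
        for j in range(i + 1, len(u)):
            if (u[i] < u[j] and v[i] > v[j]) or (u[i] > u[j] and v[i] < v[j]):
                return True
    return False

# Example: alignments 0->2 and 1->1 cross, so a switch between them is rejected.
print(violates_ec([0, 1], [2, 1]))  # True
```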
{
"text": "switching ASR system. The end-to-end ASR model accepts a spectrogram as the input, instead of log-Mel filterbank features (Zhou et al., 2018) , and predicts characters. It consists of N layers of an encoder and decoder. Convolutional layers are added to learn a universal audio representation and generate input embedding. We employ multi-head attention to allow the model to jointly attend to information from different representation subspaces at a different position. For proficiency in recognizing individual languages, we train a multilingual ASR system trained from monolingual speech. The idea is to use it as a pretrained model and transfer the information while training the model with codeswitching speech. This is an effective method to initialize the parameters of low-resource ASR such as code-switching. The catastrophic forgetting issue arises when we train one language after the other. Therefore, we solve the issue by applying a multi-task learning strategy. We jointly train speech from both languages by taking the same number of samples for each language in every batch to keep the information of both tasks.",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "(Zhou et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "In the inference time, we use beam search, selecting the best sub-sequence scored using the softmax probability of the characters. We define P (Y ) as the probability of the sentence. We incorporate language model probability p lm (Y ) to select more natural code-switching sequences from generation candidates. A word count is added to avoid generating very short sentences. P (Y ) is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "P (Y ) = \u03b1P trans (Y |X) + \u03b2p lm (Y ) + \u03b3 wc(Y )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "(3) where \u03b1 is the parameter to control the decoding probability from the probability of characters from the decoder P trans (Y |X), \u03b2 is the parameter to control the language model probability p lm (Y ), and \u03b3 is the parameter to control the effect of the word count wc(Y ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
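For concreteness, here is a small sketch of the hypothesis re-scoring in Eq. (3); the weights and the toy candidates are hypothetical values, not the ones used in the experiments.

```python
# Illustrative re-scoring of beam hypotheses with Eq. (3); alpha, beta, gamma
# and the candidate scores are hypothetical, not the paper's actual values.
def rescore(log_p_trans, log_p_lm, num_words, alpha=1.0, beta=0.3, gamma=0.1):
    # Weighted sum of the decoder score, the LM score, and a word-count bonus
    # that discourages overly short outputs.
    return alpha * log_p_trans + beta * log_p_lm + gamma * num_words

# Hypothetical candidates: (transcript, decoder log-prob, LM log-prob).
candidates = [("then we share the room", -4.2, -6.1), ("then we share", -4.0, -9.5)]
best = max(candidates, key=lambda c: rescore(c[1], c[2], len(c[0].split())))
print(best[0])
```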
{
"text": "from Winata et al. (2018a) . The details are depicted in Table 1 . We tokenize words using the Stanford NLP toolkit (Manning et al., 2014) . For monolingual speech datasets, we use HKUST (Liu et al., 2006) , comprising spontaneous Mandarin Chinese telephone speech recordings, and Common Voice, an open-accented English dataset collected by Mozilla. 3 We split Chinese words into characters to avoid word boundary issues, similarly to Garg et al. (2018) . We generate L 1 sentences and L 2 sentences by translating the training set of SEAME Phase II into English and Chinese using the Google NMT system (To enable reproduction of the results, we release the translated data). 4 Then, we use them to generate 270,531 new pieces of code-switching data, which is thrice the number of the training set. Table 2 shows the statistics of the new generated sentences. To calculate the complexity of our real and generated code-switching corpora, we use the following measures:",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "Winata et al. (2018a)",
"ref_id": "BIBREF24"
},
{
"start": 116,
"end": 138,
"text": "(Manning et al., 2014)",
"ref_id": "BIBREF13"
},
{
"start": 187,
"end": 205,
"text": "(Liu et al., 2006)",
"ref_id": "BIBREF11"
},
{
"start": 350,
"end": 351,
"text": "3",
"ref_id": null
},
{
"start": 435,
"end": 453,
"text": "Garg et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 676,
"end": 677,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 1",
"ref_id": null
},
{
"start": 799,
"end": 806,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "Switch-Point Fraction (SPF)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "This measure calculates the number of switch-points in a sentence divided by the total number of word boundaries (Pratapa et al., 2018) . We define \"switchpoint\" as a point within the sentence at which the languages of words on either side are different.",
"cite_spans": [
{
"start": 113,
"end": 135,
"text": "(Pratapa et al., 2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "Code Mixing Index (CMI) This measure counts the number of switches in a corpus (Gamb\u00e4ck and Das, 2014). At the utterance level, it can be computed by finding the most frequent language in the utterance and then counting the frequency of the words belonging to all other languages present. We compute this metric at the corpus level by averaging the values for all the sentences in a corpus. The computation is shown as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C u (x) = N (x) \u2212 max( i \u2208 {t i (x)}) + P (x) N (x) ,",
"eq_num": "(4)"
}
],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
{
"text": "where N (x) is the number of tokens of utterance x, t i is the tokens in language i , and P (x) is the number of code-switching points in utterance x. We compute this metric at the corpus-level by averaging the values for all sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Equivalence Constraint",
"sec_num": "2.2"
},
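As an illustration, the following sketch computes both complexity measures (SPF and Eq. (4) for CMI) for one utterance given per-token language tags; the tag values are made up, and corpus-level scores are simply averages over utterances.

```python
# Illustrative per-utterance SPF and CMI (Eq. 4) from per-token language tags;
# corpus-level values are obtained by averaging over all utterances.
from collections import Counter

def spf(tags):
    switches = sum(1 for a, b in zip(tags, tags[1:]) if a != b)
    return switches / max(len(tags) - 1, 1)        # switch-points / word boundaries

def cmi(tags):
    n = len(tags)
    max_lang = max(Counter(tags).values())         # tokens of the dominant language
    p = sum(1 for a, b in zip(tags, tags[1:]) if a != b)  # code-switching points
    return (n - max_lang + p) / n                  # Eq. (4)

tags = ["en", "en", "zh", "zh", "en"]              # hypothetical tagged utterance
print(spf(tags), cmi(tags))
```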
{
"text": "We generate code-switching sentences using three methods: EC theory, SeqGAN (Garg et al., 2018) , and Pointer-Gen. To find the best way of leveraging the generated data, we compare different training strategies as follows:",
"cite_spans": [
{
"start": 76,
"end": 95,
"text": "(Garg et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LM Training Strategy Comparison",
"sec_num": "4.2"
},
{
"text": "(1) rCS, (2a) EC, (2b) SeqGAN, (2c) Pointer-Gen, (3a) EC & rCS, (3b) SeqGAN & rCS, (3c) Pointer-Gen & rCS (4a) EC \u2192 rCS (4b) SeqGAN \u2192 rCS, (4c) Pointer-Gen \u2192 rCS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Training Strategy Comparison",
"sec_num": "4.2"
},
{
"text": "(1) is the baseline, training with real codeswitching data. (2a-2c) train with only augmented data. (3a-3c) train with the concatenation of augmented data with rCS. (4a-4c) run a two-step training, first training the model only with augmented data and then fine-tuning with rCS. Our early hypothesis is that the results from (2a) and (2b) will not be as good as the baseline, but when we combine them, they will outperform the baseline. We expect the result of (2c) to be on par with (1), since Pointer-Gen learns patterns from the rCS dataset, and generates sequences with similar code-switching points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LM Training Strategy Comparison",
"sec_num": "4.2"
},
{
"text": "In this section, we present the settings we use to generate code-switching data, and train our language model and end-to-end ASR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.3"
},
{
"text": "The pointer-generator model has 500-dimensional hidden states. We use 50k words as our vocabulary for the source and target. We optimize the training by Stochastic Gradient Descent with an initial learning rate of 1.0 and decay of 0.5. We generate the three best sequences using beam search with five beams, and sample 270,531 sentences, thrice the amount of the code-switched training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer-Gen",
"sec_num": null
},
{
"text": "EC We generate 270,531 sentences, thrice the amount of the code-switched training data. To make a fair comparison, we limit the number of switches to two for each sentence to get a similar number of code-switches (SPF and CMI) to Pointer-Gen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pointer-Gen",
"sec_num": null
},
{
"text": "We implement the SeqGAN model using a PyTorch implementation 5 , and use our best trained LM baseline as the generator in SeqGAN. We sample 270,531 sentences from the generator, thrice the amount of the code-switched training data (with a maximum sentence length of 20).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SeqGAN",
"sec_num": null
},
{
"text": "LM In this work, we focus on sentence generation, so we evaluate our data with the same twolayer LSTM LM for comparison. It is trained using a two-layer LSTM with a hidden size of 200 and unrolled for 35 steps. The embedding size is equal to the LSTM hidden size for weight tying (Press and Wolf, 2017) . We optimize our model using SGD with an initial learning rate of 20. If there is no improvement during the evaluation, we reduce the learning rate by a factor of 0.75. In each step, we apply a dropout to both the embedding layer and recurrent network. The gradient is clipped to a maximum of 0.25. We optimize the validation loss and apply an early stopping procedure after five iterations without any improvements. In the fine-tuning step of training strategies (4a-4c), the initial learning rate is set to 1.",
"cite_spans": [
{
"start": 280,
"end": 302,
"text": "(Press and Wolf, 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SeqGAN",
"sec_num": null
},
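A minimal runnable sketch of this optimization schedule is given below, assuming PyTorch; the tiny linear model and random batches are placeholders standing in for the actual two-layer LSTM LM and its data, and only the schedule (learning rate, decay, clipping, early stopping) follows the description above.

```python
# Sketch of the LM training schedule described above: SGD with lr 20, LR decayed
# by a factor of 0.75 when validation does not improve, gradients clipped to 0.25,
# early stopping after five evaluations without improvement. The tiny model and
# random data below are placeholders, not the paper's two-layer LSTM LM.
import torch
import torch.nn as nn

model = nn.Linear(10, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=20.0)
best_val, bad_evals = float("inf"), 0

for epoch in range(100):
    x, y = torch.randn(32, 10), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(model.parameters(), 0.25)   # gradient clipping
    optimizer.step()
    val_loss = float(loss)                               # placeholder validation loss
    if val_loss < best_val:
        best_val, bad_evals = val_loss, 0
    else:
        bad_evals += 1
        for g in optimizer.param_groups:                 # reduce the learning rate
            g["lr"] *= 0.75
    if bad_evals >= 5:                                   # early stopping
        break
```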
{
"text": "End-to-end ASR We convert the inputs into normalized frame-wise spectrograms from 16-kHz audio. Our transformer model consists of two encoder and decoder layers. An Adam optimizer and Noam warmup are used for training with an initial learning rate of 1e-4. The model has a hidden size of 1024, a key dimension of 64, Table 3 : Results of perplexity (PPL) on a valid set and test set for different training strategies. We report the overall PPL, and code-switching points (en-zh) and (zh-en), as well as the monolingual segments PPL (en-en) and (zh-zh). and a value dimension of 64. The training data are randomly shuffled every epoch. Our character set is the concatenation of English letters, Chinese characters found in the corpus, spaces, and apostrophes. In the multilingual ASR pretraining, we train the model for 18 epochs. Since the sizes of the datasets are different, we over-sample the smaller dataset. The fine-tuning step takes place after the pretraining using code-switching data. In the inference time, we explore the hypothesis using beam search with eight beams and a batch size of 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 317,
"end": 324,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "SeqGAN",
"sec_num": null
},
{
"text": "We employ the following metrics to measure the performance of our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
{
"text": "Token-level Perplexity (PPL) For the LM, we calculate the PPL of characters in Mandarin Chinese and words in English. The reason is that some Chinese words inside the SEAME corpus are not well tokenized, and tokenization results are not consistent. Using characters instead of words in Chinese can alleviate word boundary issues. The PPL is calculated by taking the exponential of the sum of losses. To show the effectiveness of our approach in calculating the probability of the switching, we split the perplexity computation into monolingual segments (en-en) and (zh-zh), and code-switching segments (en-zh) and (zh-en).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
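As an illustration, here is a short sketch of the segment-wise perplexity computation, assuming the summed cross-entropy loss is normalized by the token count; the segment labels follow the text, but the loss values are made up.

```python
# Illustrative token-level perplexity per segment type: exp of the mean
# cross-entropy loss over the tokens in that segment. Loss values are made up.
import math

def perplexity(token_losses):
    return math.exp(sum(token_losses) / len(token_losses))

losses = {"en-en": [2.1, 1.8, 2.0], "zh-zh": [2.5, 2.4],
          "en-zh": [3.2, 3.0], "zh-en": [2.9]}
for segment, vals in losses.items():
    print(segment, round(perplexity(vals), 2))
```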
{
"text": "Character Error Rate (CER) For our ASR, we compute the overall CER and also show the individual CERs for Mandarin Chinese (zh) and English (en). The metric calculates the distance of two sequences as the Levenshtein Distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
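A compact sketch of the character-level Levenshtein distance underlying CER follows (CER = edit distance divided by the reference length); it is an illustration, not the evaluation script used in the experiments.

```python
# Character-level Levenshtein distance and CER (edits / reference length).
# Illustrative sketch, not the evaluation script used in the experiments.
def edit_distance(ref, hyp):
    d = list(range(len(hyp) + 1))          # DP row for the empty reference prefix
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i               # prev holds the old value of d[j-1]
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,            # deletion
                      d[j - 1] + 1,        # insertion
                      prev + (r != h))     # substitution (or match)
            prev, d[j] = d[j], cur
    return d[-1]

def cer(ref, hyp):
    return edit_distance(ref, hyp) / max(len(ref), 1)

print(cer("we share the room", "we chair the room"))
```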
{
"text": "LM In Table 3 , we can see the perplexities of the test set evaluated on different training strategies. Pointer-Gen consistently performs better than state-of-the-art models such as EC and Se-qGAN. Comparing the results of models trained using only generated samples, (2a-2b) Figure 4 : The visualization of pointer-generator attention weights on input words in each time-step during the inference time. The y-axis indicates the generated sequence, and the x-axis indicates the word input. In this figure, we show the code-switching points when our model attends to words in the L 1 and L 2 sentences: left: (\"no\",\"\u6ca1 \u6709\") and (\"then\",\"\u7136\u540e\"), right: (\"we\",\"\u6211\u4eec\"), (\"share\", \"\u4e00\u8d77\") and (\"room\",\"\u623f\u95f4\").",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 13,
"text": "Table 3",
"ref_id": null
},
{
"start": 268,
"end": 275,
"text": "(2a-2b)",
"ref_id": "FIGREF0"
},
{
"start": 276,
"end": 284,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "the undesirable results that are also mentioned by Pratapa et al. (2018) , but it does not apply to Pointer-Gen (2c). We can achieve a similar results with the model trained using only real codeswitching data, rCS. This demonstrates the quality of our data generated using Pointer-Gen. In general, combining any generated samples with real code-switching data improves the language model performance for both code-switching segments and monolingual segments. Applying concatenation is less effective than the two-step training strategy. Moreover, applying the two-step training strategy achieves the state-of-the-art performance.",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "Pratapa et al. (2018)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "As shown in Table 2 , we generate new n-grams including code-switching phrases. This leads us to a more robust model, trained with both generated data and real code-switching data. We can see clearly that Pointer-Gen-generated samples have a distribution more similar to the real codeswitching data compared with SeqGAN, which shows the advantage of our proposed method.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "5"
},
{
"text": "To understand the importance of data size, we train our model with different amounts of generated data. Figure 3 shows the PPL of the models with different amounts of generated data. An interesting finding is that our model trained with only 78K samples of Pointer-Gen data (same number of samples as rCS) achieves a similar PPL to the model trained with only rCS, while SeqGAN and EC have a significantly higher PPL. We can also see that 10K samples of Pointer-Gen data is as good as 270K samples of EC data. In general, the number of samples is positively correlated with the improvement in performance. ",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effect of Data Size",
"sec_num": null
},
{
"text": "We evaluate our proposed sentence generation method on an end-to-end ASR system. Table 4 shows the CER of our ASR systems, as well as the individual CER on each language. Based on the experimental results, pretraining is able to reduce the error rate by 1.64%, as it corrects the spelling mistakes in the prediction. After we add LM (rCS) to the decoding step, the error rate can be reduced to 32.25%. Finally, we replace the LM with LM (Pointer-Gen \u2192 rCS), and it further decreases the error rate by 1.18%.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "ASR Evaluation",
"sec_num": null
},
{
"text": "We can interpret a Pointer-Gen model by extracting its attention matrices and then analyzing the activation scores. We show the visualization of the attention weights in Figure 4 . The square in the heatmap corresponds to the attention score of an input word. In each time-step, the attention scores are used to select words to be generated. As we can observe in the figure, in some cases, our model attends to words that are translations of each other, for example, the words (\"no\",\"\u6ca1\u6709\"), (\"then\",\"\u7136 \u540e\") , (\"we\",\"\u6211 \u4eec\"), (\"share\", \"\u4e00 \u8d77\"), and (\"room\",\"\u623f \u95f4\"). This indicates the model can identify code-switching points, word alignments, and translations without being given any explicit information. : The most common English and Mandarin Chinese part-of-speech tags that trigger code-switching. We report the frequency ratio from Pointer-Gen-generated sentences compared to the real code-switching data. We also provide an example for each POS tag. Table 5 shows the most common English and Mandarin Chinese POS tags that trigger code-switching. The distribution of word triggers in the Pointer-Gen data are similar to the real code-switching data, indicating our model's ability to learn similar code-switching points. Nouns are the most frequent English word triggers. They are used to construct an optimal interaction by using cognate words and to avoid confusion. Also, English adverbs such as \"then\" and \"so\" are phrase or sentence connectors between two language phrases for intra-sentential and intersentential code-switching. On the other hand, Chinese transitional words such as the measure word \"\u4e2a\" or associative word \"\u7684\" are frequently used as inter-lingual word associations.",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 178,
"text": "Figure 4",
"ref_id": null
},
{
"start": 950,
"end": 957,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Model Interpretability",
"sec_num": null
},
{
"text": "Code-switching language modeling research has been focused on building a model that handles mixed-language sentences and on generating synthetic data to solve the data scarcity issue. The first statistical approach using a linguistic theory was introduced by Li and Fung (2012) , who adapted the EC on monolingual sentence pairs during the decoding step of an ASR system. Ying and Fung (2014) implemented a functional-head constraint lattice parser with a weighted finite-state transducer to reduce the search space on a codeswitching ASR system. Then, Adel et al. (2013a) extended recurrent neural networks (RNNs) by adding POS information to the input layer and a factorized output layer with a language identifier. The factorized RNNs were also combined with an n-gram backoff model using linear interpolation (Adel et al., 2013b) , and syntactic and semantic features were added to them (Adel et al., 2015) . Baheti et al. (2017) adapted an effective curriculum learning by training a network with monolingual corpora of two languages, and subsequently trained on code-switched data. A further investigation of EC and curriculum learning showed an improvement in English-Spanish language modeling (Pratapa et al., 2018) , and a multitask learning approach was introduced to train the syntax representation of languages by constraining the language generator (Winata et al., 2018a). Garg et al. (2018) proposed to use SeqGAN (Yu et al., 2017) for generating new mixed-language sequences. Winata et al. (2018b) leveraged character representations to address out-of-vocabulary words in the code-switching named entity recognition. Finally, proposed a method to represent code-switching sentence using language-agnostic meta-representations.",
"cite_spans": [
{
"start": 259,
"end": 277,
"text": "Li and Fung (2012)",
"ref_id": "BIBREF9"
},
{
"start": 372,
"end": 392,
"text": "Ying and Fung (2014)",
"ref_id": "BIBREF26"
},
{
"start": 553,
"end": 572,
"text": "Adel et al. (2013a)",
"ref_id": "BIBREF1"
},
{
"start": 813,
"end": 833,
"text": "(Adel et al., 2013b)",
"ref_id": "BIBREF2"
},
{
"start": 891,
"end": 910,
"text": "(Adel et al., 2015)",
"ref_id": "BIBREF0"
},
{
"start": 913,
"end": 933,
"text": "Baheti et al. (2017)",
"ref_id": "BIBREF3"
},
{
"start": 1201,
"end": 1223,
"text": "(Pratapa et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 1386,
"end": 1404,
"text": "Garg et al. (2018)",
"ref_id": "BIBREF8"
},
{
"start": 1428,
"end": 1445,
"text": "(Yu et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We propose a novel method for generating synthetic code-switching sentences using Pointer-Gen by learning how to copy words from parallel cor-pora. Our model can learn code-switching points by attending to input words and aligning the parallel words, without requiring any word alignments or constituency parsers. More importantly, it can be effectively used for languages that are syntactically different, such as English and Mandarin Chinese. Our language model trained using outperforms equivalence constraint theory-based models. We also show that the learned language model can be used to improve the performance of an endto-end automatic speech recognition system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Code-switching refers to mixing of languages following the definitions inPoplack (1980). We use \"intra-sentential code-switching\" interchangeably with \"code-mixing\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": ". Evaluate generated sequences from step (2) if they satisfy the EC theory.3 End-to-End Code-Switching ASRTo show the effectiveness of our proposed method, we build a transformer-based end-to-end code-2 The code implementation can be found at https://github.com/clab/fast_align.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Experiments4.1 Data PreparationWe use speech data from SEAME Phase II, a conversational English-Mandarin Chinese code-switching speech corpus that consists of spontaneously spoken interviews and conversations (Nanyang Technological University, 2015). We split the corpus following information",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset is available at https://voice.mozilla.org/.4 We have attached the translated data in the Supplementary Materials.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To implement SeqGAN, we use code from https://github.com/suragnair/seqGAN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government, and School of Engineering Ph.D. Fellowship Award, the Hong Kong University of Science and Technology, and RDC 1718050-0 of EMOS.AI. We sincerely thank the three anonymous reviewers for their insightful comments on our paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Syntactic and semantic features for code-switching factored language models",
"authors": [
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Kirchhoff",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Transactions on Audio, Speech, and Language Processing",
"volume": "23",
"issue": "3",
"pages": "431--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heike Adel, Ngoc Thang Vu, Katrin Kirchhoff, Do- minic Telaar, and Tanja Schultz. 2015. Syntactic and semantic features for code-switching factored language models. IEEE Transactions on Audio, Speech, and Language Processing, 23(3):431-440.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Recurrent neural network language modeling for code switching conversational speech",
"authors": [
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Franziska",
"middle": [],
"last": "Kraus",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Schlippe",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2013,
"venue": "Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "8411--8415",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heike Adel, Ngoc Thang Vu, Franziska Kraus, Tim Schlippe, Haizhou Li, and Tanja Schultz. 2013a. Recurrent neural network language modeling for code switching conversational speech. In Acous- tics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8411- 8415. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Combination of recurrent neural networks and factored language models for code-switching language modeling",
"authors": [
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "206--211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heike Adel, Ngoc Thang Vu, and Tanja Schultz. 2013b. Combination of recurrent neural networks and fac- tored language models for code-switching language modeling. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 206-211.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Curriculum design for code-switching: Experiments with language identification and language modeling with deep neural networks",
"authors": [
{
"first": "Ashutosh",
"middle": [],
"last": "Baheti",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ICON",
"volume": "",
"issue": "",
"pages": "65--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashutosh Baheti, Sunayana Sitaram, Monojit Choud- hury, and Kalika Bali. 2017. Curriculum design for code-switching: Experiments with language iden- tification and language modeling with deep neural networks. Proceedings of ICON, pages 65-74.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Code switching and x-bar theory: The functional head constraint. Linguistic inquiry",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hedi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Belazi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Edward",
"suffix": ""
},
{
"first": "Almeida Jacqueline",
"middle": [],
"last": "Rubin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Toribio",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "221--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hedi M Belazi, Edward J Rubin, and Almeida Jacque- line Toribio. 1994. Code switching and x-bar the- ory: The functional head constraint. Linguistic in- quiry, pages 221-237.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Linguistic constraints on intrasentential code-switching: A study of spanish/hebrew bilingualism",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Berk-Seligson",
"suffix": ""
}
],
"year": 1986,
"venue": "Language in society",
"volume": "15",
"issue": "3",
"pages": "313--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan Berk-Seligson. 1986. Linguistic constraints on intrasentential code-switching: A study of span- ish/hebrew bilingualism. Language in society, 15(3):313-348.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A simple, fast, and effective reparameterization of ibm model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameteriza- tion of ibm model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-648. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On measuring the complexity of code-mixing",
"authors": [
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 11th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bj\u00f6rn Gamb\u00e4ck and Amitava Das. 2014. On measuring the complexity of code-mixing. In Proceedings of the 11th International Conference on Natural Lan- guage Processing, Goa, India, pages 1-7. Citeseer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Code-switched language models using dual rnns and same-source pretraining",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Tanmay",
"middle": [],
"last": "Parekh",
"suffix": ""
},
{
"first": "Preethi",
"middle": [],
"last": "Jyothi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3078--3083",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saurabh Garg, Tanmay Parekh, and Preethi Jyothi. 2018. Code-switched language models using dual rnns and same-source pretraining. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 3078-3083.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Code-switch language model with inversion constraints for mixed language speech recognition",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COL-ING 2012",
"volume": "",
"issue": "",
"pages": "1671--1680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Li and Pascale Fung. 2012. Code-switch lan- guage model with inversion constraints for mixed language speech recognition. Proceedings of COL- ING 2012, pages 1671-1680.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning comment generation by leveraging user-generated data",
"authors": [
{
"first": "Zhaojiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Genta Indra Winata",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7225--7229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaojiang Lin, Genta Indra Winata, and Pascale Fung. 2019. Learning comment generation by leveraging user-generated data. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7225-7229. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hkust/mts: A very large scale mandarin telephone speech corpus",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Yongsheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Cieri",
"suffix": ""
},
{
"first": "Shudong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
}
],
"year": 2006,
"venue": "Chinese Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "724--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Liu, Pascale Fung, Yongsheng Yang, Christopher Cieri, Shudong Huang, and David Graff. 2006. Hkust/mts: A very large scale mandarin telephone speech corpus. In Chinese Spoken Language Pro- cessing, pages 724-735. Springer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Stanford CoreNLP natural language processing toolkit",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"J"
],
"last": "Bethard",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mc-Closky",
"suffix": ""
}
],
"year": 2014,
"venue": "Association for Computational Linguistics (ACL) System Demonstrations",
"volume": "",
"issue": "",
"pages": "55--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The matrix language frame model: Development and responses",
"authors": [
{
"first": "Carol",
"middle": [],
"last": "Myers-Scotton",
"suffix": ""
}
],
"year": 2001,
"venue": "Trends in Linguistics Studies and Monographs",
"volume": "126",
"issue": "",
"pages": "23--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carol Myers-Scotton. 2001. The matrix language frame model: Development and responses. Trends in Linguistics Studies and Monographs, 126:23-58.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mandarin-english code-switching in south-east asia ldc2015s04. web download. philadelphia: Linguistic data consortium",
"authors": [],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Universiti Sains Malaysia Nanyang Technological Uni- versity. 2015. Mandarin-english code-switching in south-east asia ldc2015s04. web download. philadel- phia: Linguistic data consortium.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Constraints on language mixing: intrasentential code-switching and borrowing in spanish/english. Language",
"authors": [
{
"first": "W",
"middle": [],
"last": "Carol",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pfaff",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "291--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carol W Pfaff. 1979. Constraints on language mix- ing: intrasentential code-switching and borrowing in spanish/english. Language, pages 291-318.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Syntactic structure and social function of code-switching",
"authors": [
{
"first": "Shana",
"middle": [],
"last": "Poplack",
"suffix": ""
}
],
"year": 1978,
"venue": "Centro de Estudios Puertorrique\u00f1os",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shana Poplack. 1978. Syntactic structure and social function of code-switching, volume 2. Centro de Estudios Puertorrique\u00f1os,[City University of New York].",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Sometimes i'll start a sentence in spanish y termino en espanol: toward a typology of code-switching1",
"authors": [
{
"first": "Shana",
"middle": [],
"last": "Poplack",
"suffix": ""
}
],
"year": 1980,
"venue": "Linguistics",
"volume": "18",
"issue": "7-8",
"pages": "581--618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shana Poplack. 1980. Sometimes i'll start a sentence in spanish y termino en espanol: toward a typology of code-switching1. Linguistics, 18(7-8):581-618.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "sometimes i'll start a sentence in spanish y termino en espa\u00f1ol\": Toward a typology of code-switching",
"authors": [
{
"first": "Shana",
"middle": [],
"last": "Poplack",
"suffix": ""
}
],
"year": 2013,
"venue": "Linguistics",
"volume": "51",
"issue": "",
"pages": "11--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shana Poplack. 2013. \"sometimes i'll start a sentence in spanish y termino en espa\u00f1ol\": Toward a typology of code-switching. Linguistics, 51(Jubilee):11-14.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Language modeling for code-mixing: The role of linguistic theory based synthetic data",
"authors": [
{
"first": "Adithya",
"middle": [],
"last": "Pratapa",
"suffix": ""
},
{
"first": "Gayatri",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Sandipan",
"middle": [],
"last": "Dandapat",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1543--1553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adithya Pratapa, Gayatri Bhat, Monojit Choudhury, Sunayana Sitaram, Sandipan Dandapat, and Kalika Bali. 2018. Language modeling for code-mixing: The role of linguistic theory based synthetic data. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1543-1553.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Using the output embedding to improve language models",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "157--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ofir Press and Lior Wolf. 2017. Using the output em- bedding to improve language models. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers, volume 2, pages 157-163.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1099"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning multilingual meta-embeddings for code-switching named entity recognition",
"authors": [
{
"first": "Genta Indra",
"middle": [],
"last": "Winata",
"suffix": ""
},
{
"first": "Zhaojiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 4th Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "181--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Genta Indra Winata, Zhaojiang Lin, and Pascale Fung. 2019. Learning multilingual meta-embeddings for code-switching named entity recognition. In Pro- ceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 181- 186.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Code-switching language modeling using syntax-aware multi-task learning",
"authors": [
{
"first": "Genta Indra",
"middle": [],
"last": "Winata",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "62--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Genta Indra Winata, Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. 2018a. Code-switching language modeling using syntax-aware multi-task learning. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code- Switching, pages 62-67. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bilingual character representation for efficiently addressing outof-vocabulary words in code-switching named entity recognition",
"authors": [
{
"first": "Genta Indra",
"middle": [],
"last": "Winata",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Madotto",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching",
"volume": "",
"issue": "",
"pages": "110--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Genta Indra Winata, Chien-Sheng Wu, Andrea Madotto, and Pascale Fung. 2018b. Bilingual char- acter representation for efficiently addressing out- of-vocabulary words in code-switching named entity recognition. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code- Switching, pages 110-114.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Language modeling with functional head constraint for code switching speech recognition",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "907--916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LI Ying and Pascale Fung. 2014. Language model- ing with functional head constraint for code switch- ing speech recognition. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 907-916.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Seqgan: Sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Syllable-based sequence-to-sequence speech recognition with the transformer in mandarin chinese",
"authors": [
{
"first": "Shiyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Linhao",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Shuang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiyu Zhou, Linhao Dong, Shuang Xu, and Bo Xu. 2018. Syllable-based sequence-to-sequence speech recognition with the transformer in mandarin chi- nese. In Interspeech.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Example of equivalence constraint(Li and Fung, 2012). Solid lines show the alignment between the matrix language (top) and the embedded language (bottom). The dotted lines denote impermissible switching."
},
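The caption above describes the equivalence constraint in terms of word alignments between the matrix and embedded languages: switching is permissible only at boundaries that no alignment link crosses. A minimal sketch of such a check is below; the function name, alignment format, and toy alignment are illustrative assumptions, not the constraint checker used in the paper.

```python
# Minimal sketch of an equivalence-constraint check over a word-aligned
# parallel sentence pair. The alignment format (matrix index, embedded index)
# and the toy example are assumptions for illustration only.

def permissible_switch_points(alignment, matrix_len):
    """Return boundaries k (switch after matrix word k) that no alignment
    link crosses, i.e., both languages keep their word order intact."""
    points = []
    for k in range(1, matrix_len):
        left = {j for i, j in alignment if i < k}
        right = {j for i, j in alignment if i >= k}
        # A boundary is permissible only if every embedded word aligned to
        # the left segment precedes every embedded word aligned to the right.
        if not left or not right or max(left) < min(right):
            points.append(k)
    return points

# Toy alignment for "i 'm going to check" and its Mandarin translation.
alignment = [(0, 0), (1, 1), (2, 1), (4, 3)]
print(permissible_switch_points(alignment, 5))  # prints [1, 3, 4]
```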
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Results of perplexity (PPL) on different numbers of generated samples. The graph shows that Pointer-Gen attains a close performance to the real training data, and outperforms SeqGAN and EC."
},
"TABREF0": {
"text": "Pointer-Gen model, which includes an RNN encoder and RNN decoder. The parallel sentence is the input of the encoder, and in each decoding step, the decoder generates a new token.",
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"4\">Codeswitching sentence</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"3\">\u6211 \u8981 \u53bb check</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"3\">(I 'm going to check)</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Final</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"3\">Distribution</td></tr><tr><td/><td/><td>\u00d7</td><td>( 1</td><td>\u2212</td><td>p</td><td>g e n</td><td>)</td><td/><td/><td/><td/><td/><td>\u00d7</td><td>p</td><td>g e n</td></tr><tr><td/><td/><td/><td colspan=\"4\">Attention</td><td/><td/><td/><td/><td/><td/><td>Vocabulary</td></tr><tr><td/><td/><td colspan=\"6\">Distribution</td><td/><td/><td/><td/><td/><td>Distribution</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">context vector</td><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>p</td><td>g e n</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Attention</td><td/><td/></tr><tr><td colspan=\"2\">RNN</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>RNN</td></tr><tr><td colspan=\"2\">Encoder</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Decoder</td></tr><tr><td/><td/><td>i</td><td/><td colspan=\"2\">'m</td><td/><td>going</td><td>to</td><td>check</td><td>\u6211</td><td>\u8981</td><td colspan=\"2\">\u53bb</td><td>\u68c0</td><td>\u67e5</td><td>&lt;SOS&gt; \u6211</td><td>\u8981</td><td>\u53bb</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"3\">Parallel sentence</td><td/><td/><td>Decoder input</td></tr><tr><td>Figure 1: this</td><td>is</td><td colspan=\"6\">actually belonged</td><td>to</td><td colspan=\"3\">simplified chinese</td><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>)</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
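The entry above describes how Pointer-Gen mixes a vocabulary (generation) distribution with an attention (copy) distribution through the gate p_gen at each decoding step. A minimal PyTorch sketch of that mixing step follows; tensor names, shapes, and the toy inputs are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the pointer-generator mixing step shown in the diagram:
# final distribution = p_gen * vocabulary distribution
#                      + (1 - p_gen) * attention mass copied onto source tokens.
# Shapes and variable names are illustrative assumptions.
import torch

def final_distribution(vocab_dist, attn_dist, p_gen, src_ids, vocab_size):
    # vocab_dist: (batch, vocab_size); attn_dist: (batch, src_len)
    # p_gen: (batch, 1); src_ids: (batch, src_len), vocabulary ids of source words
    gen_part = p_gen * vocab_dist
    copy_part = torch.zeros(vocab_dist.size(0), vocab_size)
    copy_part = copy_part.scatter_add(1, src_ids, (1.0 - p_gen) * attn_dist)
    return gen_part + copy_part

# Toy usage with random inputs; the result is a proper distribution (sums to 1).
vocab_size, src_len = 32, 10
vocab_dist = torch.softmax(torch.randn(1, vocab_size), dim=-1)
attn_dist = torch.softmax(torch.randn(1, src_len), dim=-1)
p_gen = torch.sigmoid(torch.randn(1, 1))
src_ids = torch.randint(0, vocab_size, (1, src_len))
print(final_distribution(vocab_dist, attn_dist, p_gen, src_ids, vocab_size).sum())
```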
"TABREF2": {
"text": "",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF6": {
"text": "",
"content": "<table><tr><td>: ASR evaluation, showing the performance</td></tr><tr><td>on all sequences (Overall), English segments (en), and</td></tr><tr><td>Mandarin Chinese segments (zh).</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF8": {
"text": "",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}