{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:22.717076Z"
},
"title": "Semi-supervised Acoustic Modelling for Five-lingual Code-switched ASR using Automatically-segmented Soap Opera Speech",
"authors": [
{
"first": "Nick",
"middle": [],
"last": "Wilkinson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stellenbosch University",
"location": {
"country": "South Africa"
}
},
"email": "[email protected]"
},
{
"first": "Astik",
"middle": [],
"last": "Biswas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stellenbosch University",
"location": {
"country": "South Africa"
}
},
"email": "[email protected]"
},
{
"first": "Emre",
"middle": [],
"last": "Y\u0131lmaz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {
"country": "Singapore"
}
},
"email": ""
},
{
"first": "Febe",
"middle": [],
"last": "De Wet",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stellenbosch University",
"location": {
"country": "South Africa"
}
},
"email": ""
},
{
"first": "Ewald",
"middle": [],
"last": "Van Der Westhuizen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stellenbosch University",
"location": {
"country": "South Africa"
}
},
"email": "[email protected]"
},
{
"first": "Thomas",
"middle": [],
"last": "Niesler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stellenbosch University",
"location": {
"country": "South Africa"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper considers the impact of automatic segmentation on the fully-automatic, semi-supervised training of automatic speech recognition (ASR) systems for five-lingual code-switched (CS) speech. Four automatic segmentation techniques were evaluated in terms of the recognition performance of an ASR system trained on the resulting segments in a semi-supervised manner. The systems' output was compared with the recognition rates achieved by a semi-supervised system trained on manually assigned segments. Three of the automatic techniques use a newly proposed convolutional neural network (CNN) model for framewise classification, and include a novel form of HMM smoothing of the CNN outputs. Automatic segmentation was applied in combination with automatic speaker diarization. The best-performing segmentation technique was also tested without speaker diarization. An evaluation based on 248 unsegmented soap opera episodes indicated that voice activity detection (VAD) based on a CNN followed by Gaussian mixture model-hidden Markov model smoothing (CNN-GMM-HMM) yields the best ASR performance. The semi-supervised system trained with the resulting segments achieved an overall WER improvement of 1.1% absolute over the system trained with manually created segments. Furthermore, we found that system performance improved even further when the automatic segmentation was used in conjunction with speaker diarization.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper considers the impact of automatic segmentation on the fully-automatic, semi-supervised training of automatic speech recognition (ASR) systems for five-lingual code-switched (CS) speech. Four automatic segmentation techniques were evaluated in terms of the recognition performance of an ASR system trained on the resulting segments in a semi-supervised manner. The systems' output was compared with the recognition rates achieved by a semi-supervised system trained on manually assigned segments. Three of the automatic techniques use a newly proposed convolutional neural network (CNN) model for framewise classification, and include a novel form of HMM smoothing of the CNN outputs. Automatic segmentation was applied in combination with automatic speaker diarization. The best-performing segmentation technique was also tested without speaker diarization. An evaluation based on 248 unsegmented soap opera episodes indicated that voice activity detection (VAD) based on a CNN followed by Gaussian mixture model-hidden Markov model smoothing (CNN-GMM-HMM) yields the best ASR performance. The semi-supervised system trained with the resulting segments achieved an overall WER improvement of 1.1% absolute over the system trained with manually created segments. Furthermore, we found that system performance improved even further when the automatic segmentation was used in conjunction with speaker diarization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Code-switching is the alternation between two or more languages by a single speaker during discourse, and is a common phenomenon in multilingual societies. In South Africa, for example, 11 official and geographically colocated languages are in use, including English which serves as the lingua franca. Here, speakers frequently codeswitch between English, a highly-resourced language, and their Bantu mother tongue, which is in comparison highly under-resourced. The automatic recognition of code-switched speech has become a topic of growing research interest, as reflected by the increasing number of language pairs that have recently been studied. While English-Mandarin has received extensive attention (Li and Fung, 2013; Zeng et al., 2018; Vu et al., 2012; Taneja et al., 2019) , other language pairs such as Frisian-Dutch (Y\u0131lmaz et al., 2016; Y\u0131lmaz et al., 2018) , Hindi-English (Pandey et al., 2018; Emond et al., 2018; Ganji et al., 2019) , English-Malay (Ahmed and Tan, 2012) , Japanese-English (Nakayama et al., 2018) and French-Arabic (Amazouz et al., 2017) have also attracted interest. We have introduced the first South African corpus of multilingual code-switched soap opera speech in (van der Westhuizen and Niesler, 2018) . For code-switched speech, the development of robust acoustic and language models that are able to extend across language switches is a challenging task. When one or more of the languages are under-resourced, as it is in our case, data sparsity limits modelling capacity and this challenge is amplified. Acoustic data that includes code-switching is extremely hard to find, because it usually does not occur in formal conversation, such as broadcast news, and also because it requires skilled multilingual language practitioners for its annotation. The result is that manually-prepared datasets including code-switched speech in Africa are destined to remain rare and small.",
"cite_spans": [
{
"start": 707,
"end": 726,
"text": "(Li and Fung, 2013;",
"ref_id": "BIBREF17"
},
{
"start": 727,
"end": 745,
"text": "Zeng et al., 2018;",
"ref_id": "BIBREF35"
},
{
"start": 746,
"end": 762,
"text": "Vu et al., 2012;",
"ref_id": null
},
{
"start": 763,
"end": 783,
"text": "Taneja et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 829,
"end": 850,
"text": "(Y\u0131lmaz et al., 2016;",
"ref_id": "BIBREF33"
},
{
"start": 851,
"end": 871,
"text": "Y\u0131lmaz et al., 2018)",
"ref_id": "BIBREF34"
},
{
"start": 888,
"end": 909,
"text": "(Pandey et al., 2018;",
"ref_id": "BIBREF20"
},
{
"start": 910,
"end": 929,
"text": "Emond et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 930,
"end": 949,
"text": "Ganji et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 966,
"end": 987,
"text": "(Ahmed and Tan, 2012)",
"ref_id": "BIBREF0"
},
{
"start": 1007,
"end": 1030,
"text": "(Nakayama et al., 2018)",
"ref_id": "BIBREF19"
},
{
"start": 1049,
"end": 1071,
"text": "(Amazouz et al., 2017)",
"ref_id": "BIBREF1"
},
{
"start": 1203,
"end": 1241,
"text": "(van der Westhuizen and Niesler, 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In previous work, we have demonstrated that multilingual training using in-domain soap opera code-switched speech and poorly matched monolingual South African speech improves the performance of both bilingual and five-lingual automatic speech recognition (ASR) systems when the additional training data is from a closely-related language (Biswas et al., 2018a; Biswas et al., 2018b) . Specifically, isiZulu, isiXhosa, Sesotho and Setswana belong to the same Bantu language family and were found to complement each other when combined into a multilingual training set for acoustic modelling. Hence, increasing the amount of in-domain code-switched speech data is a reliable way to achieve more robust ASR. However, the development of such in-domain data is a time-consuming and costly endeavour as it requires highly skilled human annotators and transcribers.",
"cite_spans": [
{
"start": 338,
"end": 360,
"text": "(Biswas et al., 2018a;",
"ref_id": "BIBREF3"
},
{
"start": 361,
"end": 382,
"text": "Biswas et al., 2018b)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To address this lack of annotated data, automatically transcribed training material has been shown to be useful in under-resourced scenarios using semi-supervised training (Thomas et al., 2013; Y\u0131lmaz et al., 2018; Guo et al., 2018) . This strategy was successfully implemented on South African code-switched speech to obtain bilingual and five-lingual ASR systems using 11.5 hours of manually segmented but untranscribed soap opera speech (Biswas et al., 2019) . Recently a study has analyzed the performance of batch-wise semi-supervised training on South African code-switched ASR (Biswas et al., 2020) . However, manual segmentation of the raw soap opera audio by skilled annotators was still required to identify the speech that is useful for ASR. Therefore, this approach is not fully automatic which remains an impediment in resource-scare settings.",
"cite_spans": [
{
"start": 172,
"end": 193,
"text": "(Thomas et al., 2013;",
"ref_id": "BIBREF28"
},
{
"start": 194,
"end": 214,
"text": "Y\u0131lmaz et al., 2018;",
"ref_id": "BIBREF34"
},
{
"start": 215,
"end": 232,
"text": "Guo et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 440,
"end": 461,
"text": "(Biswas et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 584,
"end": 605,
"text": "(Biswas et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this study, we apply four automated approaches to the segmentation of soap opera speech and investigate the effect on ASR performance. A conventional energy-based voiced activity detector (VAD) (Povey et al., 2011) , as well as CNN-HHM and CNN-GMM-HMM systems that we have developed are used to distinguish between speech, music and noise. In addition, an X-vector DNN embedding system is used for speaker diarization to obtain speaker specific metadata for three of the segmentation approaches (Snyder et al., 2018) . For the experiments, 248 complete soap opera episodes, each approximately 22 minutes in length, were used. It is important to note that we also have the manual segmentation (approximately 24 hours of speech) of these 248 episodes and can therefore perform a comparative evaluation with the automated approaches. Semi-supervised systems trained using the manually-segmented speech were used as baselines and compared with systems trained on speech identified by the automatic approaches. Pseudo-labels or transcriptions of automatically segmented speech were generated using our best baseline systems trained on 21 hours manually transcribed speech and 11 hours of manually segmented but automatically transcribed speech. Given the multilingual nature of the data, the transcription systems must not only provide the orthography, but also the language(s) present at each location in each segment. To achieve this, each segment was presented to four individual code-switching systems as well as to a fivelingual system.",
"cite_spans": [
{
"start": 197,
"end": 217,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 498,
"end": 519,
"text": "(Snyder et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "For experimentation, we use a corpus of multilingual, code-switched speech compiled from South African soap opera episodes. This corpus contains both manually and automatically-annotated speech divided into four language pairs: English-isiZulu (EZ), English-isiXhosa (EX), English-Setswana (ET), and English-Sesotho (ES). Of the Bantu languages, isiZulu and isiXhosa belong to the Nguni language family while Setswana and Sesotho are Sotho-Tswana languages. The corpus contains 8 275, 11 352, 6 169, 1 902 and 2 792 unique English, isiZulu, isiXhosa, Setswana and Sesotho words, respectively. IsiZulu and isiXhosa have relatively large vocabularies due their agglutinative nature and conjunctive writing system. Although Setswana and Sesotho are also agglutinative, they use disjunctive writing systems which result in smaller vocabularies than isiZulu and isiXhosa. The speech in the soap opera episodes is also typically fast and often expresses emotion. These aspects of the data in combination with the high prevalence of codeswitching makes it a challenging corpus for conducting ASR experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2."
},
{
"text": "Our first code-switching ASR systems were developed and evaluated on 14.3 hours of speech divided into four language-balanced sets, as described in (van der Westhuizen and Niesler, 2018). In addition to the languagebalanced sets, approximately another nine hours of manually transcribed speech was available. This additional data is dominated by English and was initially excluded from our training set to avoid bias. However, pilot experiments indicated that, counter to expectations, its inclusion enhanced recognition performance in all languages. The additional data was therefore merged with the balanced sets for the experiments described here. Of this, 21.1 hours is used as a training set, 48 minutes as a development set, and 1.3 hours as a test set. The composition of the unbalanced training set is shown in Table 1 . During corpus development, approximately 11 hours of manually segmented speech (representing 127 different speakers) was produced in addition to the manually transcribed data described in the previous section. Segmentation was performed manually by experienced language practitioners. This dataset (AutoT Exp ) was automatically transcribed during our initial investigations into semisupervised acoustic model training, resulting in 7 951 EZ, 3 796 EX, 11 415 ES and 128 ET segments (Biswas et al., 2019) .",
"cite_spans": [
{
"start": 1312,
"end": 1333,
"text": "(Biswas et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 819,
"end": 826,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Manually Segmented and Transcribed Data (ManT)",
"sec_num": "2.1."
},
{
"text": "Language Mono (m) CS (m) Total (h) Total (%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manually Segmented and Transcribed Data (ManT)",
"sec_num": "2.1."
},
{
"text": "A subsequent phase of corpus development, currently still underway, has produced manual segmentations for a further 248 soap opera episodes. These 248 episodes amount to 89 hours of audio data before segmentation, and 23 hours of speech data (AutoT NonE ) after segmentation. The segmentation was not performed by language experts and is therefore expected to be less accurate than that of the AutoT Exp data. Furthermore South African languages other than the five present in the transcribed data are known to occur in this batch, but to a limited extent. This set of 248 episodes was used in the automatic segmentation experiments described in the next section because the manually assigned segment labels were available as a reference in the form of AutoT NonE .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manually Segmented Automatically",
"sec_num": "2.3."
},
{
"text": "A number of automatic segmentation techniques were considered as alternatives to the labour-intensive process of manually segmenting the soap operas. Different voice activity detection (VAD) approaches were combined with the X-vector DNN embedding-based speaker diarization system introduced in (Snyder et al., 2018) to obtain speaker labels. In subsequent ASR experiments, the best performing VAD technique was also evaluated without speaker diarization.",
"cite_spans": [
{
"start": 295,
"end": 316,
"text": "(Snyder et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Segmentation",
"sec_num": "3."
},
{
"text": "In our first experiment, the X-vector diarization recipe provided in the Kaldi toolkit was applied using an Xvector DNN model pre-trained on wide-band VoxCeleb data (Povey et al., 2011; Nagrani et al., 2017; Chung et al., 2018) . This system uses 24-dimensional filterbank features based on 25ms frames. Speech frames are identified using a simple energy threshold and are subsequently passed to the pre-trained DNN which extracts the X-vectors. Finally, probabilistic linear discriminant analysis (PLDA) is applied to the X-vectors, and agglomerative hierarchical clustering is used to assign speaker labels. A difficulty observed when using this approach was that, while a simple energy VAD works reasonably well under low noise conditions where most frames are speech, it performs poorly when confronted with our soap opera data in which extensive non-speech segments containing music and other sounds are common. Post-diarization listening tests revealed that many non-speech segments were still present in the data classified as speech. Adjustment of the VAD threshold to more aggressively remove non-speech segments resulted in the loss of many speech segments.",
"cite_spans": [
{
"start": 165,
"end": 185,
"text": "(Povey et al., 2011;",
"ref_id": "BIBREF21"
},
{
"start": 186,
"end": 207,
"text": "Nagrani et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 208,
"end": 227,
"text": "Chung et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "VAD 1 : Energy-based",
"sec_num": "3.1."
},
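The energy-threshold step that this pipeline relies on is simple enough to sketch. The following is a minimal illustrative implementation in numpy, not the actual Kaldi energy VAD; the frame length, hop and threshold values are assumptions chosen for illustration only.

```python
import numpy as np

def energy_vad(signal, sample_rate=16000, frame_ms=25, hop_ms=10, threshold_db=-40.0):
    """Label each frame as speech (True) or non-speech (False) by thresholding log energy."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop_len = int(sample_rate * hop_ms / 1000)
    n_frames = max(0, 1 + (len(signal) - frame_len) // hop_len)
    decisions = np.zeros(n_frames, dtype=bool)
    for i in range(n_frames):
        frame = signal[i * hop_len:i * hop_len + frame_len].astype(float)
        energy_db = 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)  # small constant avoids log(0)
        decisions[i] = energy_db > threshold_db
    return decisions
```

As the listening tests above suggest, a single global threshold struggles when loud music or noise dominates, which motivates the CNN-based detectors described in the following sections.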
{
"text": "At the time of writing, the X-vector based system achieved state-of-the-art performance in diarization tasks. However, the energy based VAD it uses limits performance. For this reason, efforts to improve automatic segmentation focused on developing improved VAD. Recently, CNNs have been successfully applied to the task of VAD (Thomas et al., 2014) , and both large and small architectures have been found to perform well (Sehgal and Kehtarnavaz, 2018; Hershey et al., 2017) . In our resource-constrained setting, computational efficiency is important since VAD will most likely occur on a mobile device. We introduce a small CNN architecture (\u2248120 000 parameters) implemented in Python, using Tensorflow (v2.0.0) and Keras (v2.2.4-tf), to create a fast, lightweight VAD system whose architecture is shown in Table 3 . This system computes 32-dimensional log-mel filterbank energies using a frame length of 10ms and then stacks these over 320ms to form 32x32 spectrogram features as input to the CNN. The CNN was trained on the balanced subset (\u2248 53 hours) of Audio Set (Gemmeke et al., 2017) to classify frames as containing \"speech\" and/or \"non-speech\". While our CNN on its own performs well at the VAD task, it fails to capture temporal patterns in the data and was observed to often mislabel single frames within extended sections of speech or non-speech. In an initial attempt to address this, we introduce a HMM for smoothing. The AVA-Speech dataset (Chaudhuri et al., 2018) was used to train our HMMs, and for testing of the final VAD. The full dataset contains \u2248 46 hours of densely labeled, multilingual movie data, with the following class labels: \"NoSpeech\",\"CleanSpeech\", \"Speech+Music\" and \"Speech+Noise\". AVA-Speech (train), a randomly selected \u2248 23 hour subset of the dataset, was used to train the HMM. Table 4 provides a description of the dataset used for training and testing of the automatic segmentation systems. For training the \"CleanSpeech\", \"Speech+Music\" and \"Speech+Noise\" classes were treated as a single \"speech\" class. A two state HMM was defined, with states representing ground truth \"speech\" and \"no-speech\" labels respectively. The HMM observations are the binary output of the \"speech\" neuron in the CNN, which indicates \"speech\" or \"no-speech\". Note, the \"no-speech\" label differs slightly from the \"non-speech\" label, since the \"speech\" and \"nonspeech\" sounds can co-occur, whereas \"no-speech\" implies \"speech\" does not occur in the signal. Transition and emission probabilities were trained in a supervised manner, by passing AVA-Speech (train) though the CNN, then using the labels predicted by the CNN and corresponding ground truth labels as observations and hid- den state sequences respectively. Viterbi decoding was then used to find the most likely underlying label sequence, given CNN predicted labels. Finally the VAD segments are used as input to the X-vector diarization system.",
"cite_spans": [
{
"start": 328,
"end": 349,
"text": "(Thomas et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 423,
"end": 453,
"text": "(Sehgal and Kehtarnavaz, 2018;",
"ref_id": "BIBREF24"
},
{
"start": 454,
"end": 475,
"text": "Hershey et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 1071,
"end": 1093,
"text": "(Gemmeke et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 1458,
"end": 1482,
"text": "(Chaudhuri et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 810,
"end": 817,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1821,
"end": 1828,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "VAD 2 : CNN-HMM",
"sec_num": "3.2."
},
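The supervised two-state HMM smoothing described above can be sketched compactly. This is a minimal numpy illustration, not the authors' implementation: it assumes framewise ground-truth labels and binary CNN decisions are available as integer arrays (0 = no-speech, 1 = speech), and the additive smoothing constant is an illustrative choice.

```python
import numpy as np

def train_hmm_counts(hidden, observed, n_states=2, n_obs=2, eps=1e-6):
    """Supervised HMM estimation: count transitions between ground-truth states and
    emissions of CNN labels given the true state, then normalise into probabilities."""
    A = np.full((n_states, n_states), eps)  # transition counts
    B = np.full((n_states, n_obs), eps)     # emission counts
    for t in range(1, len(hidden)):
        A[hidden[t - 1], hidden[t]] += 1
    for s, o in zip(hidden, observed):
        B[s, o] += 1
    return A / A.sum(axis=1, keepdims=True), B / B.sum(axis=1, keepdims=True)

def viterbi(obs, A, B, prior):
    """Most likely underlying label sequence given the CNN-predicted labels (log domain)."""
    logA, logB = np.log(A), np.log(B)
    delta = np.log(prior) + logB[:, obs[0]]
    back = np.zeros((len(obs), len(prior)), dtype=int)
    for t in range(1, len(obs)):
        scores = delta[:, None] + logA        # scores[i, j]: best path ending in state i, moving to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(len(obs) - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

A uniform prior such as `[0.5, 0.5]` is a reasonable starting point when the true initial-state distribution is unknown.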
{
"text": "While the CNN-HMM approach yields a large improvement over the energy-based VAD, it may be possible to improve it further by making use of the CNN soft label outputs, rather than the hard labels obtained by taking the argmax of the CNN outputs. In this case the HMM observation sequence is chosen to consist of the output probabilities computed by the CNN speech neuron, rather than the binary labels. Where the observations were previously modelled as repeated Bernoulli trials, they are now continuous and can therefore be modelled by a more complex distribution function. A 3-mixture GMM for each of the two HMM states was found to be an effective choice. Fewer mixtures led to deteriorated performance, while more mixtures did not result in further improvement. As before, the GMM-HMM is trained on AVA-Speech (train) in a supervised manner and the resulting segments used as input for the X-vector diarization system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VAD 3 : CNN-GMM-HMM",
"sec_num": "3.3."
},
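For this continuous variant, the Bernoulli emissions are replaced by a per-state GMM over the CNN speech-neuron output. Below is a minimal sketch using scikit-learn's GaussianMixture as an assumed stand-in for whatever GMM implementation was actually used; the resulting per-frame log-likelihoods take the place of the discrete emission terms in the Viterbi sketch shown earlier.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_state_gmms(cnn_probs, hidden, n_mix=3):
    """Fit a 3-mixture GMM per HMM state (0 = no-speech, 1 = speech) to the CNN
    speech-neuron outputs observed while the ground truth was in that state."""
    return [GaussianMixture(n_components=n_mix, random_state=0)
            .fit(cnn_probs[hidden == s].reshape(-1, 1)) for s in (0, 1)]

def emission_logliks(cnn_probs, gmms):
    """Per-frame log-likelihood of the continuous observation under each state's GMM;
    shape (n_frames, 2), used in place of log B[:, obs[t]] during Viterbi decoding."""
    return np.stack([g.score_samples(cnn_probs.reshape(-1, 1)) for g in gmms], axis=1)
```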
{
"text": "While the X-vector diarization system is useful for obtaining speaker labels for each segment, it is computationally expensive and represents only a pre-processing step for downstream ASR. To determine its importance, our final experiment used the segments produced by our best performing VAD system directly, without diarization. Hence each segment was treated as being from a different speaker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VAD 4 : VAD 3 Without Speaker Diarization",
"sec_num": "3.4."
},
{
"text": "Recent studies demonstrated that semi-supervised training can improve the performance of Frisian-Dutch codeswitched ASR (Y\u0131lmaz et al., 2018) as well as South African code-switched ASR (Biswas et al., 2019) . The approach taken in this study is illustrated in Figure 1 . The figure shows the two phases of semi-supervised training for the parallel bilingual as well as five-lingual configurations: automatic transcription followed by bilingual semisupervised acoustic model retraining. The five-lingual system was not retrained with the automatically transcribed data for this set of experiments as our primary motive was to study the effect of automatic segmented speech on bilingual semi-supervised ASR.",
"cite_spans": [
{
"start": 120,
"end": 141,
"text": "(Y\u0131lmaz et al., 2018)",
"ref_id": "BIBREF34"
},
{
"start": 185,
"end": 206,
"text": "(Biswas et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 260,
"end": 268,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Automatic Transcription",
"sec_num": "4."
},
{
"text": "This system (System A) consists of four subsystems, each corresponding to a language pair for which code-switching occurs. Acoustic models were trained on the manually segmented and transcribed soap opera data (ManT, described in Section 2.1) pooled with the manually segmented but automatically transcribed speech (AutoT Exp , introduced in Section 2.2). Because the languages spoken in the untranscribed data were unknown, each segment was decoded in parallel by each of the bilingual decoders. The output with the highest confidence score provided both the transcription and a language pair label for each segment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Bilingual Transcription",
"sec_num": "4.1."
},
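The confidence-based selection across the four bilingual decoders can be sketched as follows. The `decoders` mapping and its decode functions are hypothetical stand-ins for the actual Kaldi bilingual systems; each is assumed to return a hypothesis string and a confidence score for a segment.

```python
def transcribe_segment(segment, decoders):
    """Decode one segment with every bilingual system (e.g. keys "EZ", "EX", "ES", "ET")
    and keep the output of the decoder with the highest confidence score, which also
    provides the language-pair label for the segment."""
    best_pair, best_hyp, best_conf = None, None, float("-inf")
    for pair, decode in decoders.items():
        hyp, conf = decode(segment)
        if conf > best_conf:
            best_pair, best_hyp, best_conf = pair, hyp, conf
    return best_pair, best_hyp
```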
{
"text": "Some of our previous experiments indicated that the automatic transcriptions generated by the five-lingual baseline model enhanced the performance of the bilingual semisupervised systems (Biswas et al., 2019) . Five-lingual transcriptions were therefore also included in this study. The five-lingual system (System H) is based on a single acoustic model trained on all five languages. It was trained on the same data as the bilingual systems, except for the fact that the AutoT Exp data was transcribed using a five-lingual baseline model. Since the five-lingual system is not restricted to bilingual output, its output allows Bantu-to-Bantu language switching. Examples of such switches were indeed observed in the transcriptions. Moreover, the automatically generated transcriptions sometimes contained more than two languages. Although the use of more than two languages within a single segment is not common, we have observed such cases during the compilation of the manually transcribed dataset. For our fast, continuous speech, the automatically generated segments have been observed to produce longer segments than manual segmentation of the data. This increases the likelihood of multiple language switches within the segment. Unfortunately, since the automatic segments are generated from untranscribed data, the degree to which multiple languages occur within a single automatic segment is difficult to quantify.",
"cite_spans": [
{
"start": 187,
"end": 208,
"text": "(Biswas et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Five-lingual Transcription",
"sec_num": "4.2."
},
{
"text": "All acoustic models were trained using the Kaldi ASR toolkit (Povey et al., 2011) and the data described in Section 2. The models were trained on a multilingual dataset that included all the data in Table 1 . In addition, three-fold data augmentation (Ko et al., 2015) was applied prior to feature extraction. The feature set included standard 40dimensional MFCCs (no derivatives), 3-dimensional pitch and 100 dimensional i-vectors. The models were trained with lattice free MMI (Povey et al., 2016) using the standard Kaldi CNN-TDNN-F (Povey et al., 2018) Librispeech recipe (6 CNN layers and 10 timedelay layers followed by a rank reduction layer) and the default hyperparameters. All acoustic models consist of a single shared softmax layer for all languages, as in general there is more than one target language in a segment. No phone merging was performed between languages and the acoustic models were all language dependent. For the bilingual experiments, the multilingual acoustic models were adapted to each of the four target language pairs. ",
"cite_spans": [
{
"start": 61,
"end": 81,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF21"
},
{
"start": 251,
"end": 268,
"text": "(Ko et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 479,
"end": 499,
"text": "(Povey et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 536,
"end": 556,
"text": "(Povey et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Acoustic Modelling",
"sec_num": "5.1."
},
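As a rough illustration of the per-frame input described above (40-dimensional MFCCs, 3-dimensional pitch and a 100-dimensional i-vector, 143 dimensions in total), the sketch below simply concatenates the three streams. In practice Kaldi's nnet3 setup handles the i-vector as a separate online input rather than tiling it by hand, so this is a conceptual sketch only.

```python
import numpy as np

def assemble_frame_inputs(mfcc, pitch, ivector):
    """Concatenate 40 MFCCs and 3 pitch features per frame with a single 100-dim
    utterance i-vector repeated for every frame, giving shape (n_frames, 143)."""
    n_frames = mfcc.shape[0]
    assert mfcc.shape == (n_frames, 40) and pitch.shape == (n_frames, 3)
    assert ivector.shape == (100,)
    return np.hstack([mfcc, pitch, np.tile(ivector, (n_frames, 1))])
```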
{
"text": "The EZ, EX, ES, ET vocabularies contained 11 292, 8 805, 4 233, 4 957 word types respectively and were closed with respect to the training, development and test sets. The vocabularies were closed since the small datasets and the agglutinative character of the Bantu languages would otherwise lead to very high out-of-vocabulary rates. The SRILM toolkit (Stolcke, 2002) was used to train and evaluate all language models (LMs).",
"cite_spans": [
{
"start": 353,
"end": 368,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modelling",
"sec_num": "5.2."
},
{
"text": "Transcriptions of the balanced subset of the ManT dataset as well as monolingual English and Bantu out-of-domain text were used to develop trigram language models. Four bilingual and one five-lingual trigram language model were used for the transcription systems as well as for semisupervised training (Y\u0131lmaz et al., 2018; Biswas et al., 2019) . Table 5 summarises the development and test set perplexities for the bilingual LMs. Details on the monolingual and code-switch perplexities are only provided for the test set (columns 3 to 6 in Table 5 ). The test set perplexities of the five-lingual LM are 1007.1, 1881.8, 345.3, and 277.5 for EZ, EX, ES and ET respectively. Further details regarding the five-lingual perplexities can be found in (Biswas et al., 2019) . Much more monolingual English text was available for language model development than text in the Bantu languages (471M vs 8M words). Therefore, the monolingual perplexity (MPP) is much higher for the Bantu languages than for English for each language pair. Code-switch perplexities (CPP) for language switches indicate the uncertainty of the first word following a language switch. EB corresponds to switches from English to a Bantu language and BE indicates a switch in the other direction. Table 5 shows that the CPP for switching from English to isiZulu and isiXhosa is much higher than switching from these languages to English. This can be ascribed to the much larger isiZulu and isiXhosa vocabularies, which are, in turn, due to the high degree of agglutination and the use of conjunctive orthography in these languages. The CPP for switching from English to Sesotho and Setswana is found to be lower than switching from those languages to English. We believe that this difference is due to the much larger English training set. The CPP values are even higher for the five-lingual language model. This is because the five-lingual trigrams allow language switches not permitted by the bilingual models.",
"cite_spans": [
{
"start": 302,
"end": 323,
"text": "(Y\u0131lmaz et al., 2018;",
"ref_id": "BIBREF34"
},
{
"start": 324,
"end": 344,
"text": "Biswas et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 746,
"end": 767,
"text": "(Biswas et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 541,
"end": 548,
"text": "Table 5",
"ref_id": "TABREF8"
},
{
"start": 1262,
"end": 1269,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Language Modelling",
"sec_num": "5.2."
},
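The monolingual (MPP) and code-switch (CPP) perplexities discussed above can be illustrated with a short sketch, assuming per-word base-10 log probabilities from the language model and a per-word language tag are available; the function names and tag values are illustrative.

```python
def perplexity(log10_probs):
    """Perplexity over a chosen set of word positions, given their base-10 log probabilities."""
    avg = sum(log10_probs) / len(log10_probs)
    return 10.0 ** (-avg)

def switch_positions(langs, from_lang, to_lang):
    """Indices of the first word immediately after a switch from from_lang to to_lang."""
    return [i for i in range(1, len(langs))
            if langs[i - 1] == from_lang and langs[i] == to_lang]

# Example: CPP for English-to-isiZulu switches, with langs like ["E", "E", "Z", ...]
# cpp_ez = perplexity([word_log10_probs[i] for i in switch_positions(langs, "E", "Z")])
```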
{
"text": "For semi-supervised ASR, lattice-based supervision was combined with the lattice-free MMI objective function (Manohar et al., 2018; Carmantini et al., 2019) . Conventionally, semi-supervised training only considers the best path while lattice-based supervision uses the entire decoding lattice. Hence, the latter approach allows the model to learn from alternative hypotheses when the best path is not accurate. Table 6 gives an overview of the bilingual ASR systems that were trained using the manually segmented data (System B) as well as five different versions of the automatically segmented data (Systems C-G & I). In addition to manuallytranscribed speech, ManT, the AutoT Exp data was also included in all the training sets. Also defined in Table 6 are systems A and H, which are the bilingual and five-lingual baseline systems respectively, trained only on the ManT and AutoT Exp data. These baseline systems were used to obtain automatic transcriptions, AutoT A and AutoT H , for each version of the additional data shown in Table 6 . These automatic transcriptions were subsequently used to train new acoustic models. VAD 2Sub was included to enable a fair comparison between automatic and manual segmentation. This is a 21-hour, randomly selected subset of the VAD 2 data which is comparable in size to the manually-segmented dataset (AutoT NonE ). ",
"cite_spans": [
{
"start": 109,
"end": 131,
"text": "(Manohar et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 132,
"end": 156,
"text": "Carmantini et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 748,
"end": 755,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 1034,
"end": 1041,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Semi-supervised Training",
"sec_num": "6."
},
{
"text": "The next three subsections concern results of the systems described in Sections 3., 4. and 5. Finally, ASR results are presented for specific languages, as well as at codeswitching points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results & Discussion",
"sec_num": "7."
},
{
"text": "AVA-Speech (test), which is the \u2248 23 hour subset of AVA-Speech not used for HMM training, was used as a test set to evaluate VAD performance. This dataset provides similar conditions to our target domain of soap opera data, as well as dense voice activity labels. Furthermore, it is accompanied by baseline results for the WebRTC project VAD (We-bRTC.org, 2011) as well as two CNN-based systems based on the architecture proposed in (Hershey et al., 2017) . The smaller of these two CNN-based systems, tiny320, is similar in size to our CNN, also containing three convolutional layers, while the other, resnet960, is based on the much larger ResNet-50 architecture (He et al., 2016) . Frame-based true positive rates (TPR) for a fixed false positive rate (FPR), scored over 10ms frames are shown for all VAD systems in Table 7 . To allow comparison, all VAD systems were tuned to achieve a FPR of 0.315, as described in (Chaudhuri et al., 2018) . TPR is reported for each individual speech condition (clean speech, speech with noise and speech with music) as well as for all conditions combined.",
"cite_spans": [
{
"start": 433,
"end": 455,
"text": "(Hershey et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 665,
"end": 682,
"text": "(He et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 920,
"end": 944,
"text": "(Chaudhuri et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 819,
"end": 826,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Automatic Segmentation",
"sec_num": "7.1."
},
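The frame-level operating point reported in Table 7 (TPR at a fixed FPR of 0.315) can be read off from per-frame speech scores and ground-truth labels as sketched below; this mirrors the evaluation protocol only in outline, and the variable names are illustrative.

```python
import numpy as np

def tpr_at_fpr(scores, labels, target_fpr=0.315):
    """Sweep the decision threshold over per-frame speech scores (higher = more speech-like)
    and return the true positive rate at the threshold whose false positive rate is
    closest to the requested target."""
    scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
    order = np.argsort(-scores)             # descending score = progressively lower threshold
    sorted_labels = labels[order]
    tp = np.cumsum(sorted_labels)
    fp = np.cumsum(~sorted_labels)
    tpr = tp / max(sorted_labels.sum(), 1)
    fpr = fp / max((~sorted_labels).sum(), 1)
    return float(tpr[np.argmin(np.abs(fpr - target_fpr))])
```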
{
"text": "As expected, VAD 1 performs poorly. However, it is interesting to note that it is the only system that performs better for the \"Noise\" and \"Music\" conditions than for the \"Clean\" condition. This is because noisy signals tend to have more energy than their clean counterparts, making noisy signals more likely to exceed an energy threshold. A large performance improvement is seen for VAD 2 which uses the CNN-HMM. In particular, this system already outperforms tiny320. A smaller performance increase is re- ported for VAD 3 which uses the CNN-GMM-HMM. However, this increase brings its performance to a level comparable to the much larger resnet960 system. In the case of \"Music\", VAD 3 outperforms resnet960, whilst for \"All\" the TPR of VAD 3 is within 1% absolute. In terms of computational complexity, the energy VAD is about 30 times faster than the CNN based VADs. However, the speaker diarization system is two orders of magnitude slower than the slowest VAD, making the compute times of VAD 1 , VAD 2 and VAD 3 are roughly equivalent. VAD 4 , which removes the speaker diarization, is much faster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Segmentation",
"sec_num": "7.1."
},
{
"text": "The automatic transcription outputs of the bilingual (System A) and five-lingual (System H) baseline systems are summarised in Table 8 . The first five rows of the table correspond to segments that were classified as monolingual while the last row shows the number of segments that contain code-switching. The values in this row reveal a high number of code-switched segments in the additional data.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 8",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Automatic Transcription",
"sec_num": "7.2."
},
{
"text": "In terms of the number of segments per category, the output of the automatic segmentation systems agree with the manual segmentation process. The only exception is the number of English segments identified by the five-lingual system, which is higher than for the other systems. We believe that this is because the five-lingual language model was trained on more in-domain English text (Biswas et al., 2019) . The table also shows that including speaker diarization in the segmentation process produces smaller chunks of words than using only the VAD. Due to the varying duration of each set, comparisons are difficult to make. For this reason, the set VAD 2Sub is included, which is of a similar duration to AutoT NonE , allowing comparison between the nonexpert manual segmentation and automatic segmentation. It English 8 570 12 155 7 608 4 721 11 686 4 754 23 973 IsiZulu 5 955 4 084 3 583 2 065 7 995 2 122 7 315 IsiXhosa 302 154 116 57 443 236 831 Sesotho 1 317 2 267 1 695 759 3 457 719 1 can be seen that for the same duration of data, the automatic segmentation produces fewer segments.",
"cite_spans": [
{
"start": 385,
"end": 406,
"text": "(Biswas et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 814,
"end": 1025,
"text": "English 8 570 12 155 7 608 4 721 11 686 4 754 23 973 IsiZulu 5 955 4 084 3 583 2 065 7 995 2 122 7 315 IsiXhosa 302 154 116 57 443 236 831 Sesotho 1 317 2 267 1 695 759 3 457 719 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Automatic Transcription",
"sec_num": "7.2."
},
{
"text": "The performance of the ASR systems introduced in Table 6 were measured in terms of the word error rate (WER) achieved after semi-supervised training. Results for the different training configurations are reported in Table 9 . The values in the table indicate that including the additional 23 hours of non-expert manually segmented data (AutoT NonE ) in the training set (System B) yields absolute improvements of 1.5% and 1.4% over the baseline (System A) for the development and test sets respectively. The results for System C show that using 83 hours of automatically-segmented speech results in an absolute improvement of 1.7% and 1.6% for the development and test sets relative to System A. Although System C's performance is on par with that of System B, its training set was much larger which means that the computational cost of developing the system is also much higher. According to Table 6 , VAD 2 reduced the additional training data from 83.6 to 47 hours. Furthermore, including this reduced additional data in System D's training set resulted in lower WERs for both the development and test sets, when compared with Systems A, B and C. The semi-supervised system trained on the 21-hour subset of VAD 2 (System E) achieved results that are comparable to those of System B. The two additional dataset seem to have had almost the same impact on the accuracy of the resulting acoustic models. This result seems to indicate that, in terms of ASR performance, manually and automaticallyproduced segmentations are equally well suited for system development. However, it should be kept in mind that the segment labels used by System B were not assigned by experts. System F, trained on the segments generated by VAD 3 , yielded better performance than system D, despite the fact that System D's training set contained 10 more hours of data. The improvement in WER was found to be statistically significant at the 95% confidence level using bootstrap interval estimation (Bisani and Ney, 2004) .",
"cite_spans": [
{
"start": 1976,
"end": 1998,
"text": "(Bisani and Ney, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 216,
"end": 223,
"text": "Table 9",
"ref_id": "TABREF16"
},
{
"start": 893,
"end": 900,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "7.3."
},
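The significance tests referred to above rely on bootstrap interval estimation over recognition output (Bisani and Ney, 2004). A minimal sketch of the idea for comparing two systems, assuming per-utterance word-error counts and reference lengths are available; the exact resampling procedure used by the authors may differ.

```python
import numpy as np

def bootstrap_wer_difference(errors_a, errors_b, ref_lengths, n_boot=10000, seed=0):
    """Resample utterances with replacement, recompute the WER difference on every
    replicate and return a 95% interval for the difference (system A minus system B)."""
    rng = np.random.default_rng(seed)
    errors_a, errors_b = np.asarray(errors_a), np.asarray(errors_b)
    ref_lengths = np.asarray(ref_lengths)
    n = len(ref_lengths)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        diffs[b] = (errors_a[idx].sum() - errors_b[idx].sum()) / ref_lengths[idx].sum()
    return np.percentile(diffs, [2.5, 97.5])
```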
{
"text": "The results for System G show that automatic segments that do not take speaker identity into account (VAD 4 ) do not achieve the accuracy levels as those that do (System F). Therefore, the inclusion of speaker diarization does tend to improve ASR performance. The performance of System H (five-lingual baseline system) is included in Table 9 but should not be directly com- pared with the bilingual systems because the recognition task is inherently more complex. However, as has been observed before (Biswas et al., 2019) , the bilingual System I, trained on automatic transcriptions generated by System H, shows the best overall performance of all the evaluated systems. The improvement on the test set over its closest competitor (System F) is 0.5% absolute and this was found to be statistically significant above the 90% confidence level using bootstrap interval estimation. This improvement may be due to the ability of the five-lingual system to transcribe more than two languages, as well as Bantu-to-Bantu switches. The untranscribed soap opera speech is known to contain at least some segments that do not conform to the four considered bilingual language groupings. The degree to which such language switches do occur is unfortunately difficult to quantify without manual transcriptions. However, since the key difference between the bilingual and five-lingual systems is the ability to handle a greater variety of language switches, we speculate that this is a likely cause for the superior performance of System I. The improvement in WER achieved by the semi-supervised ASR systems incorporating different versions of the additional data is summarised in Figure 2 . The figure confirms that the largest gain in recognition accuracy was achieved by System I. It also affirms the observation that an equal amount of manually and automatically segmented Table 10 : Language specific WER (%) (lowest is best) for English (E), isiZulu (Z), isiXhosa (X), Sesotho (S), Setswana (T) and code-switched bigram correct (Bi CS ) (%) (highest is best) for the test set.",
"cite_spans": [
{
"start": 501,
"end": 522,
"text": "(Biswas et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 334,
"end": 341,
"text": "Table 9",
"ref_id": "TABREF16"
},
{
"start": 1668,
"end": 1676,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "7.3."
},
{
"text": "data yields an equal improvement in recognition accuracy in a semi-supervised set-up.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "7.3."
},
{
"text": "For code-switched ASR, the performance of the recogniser at code-switch points is of particular interest. Language specific WERs and code-switched bigram correct (Bi CS ) values for the different semi-supervised systems are presented in Table 10 . Code-switch bigram correct is defined as the percentage of words correctly recognised immediately after code-switch points. All values are percentages. The table reveals that both the English and Bantu WERs for all the semi-supervised systems are substantially lower than the corresponding values for the baseline system. The accuracy at the code-switch points is also substantially higher for the semi-supervised systems. Hence, adding the additional training data enhances system performance at codeswitch points. Moreover, there are no substantial differences between the gains achieved by adding the manually (System B) or automatically (Systems F, I) segmented data.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 245,
"text": "Table 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Language Specific WER Analysis",
"sec_num": "7.4."
},
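The code-switched bigram correct measure defined above can be sketched as follows, assuming the reference and hypothesis have already been word-aligned and each reference word carries a language tag; handling of insertions and deletions in the alignment is omitted for brevity.

```python
def bigram_correct_cs(ref_words, hyp_words, ref_langs):
    """Percentage of reference words immediately after a language switch that the
    aligned hypothesis recognised correctly."""
    switch_idx = [i for i in range(1, len(ref_words)) if ref_langs[i] != ref_langs[i - 1]]
    if not switch_idx:
        return 0.0
    correct = sum(ref_words[i] == hyp_words[i] for i in switch_idx)
    return 100.0 * correct / len(switch_idx)
```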
{
"text": "In this study, we have evaluated the impact of using automatically-segmented instead of manually-segmented speech data for semi-supervised training of a code-switched automatic speech recognition system. Four different automatic segmentation approaches were evaluated, based respectively on simple energy thresholding with diarization, a CNN classification with two variants of HMM smoothing and diarization, and CNN classification with GMM-HMM smoothing and no diarization. It was found that applying our new CNN-GMM-HMM based VAD followed by Xvector speaker diarization resulted in the best ASR performance. The results also showed that the performance of systems that used automatically and manually-segmented data were comparable. We conclude that automaticsegmentation in combination with semi-supervised training is a viable approach to enhancing the recognition accuracy of a challenging five-language code-switched speech recognition task. This is a very positive outcome, since the difficulty in providing a manual segmentation of new broadcast material has remained an impediment to the development of speech technology in severely under resourced settings such as the one we describe. Future work will focus on improving the VAD and speaker diarization techniques as well as incorporating language identification into the automatic segmentation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8."
}
],
"back_matter": [
{
"text": "We would like to thank the Department of Arts & Culture (DAC) of the South African government for funding this research. We are grateful to e.tv and Yula Quinn at Rhythm City, as well as the SABC and Human Stark at Generations: The Legacy, for assistance with data compilation. We also gratefully acknowledge the support of the NVIDIA corporation for the donation of GPU equipment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "9."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic speech recognition of code switching speech using 1-best rescoring",
"authors": [
{
"first": "B",
"middle": [
"H"
],
"last": "Ahmed",
"suffix": ""
},
{
"first": "T.-P",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. IALP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed, B. H. and Tan, T.-P. (2012). Automatic speech recognition of code switching speech using 1-best rescoring. In Proc. IALP, Hanoi, Vietnam.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Addressing code-switching in French/Algerian Arabic speech",
"authors": [
{
"first": "D",
"middle": [],
"last": "Amazouz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Adda-Decker",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lamel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amazouz, D., Adda-Decker, M., and Lamel, L. (2017). Addressing code-switching in French/Algerian Arabic speech. In Proc. Interspeech, Stockhom, Sweden.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bootstrap estimates for confidence intervals in ASR performance evaluation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bisani",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bisani, M. and Ney, H. (2004). Bootstrap estimates for confidence intervals in ASR performance evaluation. In Proc. ICASSP, Montreal, Canada.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multilingual neural network acoustic modelling for ASR of under-resourced English-isiZulu code-switched speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Biswas",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "De Wet",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Van Der Westhuizen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Y\u0131lmaz",
"suffix": ""
},
{
"first": "T",
"middle": [
"R"
],
"last": "Niesler",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Interspeech, Hyderabad",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biswas, A., de Wet, F., van der Westhuizen, E., Y\u0131lmaz, E., and Niesler, T. R. (2018a). Multilingual neural network acoustic modelling for ASR of under-resourced English- isiZulu code-switched speech. In Proc. Interspeech, Hy- derabad, India.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving ASR for codeswitched speech in under-resourced languages using outof-domain data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Biswas",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Van Der Westhuizen",
"suffix": ""
},
{
"first": "T",
"middle": [
"R"
],
"last": "Niesler",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "De Wet",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. SLTU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biswas, A., van der Westhuizen, E., Niesler, T. R., and de Wet, F. (2018b). Improving ASR for code- switched speech in under-resourced languages using out- of-domain data. In Proc. SLTU, Gurugram, India.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semi-supervised acoustic model training for five-lingual code-switched ASR",
"authors": [
{
"first": "A",
"middle": [],
"last": "Biswas",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Y\u0131lmaz",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "De Wet",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Van Der Westhuizen",
"suffix": ""
},
{
"first": "T",
"middle": [
"R"
],
"last": "Niesler",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biswas, A., Y\u0131lmaz, E., de Wet, F., van der Westhuizen, E., and Niesler, T. R. (2019). Semi-supervised acoustic model training for five-lingual code-switched ASR. In Proc. Interspeech, Graz, Austria.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semi-supervised development of ASR systems for multilingual code-switched speech in under-resourced languages",
"authors": [
{
"first": "A",
"middle": [],
"last": "Biswas",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Y\u0131lmaz",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "De Wet",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Van Der Westhuizen",
"suffix": ""
},
{
"first": "T",
"middle": [
"R"
],
"last": "Niesler",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biswas, A., Y\u0131lmaz, E., de Wet, F., van der Westhuizen, E., and Niesler, T. R. (2020). Semi-supervised development of ASR systems for multilingual code-switched speech in under-resourced languages. In Proc. LREC, Marseille, France.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Untranscribed web audio for low resource speech recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carmantini",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carmantini, A., Bell, P., and Renals, S. (2019). Untran- scribed web audio for low resource speech recognition. In Proc. Interspeech, Graz, Austria.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "AVA-speech: A densely labeled dataset of speech activity in movies",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "A",
"middle": [
"C"
],
"last": "Gallagher",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Kaver",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Marvin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pantofaru",
"suffix": ""
},
{
"first": "N",
"middle": [
"C"
],
"last": "Reale",
"suffix": ""
},
{
"first": "L",
"middle": [
"G"
],
"last": "Reid",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaudhuri, S., Roth, J., Ellis, D., Gallagher, A. C., Kaver, L., Marvin, R., Pantofaru, C., Reale, N. C., Reid, L. G., Wilson, K., and Xi, Z. (2018). AVA-speech: A densely labeled dataset of speech activity in movies. In Proc. In- terspeech, Hyderabad, India.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Vox-celeb2: Deep speaker recognition",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Chung",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nagrani",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung, J. S., Nagrani, A., and Zisserman, A. (2018). Vox- celeb2: Deep speaker recognition. In Proc. Interspeech, Hyderabad, India.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Transliteration based approaches to improve code-switched speech recognition performance",
"authors": [
{
"first": "J",
"middle": [],
"last": "Emond",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ramabhadran",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Moreno",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. SLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emond, J., Ramabhadran, B., Roark, B., Moreno, P., and Ma, M. (2018). Transliteration based approaches to im- prove code-switched speech recognition performance. In Proc. SLT, Athens, Greece.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "IITG-HingCoS corpus: A Hinglish code-switching database for automatic speech recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ganji",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Dhawan",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sinha",
"suffix": ""
}
],
"year": 2019,
"venue": "Speech Communication",
"volume": "110",
"issue": "",
"pages": "76--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganji, S., Dhawan, K., and Sinha, R. (2019). IITG- HingCoS corpus: A Hinglish code-switching database for automatic speech recognition. Speech Communica- tion, 110:76-89.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Audio set: An ontology and human-labeled dataset for audio events",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Gemmeke",
"suffix": ""
},
{
"first": "D",
"middle": [
"P W"
],
"last": "Ellis",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Freedman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Plakal",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ritter",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gemmeke, J. F., Ellis, D. P. W., Freedman, D., Jansen, A., Lawrence, W., Moore, R. C., Plakal, M., and Ritter, M. (2017). Audio set: An ontology and human-labeled dataset for audio events. In Proc. ICASSP, New Orleans, USA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Study of semi-supervised approaches to improving English-Mandarin code-switching speech recognition",
"authors": [
{
"first": "P",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "E",
"middle": [
"S"
],
"last": "Chng",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guo, P., Xu, H., Xie, L., and Chng, E. S. (2018). Study of semi-supervised approaches to improving English- Mandarin code-switching speech recognition. In Proc. Interspeech, Hyderabad, India.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. CVPR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep resid- ual learning for image recognition. In Proc. CVPR, Las Vegas, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "CNN architectures for large-scale audio classification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Hershey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "D",
"middle": [
"P W"
],
"last": "Ellis",
"suffix": ""
},
{
"first": "J",
"middle": [
"F"
],
"last": "Gemmeke",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Plakal",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Platt",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Saurous",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Seybold",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Slaney",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hershey, S., Chaudhuri, S., Ellis, D. P. W., Gemmeke, J. F., Jansen, A., Moore, R. C., Plakal, M., Platt, D., Saurous, R. A., Seybold, B., Slaney, M., Weiss, R. J., and Wil- son, K. (2017). CNN architectures for large-scale audio classification. In Proc. ICASSP, New Orleans, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Audio augmentation for speech recognition",
"authors": [
{
"first": "T",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ko, T., Peddinti, V., Povey, D., and Khudanpur, S. (2015). Audio augmentation for speech recognition. In Proc. In- terspeech, Dresden, Germany.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Improved mixed language speech recognition using asymmetric acoustic model and language model with code-switch inversion constraints",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Y. and Fung, P. (2013). Improved mixed language speech recognition using asymmetric acoustic model and language model with code-switch inversion constraints. In Proc. ICASSP, Vancouver, Canada.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semi-supervised training of acoustic models using lattice-free MMI",
"authors": [
{
"first": "V",
"middle": [],
"last": "Manohar",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hadian",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manohar, V., Hadian, H., Povey, D., and Khudanpur, S. (2018). Semi-supervised training of acoustic models us- ing lattice-free MMI. In Proc. ICASSP, Calgary, Canada. Nagrani, A., Chung, J. S., and Zisserman, A. (2017). Vox- celeb: a large-scale speaker identification dataset. In Proc. Interspeech, Stockholm, Sweden.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Speech chain for semi-supervised learning of Japanese-English code-switching ASR and TTS",
"authors": [
{
"first": "S",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tjandra",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sakti",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. SLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nakayama, S., Tjandra, A., Sakti, S., and Nakamura, S. (2018). Speech chain for semi-supervised learning of Japanese-English code-switching ASR and TTS. In Proc. SLT, Athens, Greece.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Phonetically balanced code-mixed speech corpus for Hindi-English automatic speech recognition",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "B",
"middle": [
"M L"
],
"last": "Srivastava",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "B",
"middle": [
"T"
],
"last": "Nellore",
"suffix": ""
},
{
"first": "K",
"middle": [
"S"
],
"last": "Teja",
"suffix": ""
},
{
"first": "S",
"middle": [
"V"
],
"last": "Gangashetty",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pandey, A., Srivastava, B. M. L., Kumar, R., Nellore, B. T., Teja, K. S., and Gangashetty, S. V. (2018). Phonetically balanced code-mixed speech corpus for Hindi-English automatic speech recognition. In Proc. LREC, Miyazaki, Japan.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schwarz",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. ASRU",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glem- bek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., et al. (2011). The Kaldi speech recogni- tion toolkit. In Proc. ASRU, Hawaii.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Purely sequence-trained neural networks for ASR based on lattice-free MMI",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Galvez",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Ghahremani",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Manohar",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Na",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Peddinti, V., Galvez, D., Ghahremani, P., Manohar, V., Na, X., Wang, Y., and Khudanpur, S. (2016). Purely sequence-trained neural networks for ASR based on lattice-free MMI. In Proc. Interspeech, San Francisco, USA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semi-orthogonal low-rank matrix factorization for deep neural networks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yarmohammadi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Cheng, G., Wang, Y., Li, K., Xu, H., Yarmoham- madi, M., and Khudanpur, S. (2018). Semi-orthogonal low-rank matrix factorization for deep neural networks. In Proc. Interspeech, Hyderabad, India.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A convolutional neural network smartphone app for real-time voice activity detection",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sehgal",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Kehtarnavaz",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Access",
"volume": "6",
"issue": "",
"pages": "9017--9026",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sehgal, A. and Kehtarnavaz, N. (2018). A convolutional neural network smartphone app for real-time voice ac- tivity detection. IEEE Access, 6:9017-9026.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "X-vectors: Robust DNN embeddings for speaker recognition",
"authors": [
{
"first": "D",
"middle": [],
"last": "Snyder",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Garcia-Romero",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sell",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Snyder, D., Garcia-Romero, D., Sell, G., Povey, D., and Khudanpur, S. (2018). X-vectors: Robust DNN embed- dings for speaker recognition. In Proc. ICASSP, Calgary, Canada.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "SRILM -An extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, A. (2002). SRILM -An extensible language mod- eling toolkit. In Proc. ICSLP, Denver, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Exploiting monolingual speech corpora for code-mixed speech recognition",
"authors": [
{
"first": "K",
"middle": [],
"last": "Taneja",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Guha",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Jyothi",
"suffix": ""
},
{
"first": "Abraham",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taneja, K., Guha, S., Jyothi, P., and Abraham, B. (2019). Exploiting monolingual speech corpora for code-mixed speech recognition. In Proc. Interspeech, Graz, Austria.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Deep neural network features and semisupervised training for low resource speech recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Seltzer",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hermansky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas, S., Seltzer, M. L., Church, K., and Hermansky, H. (2013). Deep neural network features and semi- supervised training for low resource speech recognition. In Proc. ICASSP, Vancouver, Canada.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Analyzing convolutional neural networks for speech activity detection in mismatched acoustic conditions",
"authors": [
{
"first": "S",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ganapathy",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Saon",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Soltau",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas, S., Ganapathy, S., Saon, G., and Soltau, H. (2014). Analyzing convolutional neural networks for speech activity detection in mismatched acoustic condi- tions. In Proc. ICASSP, Florence, Italy. van der Westhuizen, E. and Niesler, T. R. (2018). A first South African corpus of multilingual code-switched soap opera speech. In Proc. LREC, Miyazaki, Japan.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A first speech recognition system for Mandarin-English code-switch conversational speech",
"authors": [],
"year": null,
"venue": "Proc. ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A first speech recognition system for Mandarin-English code-switch conversational speech. In Proc. ICASSP, Kyoto, Japan.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The WebRTC project",
"authors": [
{
"first": "",
"middle": [],
"last": "WebRTC.org",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "WebRTC.org. (2011). The WebRTC project. [Online]. Available: https://webrtc.org.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Investigating bilingual deep neural networks for automatic recognition of code-switching Frisian speech",
"authors": [
{
"first": "E",
"middle": [],
"last": "Y\u0131lmaz",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Van Den Heuvel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Van Leeuwen",
"suffix": ""
}
],
"year": 2016,
"venue": "Procedia Computer Science",
"volume": "81",
"issue": "",
"pages": "159--166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y\u0131lmaz, E., van den Heuvel, H., and van Leeuwen, D. (2016). Investigating bilingual deep neural networks for automatic recognition of code-switching Frisian speech. Procedia Computer Science, 81:159-166.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Semi-supervised acoustic model training for speech with code-switching",
"authors": [
{
"first": "E",
"middle": [],
"last": "Y\u0131lmaz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mclaren",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Van Den Heuvel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Van Leeuwen",
"suffix": ""
}
],
"year": 2018,
"venue": "Speech Communication",
"volume": "105",
"issue": "",
"pages": "12--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y\u0131lmaz, E., McLaren, M., van den Heuvel, H., and van Leeuwen, D. (2018). Semi-supervised acoustic model training for speech with code-switching. Speech Com- munication, 105:12-22.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "On the end-to-end solution to Mandarin-English code-switching speech recognition",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Khassanov",
"suffix": ""
},
{
"first": "V",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "E",
"middle": [
"S"
],
"last": "Chng",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00241"
]
},
"num": null,
"urls": [],
"raw_text": "Zeng, Z., Khassanov, Y., Pham, V. T., Xu, H., Chng, E. S., and Li, H. (2018). On the end-to-end solution to Mandarin-English code-switching speech recognition. arXiv preprint arXiv:1811.00241.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Semi-supervised training framework for bilingual code-switch (CS) ASR. EZ, EX, ES and ET refer to Engish-isiZulu, English-isiXhosa, English-Sesotho and English-Setswana language pairs respectively.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Improvement (% in comparison with the baseline) in test set WER for different semi-supervised systems incorporating additional soap opera training data.",
"uris": null
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"5\">: Duration in minutes (m) and hours (h) as well as</td></tr><tr><td colspan=\"6\">word type and token counts for the unbalanced training set.</td></tr><tr><td colspan=\"6\">An overview of the composition of the development (Dev)</td></tr><tr><td colspan=\"6\">and test (Test) sets for each language pair is given in Ta-</td></tr><tr><td colspan=\"6\">ble 2. The table includes values for the total duration as</td></tr><tr><td colspan=\"6\">well as the duration of the monolingual and code-switched</td></tr><tr><td colspan=\"6\">segments. The test sets contain no monolingual data and</td></tr><tr><td colspan=\"6\">a total of approximately 4 000 language switches (English-</td></tr><tr><td colspan=\"3\">to-Bantu and Bantu-to-English).</td><td/><td/><td/></tr><tr><td/><td/><td colspan=\"2\">English-isiZulu</td><td/><td/></tr><tr><td/><td>emdur</td><td>zmdur</td><td>ecdur</td><td>zcdur</td><td>Total</td></tr><tr><td>Dev</td><td>0.0</td><td>0.0</td><td>4.0</td><td>4.0</td><td>8.0</td></tr><tr><td>Test</td><td>0.0</td><td>0.0</td><td>12.8</td><td>17.9</td><td>30.4</td></tr><tr><td/><td/><td colspan=\"2\">English-isiXhosa</td><td/><td/></tr><tr><td/><td>emdur</td><td>xmdur</td><td>ecdur</td><td>xcdur</td><td>Total</td></tr><tr><td>Dev</td><td>2.9</td><td>6.5</td><td>2.2</td><td>2.1</td><td>13.7</td></tr><tr><td>Test</td><td>0.0</td><td>0.0</td><td>5.6</td><td>8.8</td><td>14.3</td></tr><tr><td/><td/><td colspan=\"2\">English-Setswana</td><td/><td/></tr><tr><td/><td>emdur</td><td>tmdur</td><td>ecdur</td><td>tcdur</td><td>Total</td></tr><tr><td>Dev</td><td>0.8</td><td>4.3</td><td>4.5</td><td>4.3</td><td>13.8</td></tr><tr><td>Test</td><td>0.0</td><td>0.0</td><td>8.9</td><td>9.0</td><td>17.8</td></tr><tr><td/><td/><td colspan=\"2\">English-Sesotho</td><td/><td/></tr><tr><td/><td>emdur</td><td>smdur</td><td>ecdur</td><td>scdur</td><td>Total</td></tr><tr><td>Dev</td><td>1.1</td><td>5.1</td><td>3.0</td><td>3.6</td><td>12.8</td></tr><tr><td>Test</td><td>0.0</td><td>0.0</td><td>7.8</td><td>7.7</td><td>15.5</td></tr></table>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>: Duration (minutes) of English, isiZulu, isiXhosa,</td></tr><tr><td>Sesotho, Setswana monolingual (mdur) and code-switched</td></tr><tr><td>(cdur) segments in the code-switching development and test</td></tr><tr><td>sets.</td></tr><tr><td>2.2. Manually Segmented Automatically</td></tr><tr><td>Transcribed Data: Expert Segmentation</td></tr><tr><td>(AutoT Exp )</td></tr></table>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table/>",
"html": null,
"text": "The CNN architecture used in the VAD systems.",
"num": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table/>",
"html": null,
"text": "Datasets used for training and testing of automatic segmentation systems.",
"num": null,
"type_str": "table"
},
"TABREF8": {
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">Training segments</td><td/></tr><tr><td colspan=\"2\">System Type</td><td>AutoT NonE (23h)</td><td>VAD 1 (83.6h)</td><td>VAD 2 (47h)</td><td>VAD 2Sub (20.9h)</td><td>VAD 3 (37.0h)</td><td>VAD 4 (45.63h)</td></tr><tr><td>A</td><td>Bilingual baseline</td><td/><td/><td/><td/><td/></tr><tr><td>B</td><td/><td/><td/><td/><td/><td/></tr><tr><td>C</td><td/><td/><td/><td/><td/><td/></tr><tr><td>D E</td><td>Bilingual system trained with AutoT A</td><td/><td/><td/><td/><td/></tr><tr><td>F</td><td/><td/><td/><td/><td/><td/></tr><tr><td>G</td><td/><td/><td/><td/><td/><td/></tr><tr><td>H</td><td>Five-lingual baseline</td><td/><td/><td/><td/><td/></tr><tr><td>I</td><td>Bilingual system trained with AutoT H</td><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"text": "Development and test set perplexities. CPP: code-switch perplexity. MPP: monolingual perplexity.",
"num": null,
"type_str": "table"
},
"TABREF9": {
"content": "<table/>",
"html": null,
"text": "ASR systems trained on different versions of the automatically segmented data. The duration of each of these datasets is given in parentheses.",
"num": null,
"type_str": "table"
},
"TABREF11": {
"content": "<table><tr><td>: The true positive rate reported at a false posi-</td></tr><tr><td>tive rate of 0.315 for various VAD systems tested on AVA-</td></tr><tr><td>Speech. The first three systems are baselines from (Chaud-</td></tr><tr><td>huri et al., 2018), tested on the full dataset. The final three</td></tr><tr><td>systems are tested on a \u2248 23 hour test set split.</td></tr></table>",
"html": null,
"text": "",
"num": null,
"type_str": "table"
},
"TABREF14": {
"content": "<table/>",
"html": null,
"text": "Number of segments per language identified by the baseline bilingual (A) and baseline five-lingual (H) ASR systems for different segmentation approaches.",
"num": null,
"type_str": "table"
},
"TABREF16": {
"content": "<table><tr><td>System</td><td/><td colspan=\"2\">English-isiZulu</td><td colspan=\"3\">English-isiXhosa</td><td colspan=\"3\">English-Sesotho</td><td colspan=\"3\">English-Setswana</td></tr><tr><td/><td>E</td><td>Z</td><td>BiCS</td><td>E</td><td>X</td><td>BiCS</td><td>E</td><td>S</td><td>BiCS</td><td>E</td><td>T</td><td>BiCS</td></tr><tr><td>A (baseline)</td><td>37.9</td><td>48.7</td><td>33.3</td><td>37.8</td><td>54.5</td><td>25.8</td><td>43.7</td><td>61.4</td><td>25.2</td><td>36.2</td><td>51.8</td><td>35.6</td></tr><tr><td>B</td><td>32.3</td><td>45.2</td><td>36.8</td><td>32.7</td><td>49.1</td><td>32.1</td><td>32.9</td><td>57.2</td><td>33.7</td><td>28.1</td><td>48.3</td><td>40.5</td></tr><tr><td>F</td><td>31.6</td><td>43.9</td><td>37.8</td><td>31.5</td><td>48.3</td><td>34.2</td><td>32.5</td><td>56.8</td><td>33.8</td><td>27.4</td><td>46.4</td><td>42.2</td></tr><tr><td>I</td><td>31.7</td><td>43.7</td><td>37.3</td><td>31.6</td><td>47.6</td><td>34.4</td><td>32.0</td><td>56.4</td><td>34.2</td><td>26.8</td><td>45.7</td><td>42.0</td></tr></table>",
"html": null,
"text": "Mixed WERs (%) for the four code-switched language pairs.",
"num": null,
"type_str": "table"
}
}
}
}