{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:18.180362Z"
},
"title": "DNN-Based Multilingual Automatic Speech Recognition for Wolaytta using Oromo Speech",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Yifiru Tachbelie",
"suffix": "",
"affiliation": {
"laboratory": "Cognitive Systems Lab",
"institution": "University of Bremen",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Solomon",
"middle": [
"Teferra"
],
"last": "Abate",
"suffix": "",
"affiliation": {
"laboratory": "Cognitive Systems Lab",
"institution": "University of Bremen",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": "",
"affiliation": {
"laboratory": "Cognitive Systems Lab",
"institution": "University of Bremen",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "It is known that Automatic Speech Recognition (ASR) is very useful for human-computer interaction in all the human languages. However, due to its requirement for a big speech corpus, which is very expensive, it has not been developed for most of the languages. Multilingual ASR (MLASR) has been suggested to share existing speech corpora among related languages to develop an ASR for languages which do not have the required speech corpora. Literature shows that phonetic relatedness goes across language families. We have, therefore, conducted experiments on MLASR taking two language families: one as source (Oromo from Cushitic) and the other as target (Wolaytta from Omotic). Using Oromo Deep Neural Network (DNN) based acoustic model, Wolaytta pronunciation dictionary and language model we have achieved Word Error Rate (WER) of 48.34% for Wolaytta. Moreover, our experiments show that adding only 30 minutes of speech data from the target language (Wolaytta) to the whole training data (22.8 hours) of the source language (Oromo) results in a relative WER reduction of 32.77%. Our results show the possibility of developing ASR system for a language, if we have pronunciation dictionary and language model, using an existing speech corpus of another language irrespective of their language family.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "It is known that Automatic Speech Recognition (ASR) is very useful for human-computer interaction in all the human languages. However, due to its requirement for a big speech corpus, which is very expensive, it has not been developed for most of the languages. Multilingual ASR (MLASR) has been suggested to share existing speech corpora among related languages to develop an ASR for languages which do not have the required speech corpora. Literature shows that phonetic relatedness goes across language families. We have, therefore, conducted experiments on MLASR taking two language families: one as source (Oromo from Cushitic) and the other as target (Wolaytta from Omotic). Using Oromo Deep Neural Network (DNN) based acoustic model, Wolaytta pronunciation dictionary and language model we have achieved Word Error Rate (WER) of 48.34% for Wolaytta. Moreover, our experiments show that adding only 30 minutes of speech data from the target language (Wolaytta) to the whole training data (22.8 hours) of the source language (Oromo) results in a relative WER reduction of 32.77%. Our results show the possibility of developing ASR system for a language, if we have pronunciation dictionary and language model, using an existing speech corpus of another language irrespective of their language family.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic Speech Recognition (ASR) is the automatic recognition and transcription of spoken language into text that can be used as text input for other systems such as information retrieval systems. Since speech is difficult to process directly in the human machine interaction, ASR technologies are important for all the human languages. As a result, a lot of research and development efforts have been exerted and lots of Automatic Speech Recognition Systems (ASRSs) have already been developed in a number of human languages. However, only insignificant number of the 7000 languages are considered. The main reason for the limited coverage of the human languages in the development of ASRSs is that to develop an ASRS for a new language and improve the performance of the existing ones depend on the availability of speech corpus in that particular language. We do not have such corpora for a significant number of human languages, which are known to be under-resourced languages (Besacier et al., 2014) . Almost all Ethiopian languages, such as Wolaytta, are under-resourced and belong to the language groups that are not benefiting from the development of spoken language technologies. To the best of our knowledge, there are only three works (Abate et al., 2020a; Tachbelie et al., 2020b; Abate et al., 2020b) towards the development of an ASRS for Oromo and Wolaytta that use at least a mediumsized speech corpora. Multilingual Automatic Speech Recognition (MLASR) has been suggested and lots of research is being conducted in this line to solve the problem of speech corpora for underresourced languages. MLASR system is described as a system that is able to recognize multiple languages which are presented during training (Schultz and Waibel, 2001) . (Vu et al., 2014) described MLASR as a system in which at least one of the components (feature extraction, acoustic model, pronunciation dictionary, or language model) is developed using data from many different languages.",
"cite_spans": [
{
"start": 983,
"end": 1006,
"text": "(Besacier et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 1248,
"end": 1269,
"text": "(Abate et al., 2020a;",
"ref_id": "BIBREF0"
},
{
"start": 1270,
"end": 1294,
"text": "Tachbelie et al., 2020b;",
"ref_id": "BIBREF24"
},
{
"start": 1295,
"end": 1315,
"text": "Abate et al., 2020b)",
"ref_id": "BIBREF1"
},
{
"start": 1732,
"end": 1758,
"text": "(Schultz and Waibel, 2001)",
"ref_id": "BIBREF19"
},
{
"start": 1761,
"end": 1778,
"text": "(Vu et al., 2014)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "MLASR systems are particularly interesting for underresourced languages where training data are sparse or not available at all (Schultz and Waibel, 2001) . Consequently, various researches in the area of MLASR (Weng et al., 1997; Schultz and Waibel, 1998; Schultz, 2002; Kanthak and Ney, 2003; Vu et al., 2014; M\u00fcller and Waibel, 2015; Chuangsuwanich, 2016) have been conducted and a lot others are being conducted for several language groups. Especially the development of artificial neural networks (ANNs) helped to achieve better performance in the development of MLASRSs (Heigold et al., 2013; Li et al., 2019) .",
"cite_spans": [
{
"start": 127,
"end": 153,
"text": "(Schultz and Waibel, 2001)",
"ref_id": "BIBREF19"
},
{
"start": 210,
"end": 229,
"text": "(Weng et al., 1997;",
"ref_id": "BIBREF26"
},
{
"start": 230,
"end": 255,
"text": "Schultz and Waibel, 1998;",
"ref_id": "BIBREF18"
},
{
"start": 256,
"end": 270,
"text": "Schultz, 2002;",
"ref_id": "BIBREF21"
},
{
"start": 271,
"end": 293,
"text": "Kanthak and Ney, 2003;",
"ref_id": "BIBREF11"
},
{
"start": 294,
"end": 310,
"text": "Vu et al., 2014;",
"ref_id": "BIBREF25"
},
{
"start": 311,
"end": 335,
"text": "M\u00fcller and Waibel, 2015;",
"ref_id": "BIBREF14"
},
{
"start": 336,
"end": 357,
"text": "Chuangsuwanich, 2016)",
"ref_id": "BIBREF4"
},
{
"start": 575,
"end": 597,
"text": "(Heigold et al., 2013;",
"ref_id": "BIBREF9"
},
{
"start": 598,
"end": 614,
"text": "Li et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In our previous work (Tachbelie et al., 2020a) , in which we have analyzed the similarities among GlobalPhone (Schultz et al., 2013) and Ethiopian languages (Amharic and Tigrigna from Semitic, Oromo from Cushitic and Wolaytta from Omotic), we have learned that there is high phonetic overlap among Ethiopian languages. The fact that these languages have shared phonological features is indicated in (Gutman and Avanzati, 2013) as well. From our analysis, we have learned that similarity among languages measured using their phonetic overlap crosses the boundaries of language families. Specifically, we have observed that although Oromo and Wolaytta are from different language families, there exists higher phone overlap between them than the other languages (Amharic and Tigrigna). This may be due to their geographical proximity. (Crass and Meyer, 2009) also indicated that Ethiopian languages, regardless of their language families, display areal patterns by sharing a number of similarities. Our analysis showed that 97.3% of Wolaytta phones are covered by the Oromo language while 92.3% of Oromo phones are covered by Wolaytta. Although both languages are underresourced, Oromo is in a relatively better position than Wolaytta. There are also a lot of other Ethiopian languages (more than 70) that are in similar or worse condition than Wolaytta with respect to language and speech resources. We wanted, therefore, to investigate the use of existing language resources to develop ASR for other Ethiopian languages. As a proof of concept, we investigated the development of Wolaytta (target language) ASR using Oromo (source language) training speech. In this work, we present the results of different experiments we have conducted to explore the benefit we gain from MLASR approach for two languages from two different language families. First, we have conducted a crosslingual ASR experiment where we decoded Wolaytta test speech using Oromo acoustic model (which is developed using Oromo training speech), Wolaytta language and lexical models. Second, we have developed Wolaytta ASR systems using various sizes of Wolaytta training speech (ranging from 30 minutes to 29 hours) with and without the whole amount of Oromo training speech (22.8). We have also conducted experiments to see if the source language (Oromo) can benefit from sharing training speech of the target language (Wolaytta) to improve the performance of the ASRSs. In the following section 1.1., we give a brief description on the application of deep neural networks for the development of ASRSs. In section 2., we describe the languages considered in this paper. The speech corpora we used for the research are described in section 3. The development of the monolingual ASR using different sizes of Wolaytta training speech, which are our baseline systems, and the results achieved by the use of MLASR approach for Wolaytta using Oromo training speech are presented in section 4. Finally in section 5., we give conclusions and forward future directions.",
"cite_spans": [
{
"start": 21,
"end": 46,
"text": "(Tachbelie et al., 2020a)",
"ref_id": "BIBREF23"
},
{
"start": 110,
"end": 132,
"text": "(Schultz et al., 2013)",
"ref_id": "BIBREF20"
},
{
"start": 399,
"end": 426,
"text": "(Gutman and Avanzati, 2013)",
"ref_id": "BIBREF7"
},
{
"start": 833,
"end": 856,
"text": "(Crass and Meyer, 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
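The phone-coverage figures quoted in the preceding paragraph (97.3% of Wolaytta phones covered by Oromo, 92.3% of Oromo phones covered by Wolaytta) follow from comparing the two phone inventories. A minimal sketch of that comparison in Python, assuming small illustrative inventories rather than the full phone sets of Table 1:

```python
# Hypothetical sketch of the phone-coverage computation behind the 97.3% / 92.3%
# figures. The inventories below are small illustrative subsets, not the full
# phone sets listed in Table 1 of the paper.
def coverage(target_phones, source_phones):
    """Fraction of the target inventory that also occurs in the source inventory."""
    target = set(target_phones)
    return len(target & set(source_phones)) / len(target)

wolaytta = {"b", "d", "k", "k'", "s", "S", "tS", "Z", "a", "a:", "e", "i", "o", "u"}
oromo = {"b", "d", "k", "k'", "s", "S", "tS", "x", "a", "a:", "e", "i", "o", "u"}

print(f"Wolaytta covered by Oromo: {coverage(wolaytta, oromo):.1%}")
print(f"Oromo covered by Wolaytta: {coverage(oromo, wolaytta):.1%}")
```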
{
"text": "Over the last 10 years, DNNs methods for ASR were developed and outperform the traditional Gaussian Mixture Model (HMM-GMM). The major factors for their superior performance are the availability of GPUs and the introduction of different types of neural network architectures such as Convolutional Neural networks (CNN) and more recently Time Delay Neural Networks (TDNN) and Factored TDNN (TDNNf). Since 2009, DNNs are widely used in automatic speech recognition and they presented dramatic improvement in performance. Numerous studies showed hybrid HMM-DNN systems outperform the dominant HMM-GMM on the same data (Hinton et al., 2012) . Currently, TDNNs, also called one-dimensional Convolutional Neural Networks, are an efficient and well-performing neural network architectures for ASR (Peddinti et al., 2015) . TDNN has the ability to learn long term temporal contexts. Moreover, by using singular value decomposition (SVD) the number of parameters in TDNN models is reduced which makes them inexpensive compared to RNNs. The factored form of TDNNs (TDNNf) (Povey et al., 2018) has similar structure with TDNN, but is trained from a random start with one of the two factors of each matrix constrained to be semi-orthogonal. TDNNf gives substantial improvement over TDNN and has been shown to be effective in underresourced scenarios. We have used these state-of-the-art neural network architecture in the development of DNN based ASR systems for the Ethiopian languages.",
"cite_spans": [
{
"start": 615,
"end": 636,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF10"
},
{
"start": 790,
"end": 813,
"text": "(Peddinti et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 1062,
"end": 1082,
"text": "(Povey et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Deep Neural Networks in ASR",
"sec_num": "1.1."
},
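The factored-TDNN idea described above (replacing a full weight matrix by two low-rank factors, one of them kept semi-orthogonal) can be illustrated with a small numerical sketch. This is a minimal illustration assuming NumPy; the layer sizes follow the paper's 1024-unit layers with 128-dimensional bottlenecks, and the projection step is a simplified stand-in for Kaldi's periodic semi-orthogonality update, not the toolkit's actual training code.

```python
import numpy as np

# Hypothetical sketch of the TDNNf factorization: a d x d weight matrix is
# replaced by an up-projection A (d x b) and a constrained down-projection
# B (b x d), where b is the bottleneck dimension.
rng = np.random.default_rng(0)
d, b = 1024, 128                          # layer width and bottleneck size

A = rng.normal(scale=0.01, size=(d, b))   # up-projection factor
B = rng.normal(scale=0.01, size=(b, d))   # down-projection, kept semi-orthogonal

def make_semi_orthogonal(M):
    """Project M onto the nearest semi-orthogonal matrix (M @ M.T == I)."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

B = make_semi_orthogonal(B)
print("B B^T close to identity:", np.allclose(B @ B.T, np.eye(b), atol=1e-6))

# The factorization is what makes the layer cheap relative to a full matrix.
print("full parameters:", d * d, " factored parameters:", d * b + b * d)
```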
{
"text": "More than 80 languages are spoken in Ethiopia. Ethiopian languages are divided into four major language families: Semitic, Cushitic, Omotic and Nilo-Saharan. The Semitic language family is one of the most widespread language families (with more than 20 languages) in the country. Of which, Amharic (spoken by 29.3% of the total Ethiopian population) and Tigrigna (spoken by 5.9% of the total Ethiopian population) are the most spoken languages. The Cushitic language family has also a long list of (about 22) languages spoken in Ethiopia. Amongst them, Oromo is the most widely spoken language in the country (spoken by 33.8% of the total Ethiopian population). The Omotic family has a large number of (more than 30) languages spoken in Ethiopia, one of which is Wolaytta (spoken by 2.2% of the total Ethiopian population) (CSAE, 2010). The Cushitic and Omotic language families use Latin script for writing. In both the languages the current writers differentiate the gemminated and the non-gemminated consonants. Similarly, long and short vowels are indicated in their writing system. Having a newly developed speech corpora (Abate et al., 2020a) for Oromo (a Cushitic language) and Wolaytta (an Omotic language), we have selected these languages to explore the application of MLASR development approach in the ANN framework.",
"cite_spans": [
{
"start": 1127,
"end": 1148,
"text": "(Abate et al., 2020a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oromo and Wolaytta",
"sec_num": "2."
},
{
"text": "Although they belong to different language families, Oromo and Wolaytta share several phonetic properties including the use of long and short vowels. These languages have five similar vowels and each of the vowels in both languages has long and short variants. Having their own inventory of consonants, Oromo and Wolaytta share a number of them (see Table 1 ). Of course, each of the languages has its own consonants. For instance, phones \u00f1 and x are used in Oromo but not in Wolaytta while phone Z is used in Wolaytta but not in Oromo. Almost all the consonants of these languages occur in both single and gemminated forms. The other common phonetic feature of these languages is the use of tones which makes both of them tonal languages. However, in this study we did not differentiate between vowels of different tones since the writing system does not show the tones of the vowels and the pronunciation dictionaries for our study have been generated automatically from the text. -Mewis, 2001 ). Unlike the Semitic languages, which allow prefixing, Oromo and Wolaytta are suffixing languages. In these languages words can be generated from stems recursively by adding suffixes only.",
"cite_spans": [
{
"start": 983,
"end": 995,
"text": "-Mewis, 2001",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 350,
"end": 357,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Phonology",
"sec_num": "2.1."
},
{
"text": "It is known that the Ethiopian languages, specially Oromo and Wolaytta are under-resourced. As a result, all of the previous works conducted towards the development of ASRSs for these languages are based on limited amounts of speech data. It is only recently that a work on the development of four standard medium-sized read speech corpora (Abate et al., 2020a) has been conducted for four Ethiopian languages including Oromo and Wolaytta. For a country like Ethiopia with more than 80 languages, unless a technological solution is used, it looks hopeless to have equivalent speech corpora for all its languages. In this work, we have used the existing speech corpora of Oromo (Abate et al., 2020a) to find out a solution for the development of an ASRS for an under-resourced language, Wolaytta. We considered Oromo as a source and Wolaytta as a target language considering the fact that there are more previous works conducted for Oromo, such as (Gelana, 2016; Gutu, 2016 ) than what we have for Wolaytta. We hope that our findings will be extended to solve the problems of the other Ethiopian languages that fall under four different language families.",
"cite_spans": [
{
"start": 340,
"end": 361,
"text": "(Abate et al., 2020a)",
"ref_id": "BIBREF0"
},
{
"start": 677,
"end": 698,
"text": "(Abate et al., 2020a)",
"ref_id": "BIBREF0"
},
{
"start": 947,
"end": 961,
"text": "(Gelana, 2016;",
"ref_id": "BIBREF5"
},
{
"start": 962,
"end": 972,
"text": "Gutu, 2016",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Speech Corpora",
"sec_num": "3."
},
{
"text": "Although the aim of our current work is to explore the development of MLASR for Wolaytta as a target language using Oromo training speech (as a source language), we have developed different monolingual GMM-and DNNbased ASRSs for Wolaytta using different sizes of Wolaytta speech corpus for comparison purposes. The description of the procedures we followed is presented in sub-section 4.1.1..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development of ASR Systems for Wolaytta",
"sec_num": "4.1."
},
{
"text": "To build reference AMs that use different sizes of training speech, we have splitted the Wolaytta training speech into 11 clusters: with 30 minutes, 1, 2, 4, 6, 8, 10, 15, 20, 25 and 29 (all) hours of speech length. We have selected roughly equal number of utterances from each speaker randomly for each of these clusters. Each of them has been used to train different AMs. All the AMs have been built in a similar fashion using Kaldi ASR toolkit (Povey et al., 2011) . We have built context dependent HMM-GMM based AM using 39 dimensional mel-frequency cepstral coefficients (MFCCs) to each of which cepstral mean and variance normalization (CMVN) is applied. The AM uses a fully-continuous 3state left-to-right HMM. Then we did Linear Discriminant Analysis (LDA) and Maximum Likelihood Linear Transform (MLLT) feature transformation for each of the models. Then Speaker Adaptive Training (SAT) has been done using an offline transform, feature space Maximum Likelihood Linear Regression (fMLLR). We did tuning to find the best number of states and Gaussians for different sizes of the training data.",
"cite_spans": [
{
"start": 447,
"end": 467,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic, Lexical and Language Models",
"sec_num": "4.1.1."
},
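The duration-based clusters described above were built by drawing roughly the same number of utterances from every speaker at random until the target duration is reached. A hedged sketch of that selection in Python; the `utterances` structure and its field names are assumptions for illustration, not the actual corpus metadata.

```python
import random
from collections import defaultdict

# Hypothetical sketch of building a fixed-duration training subset with roughly
# equal per-speaker contributions, as described for the 30-minute to 29-hour
# clusters. Field names ('speaker', 'duration') are assumed, not the paper's.
def make_subset(utterances, target_hours, seed=0):
    """utterances: list of dicts with 'speaker' and 'duration' (in seconds)."""
    rng = random.Random(seed)
    by_speaker = defaultdict(list)
    for utt in utterances:
        by_speaker[utt["speaker"]].append(utt)
    for utts in by_speaker.values():
        rng.shuffle(utts)

    subset, total, target = [], 0.0, target_hours * 3600
    idx = 0
    # Round-robin over speakers so each contributes a similar number of utterances.
    while total < target:
        added = False
        for utts in by_speaker.values():
            if idx < len(utts):
                subset.append(utts[idx])
                total += utts[idx]["duration"]
                added = True
                if total >= target:
                    break
        if not added:          # ran out of data before reaching the target
            break
        idx += 1
    return subset

# Example (hypothetical metadata): subset_30min = make_subset(all_utterances, 0.5)
```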
{
"text": "To train the DNN-based AMs, we have used the best HMM-GMM models to get alignments and the same training speech used to train HMM-GMM models. But we have applied a three-fold data augmentation (Ko et al., 2015) prior to the extraction of 40-dimensional MFCCs without derivatives, 3-dimensional pitch features and 100-dimensional ivectors for speaker adaptation. The neural network architecture we used is Factored Time Delay Neural Networks with additional Convolutional layers (CNN-TDNNf) according to the standard Kaldi WSJ recipe. The Neural network has 15 hidden layers (6 CNN followed by 9 TDNNf) and a rank reduction layer. The number of units in the TDDNf consists of 1024 and 128 bottleneck units except for the TDNNf layer immediately following the CNN layers which has 256 bottleneck units. The list of word entries both for training and decoding lexicons have been extracted from the training speech transcription in both the source and target languages. Using the nature of writing system that indicates gemminated and non-gemminated consonants as well as the long and short vowels, we have generated the pronunciation of these words automatically. However, since the tones are not indicated in written form of both languages, we did not consider tones in the current pronunciation dictionaries. For the development of the LMs we have used the text used in (Abate et al., 2020a) . We have developed trigram LMs using the SRILM toolkit (Stolcke, 2002) . The LMs are smoothed with unmodified Kneser-Ney smoothing techniques (Chen and Goodman, 1996) and made open by including a special unknown word token. LM probabilities are computed for the lexicon of the training transcription.",
"cite_spans": [
{
"start": 193,
"end": 210,
"text": "(Ko et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 1369,
"end": 1390,
"text": "(Abate et al., 2020a)",
"ref_id": "BIBREF0"
},
{
"start": 1447,
"end": 1462,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF22"
},
{
"start": 1534,
"end": 1558,
"text": "(Chen and Goodman, 1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic, Lexical and Language Models",
"sec_num": "4.1.1."
},
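The automatic pronunciation generation described above exploits a Latin orthography in which doubled vowel letters mark length and doubled consonant letters mark gemination. A minimal grapheme-to-phone sketch under that assumption; the digraph list and phone symbols are illustrative guesses, not the rule set the authors actually used.

```python
# Hypothetical grapheme-to-phone sketch for an orthography where doubled vowels
# mark length and doubled consonants mark gemination, as described for Oromo
# and Wolaytta. DIGRAPHS and the output symbols are illustrative assumptions.
DIGRAPHS = {"ch": "tS", "sh": "S", "ny": "\u00f1", "ph": "p'"}
VOWELS = set("aeiou")

def word_to_phones(word):
    phones, i = [], 0
    w = word.lower()
    while i < len(w):
        pair = w[i:i + 2]
        if pair in DIGRAPHS:                      # consonant digraph, e.g. "sh" -> S
            phone, step = DIGRAPHS[pair], 2
        else:
            phone, step = w[i], 1
        # A doubled letter (or repeated digraph) signals length / gemination.
        if w[i + step:i + 2 * step] == pair[:step]:
            phone += ":" if phone in VOWELS else phone   # "aa" -> "a:", "tt" -> "tt"
            step *= 2
        phones.append(phone)
        i += step
    return phones

print(word_to_phones("laattaa"))   # ['l', 'a:', 'tt', 'a:']
```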
{
"text": "We have evaluated all AMs trained with different sizes of training speech using the same test set (1:45 hours of speech recorded from four speakers who read a total of 578 utterances), pronunciation dictionary and language model. The performance of the systems is given in Figure 1 . These results are our reference points or baselines for the results achieved by using only the source language, and combined with different amounts of target language's training speech.",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 281,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.1.2."
},
{
"text": "As we can observe from Figure 1 , obviously, the WER reduces with the additional training speech in almost all the AMs. The DNN-based systems outperform the HMM-GMM-based ones regardless of the size of the training speech, except for 30 minutes. The DNN-based AMs has brought a relative WER reductions that range from 9.03% (with 1 hour) to 31.45% (with all the training speech).",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.1.2."
},
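The relative WER reductions quoted throughout the paper follow the usual definition, the absolute improvement divided by the baseline WER. A one-function sketch, using figures taken from the paper as the example:

```python
def relative_wer_reduction(baseline, improved):
    """Relative WER reduction in percent: 100 * (baseline - improved) / baseline."""
    return 100.0 * (baseline - improved) / baseline

# Example from the paper: cross-lingual decoding gives 48.34% WER; adding
# 30 minutes of Wolaytta speech gives 32.5% WER, i.e. about 32.77% relative.
print(round(relative_wer_reduction(48.34, 32.5), 2))
```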
{
"text": "The best system developed using all the available training speech has achieved a WER of 23.23% with the DNNbased AM. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.1.2."
},
{
"text": "First we have decoded the Wolaytta evaluation test speech using a DNN-based Oromo AM (trained using all the training speech of the Oromo corpus), Wolaytta pronunciation dictionary and Wolaytta language model and achieved a WER of 48.34%. For this purpose we needed to map the Wolaytta phones that are not found in Oromo to the nearest possible Oromo phones (see Table 2 ). We have, then, conducted experiments to see the benefits it gets from additional Wolaytta speech incrementally starting from 30 minutes to the whole training speech. The evaluation of all the systems is done using the same evaluation set. The results are presented in Figure 2 . The results in Figure 2 show that performance improvement can be obtained by adding training speech from the target language. As we add more and more training speech from the target language, the improvement in performance reduces. A relative WER reduction of 32.77% has been achieved as a result of adding only 30 minutes of training speech from the target language. That means the WER we could achieve by using only the source language's training speech has been reduced from 48.34% to 32.5% by adding only 30 minutes training speech of the target language that is randomly selected from all the speakers (76) of the target language.",
"cite_spans": [],
"ref_spans": [
{
"start": 362,
"end": 369,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 641,
"end": 649,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 667,
"end": 675,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Use of Oromo Speech for Wolaytta ASR",
"sec_num": "4.2."
},
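For the cross-lingual decoding above, Wolaytta phones absent from the Oromo inventory are mapped to their nearest Oromo counterparts before the Wolaytta lexicon is used with the Oromo acoustic model. A hedged sketch of applying such a mapping to a pronunciation lexicon; the example mapping (Z to S) is an assumption based on the inventories in Table 1, not the paper's actual Table 2.

```python
# Hypothetical sketch: rewrite a target-language (Wolaytta) pronunciation
# lexicon so it only uses phones present in the source-language (Oromo)
# acoustic model. PHONE_MAP is an assumed example, not the paper's Table 2.
PHONE_MAP = {"Z": "S"}

def map_lexicon(lexicon, phone_map):
    """lexicon: dict word -> list of phones; returns a remapped copy."""
    return {word: [phone_map.get(p, p) for p in phones]
            for word, phones in lexicon.items()}

wolaytta_lexicon = {"example": ["Z", "a", "k'", "o:"]}   # placeholder entry
print(map_lexicon(wolaytta_lexicon, PHONE_MAP))
```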
{
"text": "Our results also show that instead of using only small amount of monolingual training speech in the development of an ASRS, specially in the DNN framework, the use of speech data from other related languages bring performance improvement. We have presented this improvement in Figure 3 that shows the comparison of WERs of ASRSs developed using Wolaytta training speech only and that of the ASRSs developed using different sizes of training speech from Wolaytta combined with all (22.8 hours) Oromo training speech. As it can be seen from the Figure, by adding only 30 minutes of Wolaytta training speech to all of the Oromo training speech, we have achieved a relative WER reduction of 33.55% and 5.52% when 25 hours of Wolaytta training speech is added. ",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 285,
"text": "Figure 3",
"ref_id": "FIGREF4"
},
{
"start": 543,
"end": 550,
"text": "Figure,",
"ref_id": null
}
],
"eq_spans": [],
"section": "Use of Oromo Speech for Wolaytta ASR",
"sec_num": "4.2."
},
{
"text": "We have decoded Oromo test set using the acoustic models (Wolaytta only AM and MLASR AMs) discussed in the previous sections, Oromo pronunciation dictionary and Oromo language model developed by (Abate et al., 2020a) . The results presented in Figure 4 show that we have achieved a WER of 49.25% using the DNN-based AM developed using 29.7 hours of Wolaytta training speech. The performance of MLASR systems on Oromo test set brought slight WER reductions compared to the best WER obtained from a system that is developed using Oromo training speech only. The relative WER reductions we have obtained range from 1.27% (gained from the addition of 10 hours of Wolaytta speech) to 3.31% (gained from the addition of 25 hours of Wolaytta speech). We could observe that adding 30 minutes to 8 hours of Wolaytta training speech has negatively affected Oromo ASR. ",
"cite_spans": [
{
"start": 195,
"end": 216,
"text": "(Abate et al., 2020a)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 244,
"end": 252,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Evaluation of Multilingual Acoustic Models for Oromo",
"sec_num": "4.3."
},
{
"text": "In this paper, we have presented the experiments conducted on the development of multilingual ASRs across language families taking Oromo and Wolaytta as source and target languages, respectively. We have achieved a WER of 48.34% for Wolaytta without any training speech from it. By adding only 30 minutes of speech data from Wolaytta to the whole training data of the source language (Oromo) we have achieved a relative WER reduction of 32.77%. The ASRSs developed using all the training speech (22.8 hours) of the source language together with different sizes of training speech from the target language outperformed the ASRSs developed using training speech of the respective size from the target language only. The observed relative WER reductions range from 33.55% (achieved when training speech of Oromo plus only 30 minutes of Wolaytta is used) to 5.52% (achieved when training speech of Oromo plus 25 hours of Wolaytta is used). Based on our results, we conclude that it is possible to develop an ASRS with reasonable performance for a language using speech data of another language, irrespective of its language family, provided that we have a decoding pronunciation dictionary and a language model. We, therefore, recommend the development of a decoding pronunciation dictionary and a language model for the other Ethiopian languages so that they can benefit from the development of MLASRSs using the speech corpora of other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Way Forward",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "We would like to express our gratitude to the Alexander von Humboldt Foundation for funding the research stay at the Cognitive Systems Lab (CSL) of the University of Bremen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Large vocabulary read speech corpora for four ethiopian languages : Amharic, tigrigna, oromo and wolaytta",
"authors": [
{
"first": "S",
"middle": [
"T"
],
"last": "Abate",
"suffix": ""
},
{
"first": "M",
"middle": [
"Y"
],
"last": "Tachbelie",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Melese",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Abera",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Abebe",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Mulugeta",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Assabie",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meshesha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Atinafu",
"suffix": ""
},
{
"first": "Ephrem",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "LREC 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abate, S. T., Tachbelie, M. Y., Melese, M., Abera, H., Abebe, T., Mulugeta, W., Assabie, Y., Meshesha, M., Atinafu, S., and Ephrem, B. (2020a). Large vocabu- lary read speech corpora for four ethiopian languages : Amharic, tigrigna, oromo and wolaytta. In LREC 2020.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deep neural networks based automatic speech recognition for four ethiopian languages",
"authors": [
{
"first": "S",
"middle": [
"T"
],
"last": "Abate",
"suffix": ""
},
{
"first": "M",
"middle": [
"Y"
],
"last": "Tachbelie",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abate, S. T., Tachbelie, M. Y., and Schultz, T. (2020b). Deep neural networks based automatic speech recogni- tion for four ethiopian languages. In ICASSP 2020.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic speech recognition for underresourced languages: A survey",
"authors": [
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Barnard",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Karpov",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2014,
"venue": "Speech Communication",
"volume": "56",
"issue": "",
"pages": "85--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Besacier, L., Barnard, E., Karpov, A., and Schultz, T. (2014). Automatic speech recognition for under- resourced languages: A survey. Speech Communication, 56:85-100.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An empirical study of smoothing techniques for language modeling",
"authors": [
{
"first": "S",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1996,
"venue": "34th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "310--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, S. F. and Goodman, J. (1996). An empirical study of smoothing techniques for language modeling. In 34th Annual Meeting of the Association for Computa- tional Linguistics, pages 310-318, Santa Cruz, Califor- nia, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual techniques for low resource automatic speech recognition",
"authors": [
{
"first": "E",
"middle": [],
"last": "Chuangsuwanich",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuangsuwanich, E. (2016). Multilingual techniques for low resource automatic speech recognition. Ph.D. thesis. Crass, J. and Meyer, R. (2009). Introduction. R\u00fcdiger K\u00f6ppe Verlag, K\u00f6ln.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Large Vocabulary, Speaker-Independent, Continuous Speech Recognition System for Afaan Oromo: Using Broadcast News Speech Corpus",
"authors": [
{
"first": "K",
"middle": [],
"last": "Gelana",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CSAE. (2010). The 2007 population and housing census. Gelana, K. (2016). A Large Vocabulary, Speaker- Independent, Continuous Speech Recognition System for Afaan Oromo: Using Broadcast News Speech Corpus. Ph.D. thesis, Addis Ababa University.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Grammatical Sketch of Written Oromo",
"authors": [
{
"first": "C",
"middle": [],
"last": "Griefenow-Mewis",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Griefenow-Mewis, C. (2001). A Grammatical Sketch of Written Oromo.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Languages of ethiopia and eritrea",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gutman",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Avanzati",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gutman, A. and Avanzati, B. (2013). Languages of ethiopia and eritrea.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Continuous, Speaker Independent Speech Recognizer for Afaan Oroomoo: Afaan Oroomoo Speech Recognition Using HMM Model",
"authors": [
{
"first": "Y",
"middle": [
"G"
],
"last": "Gutu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gutu, Y. G. (2016). A Continuous, Speaker Independent Speech Recognizer for Afaan Oroomoo: Afaan Oroomoo Speech Recognition Using HMM Model. Ph.D. thesis, Addis Ababa University.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multilingual acoustic models using distributed deep neural networks",
"authors": [
{
"first": "G",
"middle": [],
"last": "Heigold",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "8619--8623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heigold, G., Vanhoucke, V., Senior, A., Nguyen, P., Ran- zato, M., Devin, M., and Dean, J. (2013). Multilingual acoustic models using distributed deep neural networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8619-8623.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep neural networks for acoustic modeling in speech recognition",
"authors": [
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Dahl",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Signal processing magazine",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A.-r., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Kings- bury, B., et al. (2012). Deep neural networks for acoustic modeling in speech recognition. IEEE Signal processing magazine, 29.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multilingual acoustic modeling using graphemes",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kanthak",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "1145--1148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kanthak, S. and Ney, H. (2003). Multilingual acoustic modeling using graphemes. In Proceedings of European Conference on Speech Communication and Technology, pages 1145-1148.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Audio augmentation for speech recognition",
"authors": [
{
"first": "T",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "INTER-SPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ko, T., Peddinti, V., Povey, D., and Khudanpur, S. (2015). Audio augmentation for speech recognition. In INTER- SPEECH.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multilingual speech recognition with corpus relatedness sampling",
"authors": [
{
"first": "X",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Dalmia",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Metze",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, X., Dalmia, S., Black, A., and Metze, F. (2019). Multi- lingual speech recognition with corpus relatedness sam- pling, 08.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Using language adaptive deep neural networks for improved multilingual speech recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "A",
"middle": [
"H"
],
"last": "Waibel",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00fcller, M. and Waibel, A. H. (2015). Using language adaptive deep neural networks for improved multilingual speech recognition.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A time delay neural network architecture for efficient modeling of long temporal contexts",
"authors": [
{
"first": "V",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peddinti, V., Povey, D., and Khudanpur, S. (2015). A time delay neural network architecture for efficient modeling of long temporal contexts. In Sixteenth Annual Confer- ence of the International Speech Communication Associ- ation.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The kaldi speech recognition toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Silovsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glem- bek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., Silovsky, J., Stemmer, G., and Vesely, K. (2011). The kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society, De- cember. IEEE Catalog No.: CFP11SRW-USB.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semi-orthogonal low-rank matrix factorization for deep neural networks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Yarmohammadi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2018,
"venue": "In Interspeech",
"volume": "",
"issue": "",
"pages": "3743--3747",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Cheng, G., Wang, Y., Li, K., Xu, H., Yarmoham- madi, M., and Khudanpur, S. (2018). Semi-orthogonal low-rank matrix factorization for deep neural networks. In Interspeech, pages 3743-3747.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multilingual and crosslingual speech recognition",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. DARPA Workshop on Broadcast News Transcription and Understanding",
"volume": "",
"issue": "",
"pages": "259--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schultz, T. and Waibel, A. (1998). Multilingual and crosslingual speech recognition. In Proc. DARPA Work- shop on Broadcast News Transcription and Understand- ing, pages 259-262.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Language-independent and language-adaptive acoustic modeling for speech recognition",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2001,
"venue": "Speech Commun",
"volume": "35",
"issue": "1-2",
"pages": "31--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schultz, T. and Waibel, A. (2001). Language-independent and language-adaptive acoustic modeling for speech recognition. Speech Commun., 35(1-2):31-51, August.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Globalphone: A multilingual text and speech database in 20 languages",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "N",
"middle": [
"T"
],
"last": "Vu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schlippe",
"suffix": ""
}
],
"year": 2013,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schultz, T., Vu, N. T., and Schlippe, T. (2013). Global- phone: A multilingual text and speech database in 20 languages. In ICASSP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Globalphone: a multilingual speech and text database developed at karlsruhe university",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schultz, T. (2002). Globalphone: a multilingual speech and text database developed at karlsruhe university. In John H. L. Hansen et al., editors, INTERSPEECH. ISCA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Srilm -an extensible language modeling toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 7th International Conference on Spoken Language Processing (ICSLP) 2002",
"volume": "",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, A. (2002). Srilm -an extensible language model- ing toolkit. In Proceedings of the 7th International Con- ference on Spoken Language Processing (ICSLP) 2002, pages 901-904.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Analysis of globalphone and ethiopian languages speech corpora for multilingual asr",
"authors": [
{
"first": "M",
"middle": [
"Y"
],
"last": "Tachbelie",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Abate",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tachbelie, M. Y., Abate, S. T., and Schultz, T. (2020a). Analysis of globalphone and ethiopian languages speech corpora for multilingual asr. In LREC 2020.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Dnn-based speech recognition for globalphone languages",
"authors": [
{
"first": "M",
"middle": [
"Y"
],
"last": "Tachbelie",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Abulimiti",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Abate",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tachbelie, M. Y., Abulimiti, A., Abate, S. T., and Schultz, T. (2020b). Dnn-based speech recognition for global- phone languages. In ICASSP 2020.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multilingual deep neural network based acoustic modeling for rapid language adaptation",
"authors": [
{
"first": "N",
"middle": [
"T"
],
"last": "Vu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Imseng",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Motl\u00edcek",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bourlard",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7639--7643",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vu, N. T., Imseng, D., Povey, D., Motl\u00edcek, P., Schultz, T., and Bourlard, H. (2014). Multilingual deep neural net- work based acoustic modeling for rapid language adapta- tion. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7639- 7643.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A study of multilingual speech recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bratt",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Neumeyer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weng, F., Bratt, H., Neumeyer, L., and Stolcke, A. (1997). A study of multilingual speech recognition. In EU- ROSPEECH.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Language Consonants (IPA) Vowels (IPA) Oromo b d \u00e2 f g h j k k' l m n \u00f1 p p' r s a e i o u S t t' tS tS' \u00c3 v w x z P a: e: i: o: u: Wolaytta b d \u00e2 f g h j k k' l m n p p' r s a e i o u S t t' tS tS' \u00c3 w z Z P a: e: i: o: u:",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Wolaytta WERs with different sizes of Wolaytta training speech",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Wolaytta WERs with different sizes of Wolaytta training speech added to the whole training speech of Oromo",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Wolaytta WERs with different sizes of Wolaytta with and without the Oromo speech",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Oromo WERs with different sizes of Wolaytta training speech with and without the Oromo speech",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"text": "Oromo and Wolaytta phones2.2. MorphologyReflecting the morphological nature of their language families, Oromo and Wolaytta are not as simple as English and not as complex as the Semitic language families. In both Oromo and Wolaytta nominals are inflected for number, gender, case and definiteness and verbs are inflected for person, number, gender, tense, aspect and mood (Griefenow",
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF1": {
"html": null,
"text": "Wolaytta phones mapped to Oromo phones",
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}