{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:55:00.647480Z"
},
"title": "Taiwanese Speech Recognition Based on Hybrid Deep Neural Network Architecture",
"authors": [
{
"first": "Yu-Fu",
"middle": [],
"last": "Yeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cheng Kung University Tainan",
"location": {
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Bo-Hao",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cheng Kung University Tainan",
"location": {
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Yang-Yen",
"middle": [],
"last": "Ou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cheng Kung University Tainan",
"location": {
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Jhing-Fa",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cheng Kung University Tainan",
"location": {
"country": "Taiwan"
}
},
"email": "[email protected]"
},
{
"first": "An-Chao",
"middle": [],
"last": "Tsai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tajen University",
"location": {
"settlement": "Pingtung",
"country": "Taiwan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this research, we developed the Taiwanese speech recognition system which used the Kaldi toolkit to implement. The Taiwanese corpus was collected by Taiwan Taiwanese National Reading Competition and Classmate Recording, and a total of about 11 hours of audio files were collected. Because the training data is small dataset, two audio augmentation methods are used to increase the training data, so that the acoustic model can be more robust and more effective training. One method is speed perturbation, which speeds up the original data by 1.1 times and slows it down by 0.9 times. Another method is to use multi-condition training data to simulate reverberation of the original speech and add background noise. The background noise includes music, speech, and noise. The acoustic model is trained for different hybrid deep neural network architectures which can use the advantages of each neural network by hybrid different neural networks, including TDNN, CNN-TDNN and CNN-LSTM-TDNN. In the experimental results, the CER in the domain of language modeling reaches 3.95%, and the CER of online decoding test is 3.06%. Compared with other researches on Taiwanese speech recognition of similar dataset size, the recognition results are better than other studies.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this research, we developed the Taiwanese speech recognition system which used the Kaldi toolkit to implement. The Taiwanese corpus was collected by Taiwan Taiwanese National Reading Competition and Classmate Recording, and a total of about 11 hours of audio files were collected. Because the training data is small dataset, two audio augmentation methods are used to increase the training data, so that the acoustic model can be more robust and more effective training. One method is speed perturbation, which speeds up the original data by 1.1 times and slows it down by 0.9 times. Another method is to use multi-condition training data to simulate reverberation of the original speech and add background noise. The background noise includes music, speech, and noise. The acoustic model is trained for different hybrid deep neural network architectures which can use the advantages of each neural network by hybrid different neural networks, including TDNN, CNN-TDNN and CNN-LSTM-TDNN. In the experimental results, the CER in the domain of language modeling reaches 3.95%, and the CER of online decoding test is 3.06%. Compared with other researches on Taiwanese speech recognition of similar dataset size, the recognition results are better than other studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the past few years, more and more products using speech recognition technology. Because these speech recognition applications make people's lives more and more convenient, no longer need to type to allow the machine to receive our message input. Taiwanese language is one of the commonly used languages of Taiwanese. From [1] , we can know that in 2013, the social change survey results showed that 31.4% of the people in the family spoke Mandarin Chinese most often, and 44.2% of them spoke Taiwanese most often. 19.5% of the people advocate using both Mandarin and Taiwanese, but the proportion of the older generation is much larger than that of the younger generation, which shows that Taiwanese is still the main language for the elderly. Most of them are learning by word of mouth, which leads to relatively scarce resources in Taiwanese. It makes the research of Taiwanese-related technologies much more difficult, and also causes people who speak Taiwanese to not enjoy these conveniences.",
"cite_spans": [
{
"start": 325,
"end": 328,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Therefore, we have established a Taiwanese dataset and a Taiwanese speech recognition system for this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Establishes a deep neural network architecture in kaldi [2] , the input features used in addition to the Mel frequency cepstral coefficients [3] will also concatenate the ivector feature [4] , which is a feature vector that can represent the speaker. First, a general background model is trained on the data of all speakers. The universal background model(UBM) is a Gaussian mixture model containing many components, and then the UBM is modified with the speech features of different speakers to achieve the speaker adaptation model, and the expected values of each Gaussian component are concatenated to form a GMM super-vectors. A section of GMM supervectors can be used to represent the feature vector of a speaker. Finally, the GMM super-Vectors of the general background model are related to the speaker. GMM super-Vectors calculates ivectors.",
"cite_spans": [
{
"start": 56,
"end": 59,
"text": "[2]",
"ref_id": "BIBREF2"
},
{
"start": 141,
"end": 144,
"text": "[3]",
"ref_id": "BIBREF3"
},
{
"start": 187,
"end": 190,
"text": "[4]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
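{
"text": "As a concrete illustration of the adaptation step described above, the following is a minimal sketch, not the Kaldi pipeline actually used in this work: a small UBM is fitted with scikit-learn, its means are shifted toward one speaker by a relevance-MAP style update, and the adapted means are concatenated into a GMM super-vector. The total-variability training that would turn super-vectors into ivectors is omitted, and all data and parameter values here are illustrative.\n\nimport numpy as np\nfrom sklearn.mixture import GaussianMixture\n\nrng = np.random.default_rng(0)\nall_frames = rng.normal(size=(5000, 40))          # pooled MFCC frames from all speakers (toy data)\nspk_frames = rng.normal(loc=0.3, size=(300, 40))  # frames from one speaker (toy data)\n\n# Universal background model: a diagonal-covariance GMM over all speakers' frames\nubm = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(all_frames)\n\npost = ubm.predict_proba(spk_frames)   # frame-to-component posteriors, shape (T, 8)\nn_c = post.sum(axis=0)                 # zeroth-order statistics per component\nf_c = post.T @ spk_frames              # first-order statistics, shape (8, 40)\n\ntau = 10.0                             # MAP relevance factor (illustrative value)\nalpha = (n_c / (n_c + tau))[:, None]\nadapted_means = alpha * (f_c / np.maximum(n_c[:, None], 1e-8)) + (1 - alpha) * ubm.means_\n\nsupervector = adapted_means.reshape(-1)  # concatenated adapted means: 8 components * 40 dims = 320 dims",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},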
{
"text": "In recent years, more and more DNNs have been used to replace GMM to increase the modeling capabilities of acoustic models, indicating that DNN-HMM is better than traditional GMM-HMM, and Kaldi continues to update the DNN architecture to build acoustic models, such as The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020) Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing for speech recognition [7] . For TDNN, increasing the number of layers allows the network to capture features for a longer period of time; usually it is desirable to deepen the number of network layers of TDNN to achieve better results. However, previous experiments have found that the deeper the network is, the more often the problem of degradation is, so that the increase in the depth of the neural network will result in a decrease in accuracy. Therefore, another TDNN network architecture [8] is proposed. The Matrix Factorization training of the network can make the network training more stable, in order to achieve better speech recognition performance.",
"cite_spans": [
{
"start": 495,
"end": 498,
"text": "[7]",
"ref_id": "BIBREF7"
},
{
"start": 968,
"end": 971,
"text": "[8]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Traditional Discriminative Training requires cross-entropy training to obtain a lattice, which must take extra time. Therefore, the extended framework of CTC is proposed, Lattice-free maximum mutual information [9] . The principle is the same as the method of MMI, the formula is as formula 1 ",
"cite_spans": [
{
"start": 211,
"end": 214,
"text": "[9]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= \u2211 log ( | , ) = \u2211 log ( | , ) ( ) \u2211 ( | \u2032 , ) \u2032 ( \u2032 )",
"eq_num": "(1)"
}
],
"section": "Related Work",
"sec_num": "2."
},
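{
"text": "The following is a minimal numeric sketch of the MMI criterion in formula 1, assuming toy log-likelihoods rather than real lattices; LF-MMI computes the denominator with a phone-level FST rather than the explicit sum shown here, so this is only meant to make the objective concrete. All values, and the convention that column 0 is the reference word sequence, are illustrative assumptions.\n\nimport numpy as np\nfrom scipy.special import logsumexp\n\n# Toy acoustic log-likelihoods log p(O_u | W, lambda) for 3 utterances and 4 candidate word sequences,\n# plus log language-model priors log P(W). Column 0 is taken to be the reference transcription.\nlog_acoustic = np.array([[-100.0, -104.0, -109.0, -112.0],\n                         [-250.0, -251.0, -255.0, -260.0],\n                         [ -80.0,  -85.0,  -83.0,  -90.0]])\nlog_prior = np.log(np.array([0.4, 0.3, 0.2, 0.1]))\n\nlog_joint = log_acoustic + log_prior                         # log p(O_u | W, lambda) P(W)\nf_mmi = np.sum(log_joint[:, 0] - logsumexp(log_joint, axis=1))\nprint(f_mmi)   # the MMI objective of formula 1; training maximizes this quantity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},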
{
"text": "The overall architecture of this system is shown in Figure 1 , which include Pre-processing,",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Proposed system",
"sec_num": "3."
},
{
"text": "Deep Neural Networks Acoustic model, Decoding Graph and Recognition. The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020) Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed system",
"sec_num": "3."
},
{
"text": "In the field of speech recognition, the data augmentation is commonly used to increase the quantity of training data, avoid overfitting and improve robustness of the models. The system uses two types of data augmentation, including speed perturbation [10] and using multicondition training data [11] .In this system, the speed perturbation first generates 3 times the amount of original data, and then this data generates 15 times the amount of original data by Figure 2 . In order to obtain more context information when inputting deep neural network training, the input will be (t-1, t, t+1) three times 40-dimensional high resolution MFCC feature stitching, followed by 100-dimensional ivector features. ",
"cite_spans": [
{
"start": 251,
"end": 255,
"text": "[10]",
"ref_id": "BIBREF10"
},
{
"start": 295,
"end": 299,
"text": "[11]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 462,
"end": 470,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "3.1"
},
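{
"text": "As a rough illustration of speed perturbation, the sketch below resamples a toy waveform by factors of 0.9 and 1.1 with simple linear interpolation, so that three versions of every utterance are produced. This is only a hedged approximation of the idea: the actual system presumably relies on Kaldi's standard sox-based speed perturbation, and the test signal and factors here are illustrative.\n\nimport numpy as np\n\ndef speed_perturb(wave, factor):\n    # Resample the waveform to simulate playback at 'factor' times the original speed\n    # (0.9 gives a longer, slower signal; 1.1 gives a shorter, faster one).\n    old_idx = np.arange(len(wave))\n    new_len = int(round(len(wave) / factor))\n    new_idx = np.linspace(0, len(wave) - 1, new_len)\n    return np.interp(new_idx, old_idx, wave)\n\nsr = 16000\nwave = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)   # 1 second toy signal at 16 kHz\naugmented = [wave, speed_perturb(wave, 0.9), speed_perturb(wave, 1.1)]   # 3x the original data\nprint([round(len(w) / sr, 3) for w in augmented])      # durations: 1.0 s, ~1.111 s, ~0.909 s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "3.1"
},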
{
"text": "The data alignment obtained by the GMM-HMM system is used to establish a decision tree, and the number of leaves corresponds to the output dimension of the deep neural network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TDNNF Architecture",
"sec_num": "3.2.1"
},
{
"text": "Therefore, the architecture output dimension of this chapter is 2776. The TDNNF architecture is shown in Figure 4 . This architecture refers to the WSJ recipe and uses the TDNNF architecture proposed by Povey, Daniel, et al. [9] , which uses a total of 13 layers of TDNNF layer. The first layer will be 100-dimensional ivector Features and three consecutive 40dimensional MFCC features make up a total of 220-dimensional input features. Layers 2-4 are The CNN operation method is shown in Figure 5 , which is characterized by a 40*6 matrix, and consists of 3 consecutive times to form 3* 40*6 three-dimensional input matrix, before doing convolution, first zero-padding the height to become a 3*42*6 matrix, using 48 3*3*6 size filters for convolution, the output is the first layer The output of the convolutional layer, if there is subsampling, will only reduce the dimension of the height.",
"cite_spans": [
{
"start": 225,
"end": 228,
"text": "[9]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 105,
"end": 113,
"text": "Figure 4",
"ref_id": "FIGREF7"
},
{
"start": 489,
"end": 497,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "TDNNF Architecture",
"sec_num": "3.2.1"
},
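{
"text": "The first convolutional layer described above can be reproduced with a short PyTorch sketch, under the assumption that the 3*40*6 input is laid out as (batch, channels=6, time=3, height=40) and that only the height axis is zero-padded; this is a hedged reading of the text, not the exact Kaldi layer configuration.\n\nimport torch\nimport torch.nn as nn\n\n# One spliced input: 6 channels, 3 consecutive time steps, 40 filterbank bins.\nx = torch.randn(1, 6, 3, 40)\n\n# 48 filters of size 3 (time) x 3 (height) over the 6 input channels; padding=(0, 1) zero-pads\n# only the height (40 -> 42), so the 3 time steps collapse and the height stays 40.\nconv1 = nn.Conv2d(in_channels=6, out_channels=48, kernel_size=(3, 3), padding=(0, 1))\ny = conv1(x)\nprint(y.shape)   # torch.Size([1, 48, 1, 40]), i.e. the 1*40*48 first-layer output described in the text",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TDNNF Architecture",
"sec_num": "3.2.1"
},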
{
"text": "The output dimension of the CNN-TDNNF architecture in this chapter corresponds to the number of decision tree leaves is 2776. The CNN-TDNNF architecture is shown in Figure 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 173,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 5. Convolution Neural Network Operation",
"sec_num": null
},
{
"text": "This architecture refers to the mini_librispeech recipe, uses 6 layers of convolution layer, and the first layer receives three consecutive times 40*6 The dimension of the speech feature matrix is 3*40*6. After the first layer of convolution layer operation, the output is 1*40*48. After that, each layer uses three consecutive input times, and at the 3rd, 5th and 6th layers, the height will be subsampled, and finally the output dimension will be 1*5*128. After the CNN, 9-layer The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020) Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing TDNNF layer is used, where each layer has a dimension of 1024, and the SVD decomposition dimension is 128 dimensions, and each layer of TDNNF layer uses (t-3, t, t+3) three Time is used as the input vector, so each output can get the information of the first 30 frames and the last 30 frames. The language model text dataset of this system has about 540,000 words, including: 29724 unigrams, 22123 bi-grams, and 66118 tri-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5. Convolution Neural Network Operation",
"sec_num": null
},
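{
"text": "For the n-gram statistics quoted above, the sketch below only illustrates how distinct unigrams, bigrams, and trigrams can be counted from tokenized text; the actual language model of this system would have been estimated with a standard n-gram toolkit, which is not shown, and the two example sentences are taken from Table 5 purely for illustration.\n\nfrom collections import Counter\n\ndef ngram_counts(sentences, n):\n    # Count n-grams over whitespace-tokenized sentences (illustrative only).\n    counts = Counter()\n    for sent in sentences:\n        toks = sent.split()\n        for i in range(len(toks) - n + 1):\n            counts[tuple(toks[i:i + n])] += 1\n    return counts\n\ncorpus = ['gua2 beh4 khi3 tai5 pak4', 'gua2 siunn7 beh4 tshut4 khi3 tshit4 tho5']\nfor n in (1, 2, 3):\n    print(n, len(ngram_counts(corpus, n)))   # number of distinct unigrams, bigrams, trigrams",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5. Convolution Neural Network Operation",
"sec_num": null
},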
{
"text": "We use Character Error Rate metrics to evaluate model accuracy in Taiwanese. Character Error Rate (CER), is a common metric of the performance of a speech recognition or machine translation system. The formulas to calculated accuracy as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= + + = + + + +",
"eq_num": "(2)"
}
],
"section": "Evaluation Method",
"sec_num": "4.2"
},
{
"text": "Where S is the number of substitutions, D is the number of deletions, I is the number of insertions, C is the number of correct words, N is the number of words in the reference (N=S+D+C).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.2"
},
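{
"text": "The sketch below is a minimal way to compute CER as in formula 2, using a plain Levenshtein alignment over characters; it is not the scoring script used in the actual experiments, and the example strings are made up.\n\ndef cer(ref, hyp):\n    # Character error rate via edit-distance alignment: (S + D + I) / N, as in formula 2.\n    ref, hyp = list(ref), list(hyp)\n    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]\n    for i in range(len(ref) + 1):\n        d[i][0] = i\n    for j in range(len(hyp) + 1):\n        d[0][j] = j\n    for i in range(1, len(ref) + 1):\n        for j in range(1, len(hyp) + 1):\n            cost = 0 if ref[i - 1] == hyp[j - 1] else 1\n            d[i][j] = min(d[i - 1][j] + 1,         # deletion\n                          d[i][j - 1] + 1,         # insertion\n                          d[i - 1][j - 1] + cost)  # substitution or match\n    return d[len(ref)][len(hyp)] / len(ref)\n\nprint(cer('speech', 'spech'))   # one deletion over six reference characters, about 0.167",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "4.2"
},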
{
"text": "The Taiwanese corpus of this system is a small dataset, and the amount of data is far from the size of other language corpus, so consider whether to use tone to label phones. If tones are considered in the part of the dataset marked with phones, the number of phones will increase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3"
},
{
"text": "This will cause more HMM models, and vice versa, reduce the number of HMM models. And this experiment is to explore whether tones are added to the dataset as the Taiwanese corpus used by this system. The number of HMM models without adding tones is 86, which is much smaller than the HMM with adding tones, which is 299. This experiment uses 1.01 hours of testing data to test the performance of the model. The testing data has a total of 9692 syllables. Table 1 shows the experimental results of whether the corpus adds tones. It can be seen from the experimental results that in the case of the same language model, the traditional GMM-HMM system does not add tones better than tones, but the acoustic model of the deep neural network is the opposite. It can be seen that the architecture of the deep neural network has a better ability to model speech signals, so all subsequent experiments will consider the tone as the pinyin label of the Taiwanese corpus. However, in training speech recognition systems, overfitting problems are often encountered.",
"cite_spans": [],
"ref_spans": [
{
"start": 455,
"end": 462,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3"
},
{
"text": "In order to solve this problem, the easiest way is to add training data. But this is a thorny problem in the case of limited data and manpower, so training data is often obtained through data augmentation. This experiment compares two data augmentation methods, including speed perturbation [10] and using multi-condition training data [11] . The increase in training data can make the model training deeper and more efficient. The second column shows the amount of data after data augmentation. The TDNNF + SP + MULTI system adds SP first and then MULTI, which increases the total amount of data by 15 times. The third and fourth columns represent the character error rate of the testing data. It can be seen that for the same acoustic model system, the character error rate decreases as the data increases. Table 3 . It can be seen that the effect of LSTM-TDNN is worse than the other three, and the best model is the CNN-LSTM-TDNN mixed three deep neural network architecture acoustic models. It can be seen that the effect of the LSTM-TDNN architecture is very poor, possibly because the number of training data in the corpus is too small. Although LSTM is an algorithm for time series training, it requires too many parameters, so a larger amount of training data is needed for training. The disadvantage of CNN is that there is no concept of time series, and the use of too many parameters leads to an increase in training time.",
"cite_spans": [
{
"start": 291,
"end": 295,
"text": "[10]",
"ref_id": "BIBREF10"
},
{
"start": 336,
"end": 340,
"text": "[11]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 809,
"end": 816,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3"
},
{
"text": "But shows that for acoustic models, convolutional neural networks can effectively help feature extraction in small dataset and overall deep neural network learning. Table 4 . The online decoding test text example is shown in Table 5 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 225,
"end": 232,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.3"
},
{
"text": "In this paper, we collected a Taiwanese corpus and use the architecture of the deep learning HMM model to build a Taiwanese speech recognition system. Finally, the CNN-LSTM-TDNN architecture is the best. The language model can be changed according to the domain used to greatly improve the accuracy. In our experiment, the character error rate of the testing data of inside domain is 3.95%. The experimental results show that if the text is inside domain, the Taiwanese speech recognition system does get good results. Finally, in actual application, the laboratory classmates were asked to test the online decoding character error rate of 3.06%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "\u81fa\u7063\u6b77\u6b21\u8a9e\u8a00\u666e\u67e5\u56de\u9867",
"authors": [],
"year": 2018,
"venue": "The 32nd Conference on Computational Linguistics and Speech Processing",
"volume": "13",
"issue": "",
"pages": "247--273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u8449\u9ad8\u83ef. \"\u81fa\u7063\u6b77\u6b21\u8a9e\u8a00\u666e\u67e5\u56de\u9867.\" \u81fa\u7063\u8a9e\u6587\u7814\u7a76 13.2 (2018): 247-273. The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020)",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Association for Computational Linguistics and Chinese Language Processing",
"authors": [
{
"first": "Taiwan",
"middle": [],
"last": "Taipei",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Kaldi speech recognition toolkit",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 workshop on automatic speech recognition and understanding. No. CONF. IEEE Signal Processing Society",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, Daniel, et al. \"The Kaldi speech recognition toolkit.\" IEEE 2011 workshop on automatic speech recognition and understanding. No. CONF. IEEE Signal Processing Society, 2011.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Mermelstein",
"suffix": ""
}
],
"year": 1980,
"venue": "IEEE transactions on acoustics, speech, and signal processing",
"volume": "28",
"issue": "",
"pages": "357--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davis, Steven, and Paul Mermelstein. \"Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences.\" IEEE transactions on acoustics, speech, and signal processing 28.4 (1980): 357-366.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Front-end factor analysis for speaker verification",
"authors": [
{
"first": "Najim",
"middle": [],
"last": "Dehak",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Transactions on Audio, Speech, and Language Processing",
"volume": "19",
"issue": "",
"pages": "788--798",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dehak, Najim, et al. \"Front-end factor analysis for speaker verification.\" IEEE Transactions on Audio, Speech, and Language Processing 19.4 (2010): 788-798.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A time delay neural network architecture for efficient modeling of long temporal contexts",
"authors": [
{
"first": "Vijayaditya",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peddinti, Vijayaditya, Daniel Povey, and Sanjeev Khudanpur. \"A time delay neural network architecture for efficient modeling of long temporal contexts.\" Sixteenth Annual Conference of the International Speech Communication Association. 2015.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Low latency acoustic modeling using temporal convolution and LSTMs",
"authors": [
{
"first": "",
"middle": [],
"last": "Peddinti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vijayaditya",
"suffix": ""
}
],
"year": 2017,
"venue": "IEEE Signal Processing Letters",
"volume": "25",
"issue": "",
"pages": "373--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peddinti, Vijayaditya, et al. \"Low latency acoustic modeling using temporal convolution and LSTMs.\" IEEE Signal Processing Letters 25.3 (2017): 373-377.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Phoneme recognition using time-delay neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 1989,
"venue": "IEEE transactions on acoustics, speech, and signal processing",
"volume": "37",
"issue": "",
"pages": "328--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waibel, Alex, et al. \"Phoneme recognition using time-delay neural networks.\" IEEE transactions on acoustics, speech, and signal processing 37.3 (1989): 328-339.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, Daniel, et al. \"Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks.\" Interspeech. 2018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Purely sequence-trained neural networks for ASR based on latticefree MMI",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Povey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, Daniel, et al. \"Purely sequence-trained neural networks for ASR based on lattice- free MMI.\" Interspeech. 2016.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sixteenth Annual Conference of the International Speech Communication Association",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Ko",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ko, Tom, et al. \"Audio augmentation for speech recognition.\" Sixteenth Annual Conference of the International Speech Communication Association. 2015.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A study on data augmentation of reverberant speech for robust speech recognition",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Ko",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ko, Tom, et al. \"A study on data augmentation of reverberant speech for robust speech recognition.\" 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "\u53f0\u7063\u570b\u8cfd\u53f0\u8a9e(\u95a9\u5357\u8a9e)\u6717\u8b80\u7bc7\u76ee\u6574\u7406",
"authors": [],
"year": null,
"venue": "The 32nd Conference on Computational Linguistics and Speech Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"\u53f0\u7063\u570b\u8cfd\u53f0\u8a9e(\u95a9\u5357\u8a9e)\u6717\u8b80\u7bc7\u76ee\u6574\u7406,\"[Online]. Available: http://ip194097.ntcu.edu.tw/longthok/longthok.asp. The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020)",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Delay Neural Network (TDNN) [5], CNN-TDNN or LSTM-TDNN [6], etc. TDNN is a deep neural network structure. It can include historical and future outputs and model long-term dependent speech signals. It was first proposed to classify phonemes in speech signals. Used",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": ", and the following changes are made: (a) The denominator FST uses training text to generate a 4-gram phone language model, and does not use backoff less than 3-gram, instead of lattice.(b) Use different training techniques to avoid Overfitting, like: L2 regularization on the network output, Cross-entropy regularization and Leaky HMM",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "System Flow diagram",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "adding multi-conditional background noise. Increase the original 10 hours of training data to 150 hours. The 39-dimensional MFCC feature is used in the GMM-HMM system, and with the addition of Cepstral Mean and Variance Normalization, the standard features of mean 0 and Variance 1 are obtained to solve the effects of different microphones and audio channels. The DNN-HMM system uses high resolution MFCC and ivector. The ivector extraction process is: (1) use 40dimensional features and 512 Gaussian training diagonal universal background model to obtain final.dubm (2) use the obtained UBM to train ivector extractors (3) Use ivector extractors to extract the ivector of each training data. Feature parameter of TDNN architecture is shown in",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "TDNN input feature The feature parameters of CNN-TDN and CNN-LSTM-TDNN architecture will first convert 40-dimensional high resolution MFCC into Mel-FilterBanks features through Inverse discrete cosine transform layer. Linear transform a 100-dimensional ivector into a 200-dimensional ivector and concatenate with Mel-FilterBanks features. Finally, convert 240-dimensional input features into 40*6. input feature map, such as Figure 3. The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020) Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "CNN-TDNN and CNN-LSTM-TDNN input feature 3.2 Deep Neural Networks Acoustic model In this chapter, before training the deep neural network acoustic model, the GMM-HMM system must be pre-trained, and the alignment result obtained by the GMM-HMM system should be used as the training target of the deep neural network acoustic model. This system models HMM at the phone level. Each phone HMM model has 3 states. In the Taiwanese Pinyin system, there are 85 phones including initials and finals. If the tone is considered, the system has 299 HMM models and GMM-HMM model training steps are mono, tri1, tri2, tri3[2]. This research establishes three DNN architectures, including (a) TDNNF, (b) CNN-TDNNF, (c) CNN-LSTM-TDNN.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "(t-1 , t , t+1) three-time input vectors, and layers 6-13 are (t-3 , t ,t+3) three time input vectors, so each frame output can get the information of the first 28 frames and the last 28 frames. The dimension of each layer is 1024, and the SVD decomposition dimension is 128. The internal The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020) Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing architecture of each TDNNF block is shown in Figure 3-10, and the output is divided into chain output and Cross-Entropy output as shown in Figure 4.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF7": {
"text": "TDNN Architecture 3.2.2 CNN-TDNNF Architecture In this section, the TDNN architecture used in section 3.3.1 is added to the CNN architecture.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF8": {
"text": "CNN-TDNNF Architecture 3.2.3 CNN-LSTM-TDNN Architecture CNN-LSTM-TDNN is a deep neural network architecture designed by us. Using CNN can effectively extract feature parameters from a small corpus, and LSTM can model the advantages of long-term sequences to find out this deep neural network architecture. The output dimension of the CNN-LSTM-TDNN architecture in this chapter corresponds to the number of decision tree leaves is 2776. The CNN-LSTM-TDNN architecture is shown in which uses 6 layers of convolution layer, 8 layers of TDNN layer and 2 layers of LSTM layer, among which the convolution layer The architecture parameters are the same as in CNN-TDNNF. The TDNN layer is a general TDNN non-matrix decomposition, and the LSTM cell dimension is 1024. The dimensions of the recurrent and non-recurrent projection layer are all 256, so the input gate, forget gate and output gate input are 1024+256=1280, and then the 1024-dimensional vector is output to the Cell state and Hidden state through the Nonlinear activation function, and the final output is r_t and p_t concatenation. Each TDNN layer has a dimension of 1024 and receives (t-3, t, t+3) three time inputs, so each output can get the information of the first 33 frames and the last 33 frames, and finally output to the chain output and Cross-Entropy output. The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020) Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF9": {
"text": "CNN-LSTM-TDNN Architecture 4. Experimental Results 4.1 Taiwanese Dataset The audio files and the corresponding transcriptions of Taiwanese characters were collected by Taiwan Taiwanese National Reading Competition [12] and recorded by classmates in the laboratory. The sampling frequency is 16kHz, the sampling accuracy is 16bit, and the number of channels is 1 (mono). 11.23 hours, 10439 utterances, 101596 syllables. The experimental part divides the corpus into 10.22 hours of training data and 1.01 hours of testing data. Lexicon grabs each word and corresponding phone from Taiwanese transcription, and adds the Taiwanese vocabulary of the text of the language model to lexicon, a total of 31331 words are obtained.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Compare whether the dataset adds tones",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Model</td><td>With tone(CER%)</td><td>Without tone(CER%)</td></tr><tr><td>Mono</td><td>36.32</td><td>29.94</td></tr><tr><td>Tri1</td><td>27.24</td><td>26.06</td></tr><tr><td>Tri2</td><td>24.39</td><td>23.68</td></tr><tr><td>Tri3</td><td>19.13</td><td>17.33</td></tr><tr><td>TDNNF</td><td>10.21</td><td>12.12</td></tr></table>"
},
"TABREF1": {
"text": "shows the comparison results of adding different data augmentation methods. The first column is a different system.Take the TDNNF architecture as the acoustic model and add SP and multi respectively, where SP stands for speed perturbation and MULTI stands for using multi-condition training data,The 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020) Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language Processing",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"text": "Comparison of data augmentation methods In order to explore the impact of different deep neural network models on acoustic models, four sets of models were set up in this experiment, including TDNNF, CNN-TDNNF, LSTM-TDNN, and CNN-LSTM-TDNN. The recognition results of each deep neural network acoustic model are shown in",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>System</td><td>Duration of data (hours)</td><td>CER (%)</td><td>Error/Total</td></tr><tr><td>TDNNF</td><td>10</td><td>10.21</td><td>991 / 9692</td></tr><tr><td>TDNNF + SP</td><td>30</td><td>8.91</td><td>864 / 9692</td></tr><tr><td>TDNNF + MULTI</td><td>50</td><td>8.14</td><td>789 / 9692</td></tr><tr><td>TDNNF + SP +</td><td>150</td><td>7.94</td><td>770 / 9692</td></tr><tr><td>MULTI</td><td/><td/><td/></tr></table>"
},
"TABREF3": {
"text": "Comparison of different model recognition resultsThe 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020) Taipei, Taiwan, September 24-26, 2020. The Association for Computational Linguistics and Chinese Language ProcessingFinally, a total of 10 people in the laboratory are asked to do an online decoding test. Each person tests 15 sentences. The test text is a daily language in Taiwanese, and the training text of the language model has been added. There are 150 sentences and 1078 syllables in total. The recognition results are shown in",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>Model</td><td colspan=\"2\">CER (%) Error/Total</td><td>ins</td><td>del</td><td>Sub</td></tr><tr><td>TDNNF</td><td>7.94</td><td>770 / 9692</td><td>79</td><td>91</td><td>600</td></tr><tr><td>CNN-TDNNF</td><td>7.68</td><td>744 / 9692</td><td>80</td><td>56</td><td>608</td></tr><tr><td>LSTM-TDNN</td><td>10.20</td><td>989 / 9692</td><td>91</td><td>103</td><td>796</td></tr></table>"
},
"TABREF4": {
"text": "Online decoding test results",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"2\">CER (%)</td><td>Error/Total</td><td>ins</td><td>del</td><td>Sub</td></tr><tr><td>Online decoding test</td><td>3.06</td><td>33 / 1078</td><td>6</td><td>9</td><td>18</td></tr><tr><td colspan=\"3\">Table 5. Online decoding test text</td><td/><td/><td/></tr><tr><td>Testing data number</td><td/><td>Text</td><td/><td/><td/></tr><tr><td>1</td><td/><td colspan=\"2\">gua2 beh4 khi3 tai5 pak4</td><td/><td/></tr><tr><td>2</td><td colspan=\"4\">gua2 siunn7 beh4 tshut4 khi3 tshit4 tho5</td><td/></tr><tr><td>3</td><td colspan=\"4\">gua2 siunn7 beh4 tsiah8 mih8 kiann7</td><td/></tr><tr><td>4</td><td/><td colspan=\"3\">kin1 a2 lit8 thinn1 khi3 be7 bai2</td><td/></tr><tr><td>5</td><td/><td colspan=\"2\">u7 siann2 mih4 ho2 tsiah8 e5</td><td/><td/></tr><tr><td>\u2026</td><td/><td>\u2026</td><td/><td/><td/></tr><tr><td>150</td><td/><td colspan=\"3\">gua2 beh4 khi3 siong2 kho3 ah4</td><td/></tr></table>"
}
}
}
}