|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:34:09.929990Z" |
|
}, |
|
"title": "Improving the Language Model for Low-Resource ASR with Online Text Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Hjortnaes", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Indiana University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Timofey", |
|
"middle": [], |
|
"last": "Arkhangelskiy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Hamburg", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Niko", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Helsinki", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Rie\u00dfler", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Eastern", |
|
"location": { |
|
"country": "Finland" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Tyers", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Indiana University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper, we expand on previous work on automatic speech recognition in a low-resource scenario typical of data collected by field linguists. We train DeepSpeech models on 35 hours of dialectal Komi speech recordings and correct the output using language models constructed from various sources. Previous experiments showed that transfer learning using DeepSpeech can improve the accuracy of a speech recognizer for Komi, though the error rate remained very high. In this paper we present further experiments with language models created using KenLM from text materials available online. These are constructed from two corpora, one containing literary texts, one for social media content, and another combining the two. We then trained the model using each language model to explore the impact of the language model data source on the speech recognition model. Our results show significant improvements of over 25% in character error rate and nearly 20% in word error rate. This offers important methodological insight into how ASR results can be improved under low-resource conditions: transfer learning can be used to compensate the lack of training data in the target language, and online texts are a very useful resource when developing language models in this context.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper, we expand on previous work on automatic speech recognition in a low-resource scenario typical of data collected by field linguists. We train DeepSpeech models on 35 hours of dialectal Komi speech recordings and correct the output using language models constructed from various sources. Previous experiments showed that transfer learning using DeepSpeech can improve the accuracy of a speech recognizer for Komi, though the error rate remained very high. In this paper we present further experiments with language models created using KenLM from text materials available online. These are constructed from two corpora, one containing literary texts, one for social media content, and another combining the two. We then trained the model using each language model to explore the impact of the language model data source on the speech recognition model. Our results show significant improvements of over 25% in character error rate and nearly 20% in word error rate. This offers important methodological insight into how ASR results can be improved under low-resource conditions: transfer learning can be used to compensate the lack of training data in the target language, and online texts are a very useful resource when developing language models in this context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Speech recognition has lots of potential to be a highly useful technology while working with the world's various endangered languages. In recent years numerous studies have been conducted on this topic, and especially the recent work on phoneme recognition is reaching very promising results on endangered language corpora (Wisniewski et al., 2020; Michaud et al., 2019) . Persephone (Adams et al., 2018) and Elpis (Foley et al., 2018) have been the most widely used systems in the language documentation context, but as the field is rapidly evolving, various methods are available. One of them is Mozilla's DeepSpeech (Hannun et al., 2014) . By finding new, more effective ways to use these methods, we can open up their usage to low resource languages. Language documentation is one area in which these methods can be applied. Levow et al. (2017) propose a number of tasks demonstrating the usefulness of natural language processing to language documentation, including speech processing. In this paper we report our latest results using DeepSpeech. In a recent study we investigated the usefulness of automatic speech-recognition (ASR) in a low-resource scenario, which is generalizable for the fieldwork-based documentation of a medium-size endangered language of Russia (Hjortnaes et al., 2020) . We ran various experiments with 35 hours of spoken dialectal Zyrian Komi (Permic < Uralic, henceforth Komi) to optimize the training parameters for DeepSpeech 1 and explored the impact of transfer learning on our corpus. Although the work was a promising start, further research was acutely needed as the reached accuracy 1 https://github.com/mozilla/DeepSpeech was very low. In this study, we continue our previous work by exploring new potential methods to improve our results. Specifically, we are looking at how to improve the language model (LM) in order to reach higher accuracy in the ASR. In our earlier experiments, tuning the language model was able to produce slightly better results, though these were very small improvements. We presume that a domain mismatch may have been involved in our previous low results with the language model despite its relatively large size. The language model we used previously was developed from more formal varieties of written language (literature and Wikipedia), it did not match the domain of the speech data to be recognized, i.e. more informal spoken language recordings. Two obvious differences are the frequent use of discourse particles and code switching to the majority language, Russian, which are atypical of written language. Other differences between written and spoken language are the insertion of dialectal words or word forms and the preference for shorter syntactic units in the latter. Since written language used in social media tends to be informal (Arkhangelskiy, 2019) we hypothesized that a language model based on a social media corpus would result in significantly higher ASR accuracy. In fact, Komi is actively used in social media today and useful corpus data has recently been published by Timofey Arkhangelskiy. The Komi-Zyrian corpora 2 consist of two different sections: a standard written language corpus of 1.76 million words (called the \"Main Corpus\"), and the \"Social Me-", |
|
"cite_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 348, |
|
"text": "(Wisniewski et al., 2020;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 349, |
|
"end": 370, |
|
"text": "Michaud et al., 2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 404, |
|
"text": "(Adams et al., 2018)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 435, |
|
"text": "(Foley et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 619, |
|
"end": 640, |
|
"text": "(Hannun et al., 2014)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 829, |
|
"end": 848, |
|
"text": "Levow et al. (2017)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1275, |
|
"end": 1299, |
|
"text": "(Hjortnaes et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 2818, |
|
"end": 2839, |
|
"text": "(Arkhangelskiy, 2019)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Wiki and Books 1.78M tokens Literary 1.39M tokens Social 1.37M tokens Combined 2.76M tokens Table 1 : Token counts for the corpora used to create the language models. The Wiki and Books corpus also includes the Komi Republic Website and some newspaper articles.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 99, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Speech 35 hours", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "dia Corpus\" of 1.85 million words. Since these corpora are of comparable size and are a closer domain match to our speech corpus than the materials used to build the language model in our previous study, the conditions are promising to test how the larger text model influences the results. Additionally, across these two corpora there are differences in variational sociolinguistic features, which should be taken into account during testing. The Main Corpus contains contemporary on-line press texts. Therefore it matches closely with standard written Komi. The Social Media Corpus, on the other hand, contains posts from the social media platform VKontakte 3 and therefore represents the contemporary language of informal digital communication. One significant advantage in the use of online texts is that they are available for a considerable number of minority languages and can be harvested relatively easily. Our approach for Komi is therefore generalizable to other languages, although we believe that specific conditions have to be met for endangered languages to have online materials available in sufficient quantity and quality. For online language vitality, see, e.g. Kornai (2015; Gibson (2016) . First, internet access is a logical precondition as well as the basic technology for digital use of the written language, especially keyboard layouts for various platforms. Furthermore, the language needs to have a sufficiently large number of speakers and a literary standard vital enough that the speakers are familiar in writing the language (a case of another language, Kildin Saami, which does not have these conditions is described by Rie\u00dfler (2013) ). Online texts also have an accumulative nature, so that the corpus grows incrementally from day to day. Therefore even a relatively limited amount of online presence can, in few decades, result in a substantially large corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1181, |
|
"end": 1194, |
|
"text": "Kornai (2015;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1195, |
|
"end": 1208, |
|
"text": "Gibson (2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1652, |
|
"end": 1666, |
|
"text": "Rie\u00dfler (2013)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Speech 35 hours", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The speech data used is described more thoroughly in a previous study (Hjortnaes et al., 2020) , so we only discuss it here briefly. The corpus itself will be available in the Language Bank of Finland (Blokland et al., 2020) during the spring 2020, and contains 35 hours of aligned transcriptions, primarily from northernmost Komi dialects. The transcription conventions used are close to the written standard. They use Cyrillic script, but include small adaptations to reflect dialectal differences. These adaptations are similar to texts in the recent Komi dialect dictionary by Beznosikova et al. (2012) . Large portions of these materials are also available, and can be studied, via a community-oriented online portal (Fedina and Lev\u010denko, 2017; Blokland et al., 2016 Blokland et al., 2020 . 4 This language documentation dataset is used to train the DeepSpeech model itself, but a language model is an essential part of the DeepSpeech architecture, as it is used to adjust the model's output.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 94, |
|
"text": "(Hjortnaes et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 224, |
|
"text": "(Blokland et al., 2020)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 581, |
|
"end": 606, |
|
"text": "Beznosikova et al. (2012)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 722, |
|
"end": 749, |
|
"text": "(Fedina and Lev\u010denko, 2017;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 750, |
|
"end": 771, |
|
"text": "Blokland et al., 2016", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 772, |
|
"end": 793, |
|
"text": "Blokland et al., 2020", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 797, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Acquisition", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The language model used in this study is derived entirely from materials that are online. What it comes to endangered languages spoken in Russia, there is a long tradition of related work. Several corpora based on internet data have been published in in Russia in recent years, e.g. Orekhov et al. (2016) and Krylova et al. (2015) , and more recent, similar work has also been conducted in Finland (Jauhiainen et al., 2019) . In the future, it could be a promising avenue to combine all these sources, but for our current work we focus on one set of text corpora published last year (Arkhangelskiy, 2019), see above. The kenlm language model (Heafield, 2011) used by Deep-Speech takes as input a plain-text file with one sentence per line. These were obtained from the annotated corpus files using tsakorpus2kenlm script 5 . Since it is common for social media data to be noisy and contain code switching (Baldwin et al., 2013) , automatic language tagging and some text cleaning were performed when building the corpus. The latter included fixing characters with diacritics typed in one of the popular conventions, e.g. replacing \u043e: with \u00f6 or Latin i with its identically looking Cyrillic counterpart. Therefore, the social media language model was based on somewhat cleaner data than the original social media posts. The conversion included two additional cleaning stages. First, only sentences with less than one-third OOV words (as determined by a rule-based Komi analyzer 6 ) were included, to avoid wrongly tagged Russian sentences. Second, some numerals represented with digits were replaced with text, e.g. 2 was replaced with \u043a\u044b\u043a. All punctuation was removed. The resulting datasets used with kenlm contain 1.39M words in 153K sentences for the Main Corpus and 1.37M words in 231K sentences for the Social Media Corpus. Although these preprocessing steps were conducted the social media data in mind, in principle similar adjustments could possibly be useful also in other contexts where noisy text data is used in ASR.", |
|
"cite_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 304, |
|
"text": "Orekhov et al. (2016)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 330, |
|
"text": "Krylova et al. (2015)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 423, |
|
"text": "(Jauhiainen et al., 2019)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 658, |
|
"text": "(Heafield, 2011)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 905, |
|
"end": 927, |
|
"text": "(Baldwin et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data Acquisition", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "In our previous work (Hjortnaes et al., 2020) , we investigated the benefit of transfer learning and found that the best results were achieved with a learning rate of 0.00001 and dropout of 10% when using transfer learning. Our model, both in the previous work and here, is the DeepSpeech 7 architecture (Hannun et al., 2014; Ardila et al., 2020) . Deep-Speech is a relatively simple five layer neural network with one bi-directional LSTM layer. It takes audio as input and outputs a stream of characters, which are then corrected by the language model to produce the final output. The transfer learning branch 8 allows us to reset the last n layers of the network, crucially adjusting for differences in the alphabet size between the source and target language when using transfer learning. We found that resetting the last 2 layers, which does not include the LSTM layer, was most effective. These hyper-parameters are corroborated in Meyer (2019) . In this study we continue to examine what kinds of further benefits can be gained by improving the language model. For these experiments, we constructed the language models using kenlm (Heafield, 2011) , as described in section 2. We then trained the speech recognition model on the same set of audio data described above, changing only the source of data used for the language model. Finally, we tuned the alpha and beta hyper-parameters of the language model which control how much we weight the LM over the output of the acoustic model and the cost of inserting spaces to separate words respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 45, |
|
"text": "(Hjortnaes et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 304, |
|
"end": 325, |
|
"text": "(Hannun et al., 2014;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 346, |
|
"text": "Ardila et al., 2020)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 937, |
|
"end": 949, |
|
"text": "Meyer (2019)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1137, |
|
"end": 1153, |
|
"text": "(Heafield, 2011)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Size CER (%) WER (%) Table 2 : The best results for each source of data used to construct the language model. The CER and WER do not necessarily come from the same hyper-parameters used to integrate the language model into the speech recognition system (Hjortnaes et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 277, |
|
"text": "(Hjortnaes et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The previous results are presented alongside our newest results in Table 2 and alone in Table 3 . The best word error rate (WER) was achieved with tuned language model parameters using transfer learning (see Table 3 ). However, the best character error rate (CER) when using the Wikipedia corpus was achieved by disabling the LM entirely. The domain appropriate corpora, however, produced language models which significantly improved upon both the Wikipedia LM and disabling the LM altogether with a CER improvement of over 25% and a WER improvement of nearly 20%. What is particularly interesting here is that both the literary language model and social media language model resulted in a very similar level of improvement to the performance. The combined model yielded an even greater improvement, though only by about 2%. This goes against the hypothesis that domain would be the most crucial factor here, and calls for further work on various text types. The hyper-parameters of the language models show many similarities across difference source corpora. In all but the Wikipedia and Book corpus, the best CER was obtained when beta was set to 1, and the best WER in all cases was 8 https://github.com/mozilla/DeepSpeech/tree/ transfer-learning2 with a beta of 1 as well, meaning that the language model favors inserting fewer spaces. When beta becomes large, the predictions tend towards single character words regardless of how long the gold standard is. For all LM corpus sources, as alpha increases, which favors the language model over the output of the acoustic model, the WER goes down, but the CER goes up. This implies that the LM is properly correcting words, but at the cost of other characters.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 95, |
|
"text": "Table 2 and alone in Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 208, |
|
"end": 215, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Language model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It can be observed that many of the remaining errors relate to Russian code-switching within a sentence, and to dialectal forms that do not have corresponding variant in either of the text corpora used. In the following examples the incorrectly recognized words are marked with bold in the Komi sentence, and the Russian parts are marked with italics in both the Komi sentence and Russian translation. The source lines are on top and the system's predictions are below them. Example (1) [CER: 10.0, WER: 55.5] 9 shows an almost correctly recognized sentence, where the main problems are in words that contain dialectal morphology. Here we see that the combined model tries to suggest the comitative case form -\u043a\u00f6\u0434 from literary Komi, and in the last wordform another dialectal comitative -\u043a\u0435\u0434 goes entirely unrecognized. In Example (2) [CER: 15.0, WER: 54.5] an individual borrowed Russian verb \u0441\u043f\u043e\u043d\u0441\u0438\u0440\u0443\u0439\u0442\u043d\u044b 'to sponsor' seems to create conditions where the model fails. It is highly unlikely that such borrowed and loosely adapted items would occur in the language model. The same example, however, displays correctly recognized the Russian noun \u0441\u0442\u0440\u0430\u0445\u043e\u0432\u043a\u0430 'insurance'. This illustrates how in this kind of multilingual context drawing an exact line between the languages in contact is very difficult. The Example (3) [CER: 19.0, WER: 128.5] shows how for an entirely Russian sentence, the language model is not able at all to produce the correct output, but tries to create words in standard Komi. Also here the Russian word \u0434\u0435\u043b\u043e 'thing, issue' is transformed into Komi \u0434\u0435\u043b\u00f6, which would be a good approximation of how this word is often pronounced in Komi. However, this shows how finding ways to deal with Russian content is one of the major challenges with In Example (4) [CER: 15.0, WER: 53.8] we see a different issue. Careful listening to the original audio reveals that there truly is a segment like \u043c\u044b\u0439 or \u043c\u044b\u0439\u043a\u0435 (i.e. a pronoun which is not clearly pronounced in the recording), although it is missing from the transcriptions. In this case the model does indeed capture something which the human transcriber didn't. On that note, detecting such mistakes in the original data would generally be a highly useful domain for speech technologies. This example also contains a Russian sequence \u043c\u0430\u043b\u043e \u0442\u043e\u0433\u043e \u0447\u0442\u043e 'not only, but', which the model, as expected, is not able to analyze. Despite the abundance of Russian confounding our results, there is a very clear difference between the accuracy of the speech recognition as a whole for different corpora. Despite being smaller, both the social media corpus and the literary corpus outperformed the larger Wikipedia corpus. This demonstrates the importance of domain in the choice of corpus for constructing the language model. Size has an impact, as can be seen from the improvement yielded by combining the literary and social media corpus, but it is far less than using a corpus of a more similar domain to the audio data. In this case, online data offers that similarity and improves our results drastically.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1318, |
|
"end": 1329, |
|
"text": "[CER: 19.0,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1330, |
|
"end": 1341, |
|
"text": "WER: 128.5]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "As the combined language model was twice as large as the individual models alone, yet offered very little improvement, it remains inconclusive how large the further improvements could be with an even larger language model. We expect that simply increasing the size of the corpus will offer diminishing returns. However, we have demonstrated that creating the language model from available online materials is a very promising and effective way to improve the speech recognition in a low-resource context. By extension, this demonstrates concretely the importance of using quality data of an appropriate domain over simply using as much data as possible. Although the error rates are still relatively high, we are fast approaching a level where the ASR output starts to be sensible and useful for various purposes, primary of which would be to make transcription easier. It is also noteworthy that the current speech dataset contains over 200 different speakers in very varying recording conditions, which is a realistic scenario for a corpus of fieldwork recordings. There is also a large amount of overlapping speech. Despite these challenges we have been able to produce relatively solid results. Therefore our study is a relevant new contribution in the line of work that attempts to eventually combine ASR systems with the fieldwork-based work of documentary linguistics. Further experiments in this direction could include even bigger language models, which would firmly establish the role corpus size plays in the effectiveness of the LM. The National Komi Corpus 10 currently contains more than 60 million tokens. This may sound unusually large for a minority language. However, as this body of texts is based on published literature, including printed books and periodicals, which have been printed in a similar, if not higher, magnitude for several other languages of the ethnic Republics of the Soviet Union and Russia, building corpora of comparable size should be possible for various minority languages of Russia as well as other minority languages in similar situations (e.g. in Western Europe). We are aware that printed books and periodicals in endangered languages are not typical of endangered languages globally. How-beta CER/WER 1 3 5 7 9 0.25 46.1/88.6 46.8/100.0 51.2/100.0 58.5/100.0 69.9/100.0 alpha 0.5 48.9/81.6 47.2/93.6 49.0/100.0 54.6/100.0 64.2/100.0 0.75 53.5/81.8 50.1/85.9 49.5/100.0 53.1/100.0 61.3/100.0 Table 5 : The impact of tuning the language model parameters on Character and Word Error Rates for the social media language model. beta CER/WER 1 3 5 7 9 0.25 44.7/86.9 45.5/100.0 49.7/100.0 56.7/100.0 675/100.0 alpha 0.5 47.1/80.0 45.5/91.1 47.5/100.0 52.6/100.0 61.4/100.0 0.75 51.2/79.8 48.1/83.7 47.5/100.0 50.9/100.0 58.2/100.0 Table 6 : The impact of tuning the language model parameters on Character and Word Error Rates for the combined corpus language model. ever, user-generated online communication, through websites and social media, seem to be becoming more and more available, even in contexts where standard literary materials are lacking.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 2439, |
|
"end": 2446, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2773, |
|
"end": 2780, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future work", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "The experiment here used online corpora as a source of spontaneous colloquial data that resembles the spoken transcriptions more than literary standard texts would. Although we have shown there is still some ambiguity in the kind of data we needed to improve the language model, we can experiment more in this direction in the future. For instance, there are numerous text collections available consisting of transcribed dialectal speech similar to those fieldwork-based recordings our ASR system is analysing. For these text collections there is no corresponding audio available. Potential of combining various legacy datasets systematically into language documentation corpora has been discussed before (Blokland et al., 2019) , but the benefit for speech recognition may have not been previously recognized. Logically, without audio we can't use these texts in training the ASR system itself, but they could be potentially very useful as a new source of an enriched language model, matching our own speech data perfectly. Apart from simply collecting more data, finding a way to address the Russian which exists in the speech data is a potential avenue for improvement, as the current model essentially ignores it. These language models were constructed exclusively using Komi data, so any Russian which does not exactly match a Komi analogue will be a source of error.", |
|
"cite_spans": [ |
|
{ |
|
"start": 705, |
|
"end": 728, |
|
"text": "(Blokland et al., 2019)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion & Future work", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "http://komi-zyrian.web-corpora.net", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://vk.ru", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://videocorpora.ru/ 5 https://bitbucket.org/timarkh/tsakorpus2kenlm/ 6 https://github.com/timarkh/ uniparser-grammar-komi-zyrian 7 https://github.com/mozilla/DeepSpeech", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Individual sentences report the number of incorrect characters for CER, not the percentage of incorrect as in WER.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://komicorpora.ru/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was supported in part by Lilly Endowment, Inc., through its support for the Indiana University Pervasive Technology Institute. Niko Partanen and Michael Rie\u00dfler collaborate within the project Language Documentation meets Language Technology: The Next Step in the Description of Komi, funded by Kone Foundation, Finland. Timofey Arkhangelskiy was supported by the Alexander von Humboldt foundation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluating phonemic transcription of low-resource tonal languages for language documentation", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Adams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Neubig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Cruz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Michaud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adams, O., Cohn, T., Neubig, G., Cruz, H., Bird, S., and Michaud, A. (2018). Evaluating phonemic transcription of low-resource tonal languages for language documen- tation. In Proceedings of LREC 2018.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Common Voice", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ardila", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Branson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Henretty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Kohler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Meyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Morais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Saunders", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Tyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Weber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ardila, R., Branson, M., Davis, K., Henretty, M., Kohler, M., Meyer, J., Morais, R., Saunders, L., Tyers, F. M., and Weber, G. (2020). Common Voice. In Proceedings of the 12th Conference on Language Resources and Eval- uation (LREC 2020).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Corpora of social media in minority Uralic languages", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Arkhangelskiy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Fifth International Workshop on Computational Linguistics for Uralic Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "125--140", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Arkhangelskiy, T. (2019). Corpora of social media in mi- nority Uralic languages. In Proceedings of the Fifth In- ternational Workshop on Computational Linguistics for Uralic Languages, pages 125-140. Tartu.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "How Noisy Social Media Text, How Diffrnt Social Media Sources", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Cook", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mackinlay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wang", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "356--364", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baldwin, T., Cook, P., Lui, M., MacKinlay, A., and Wang, L. (2013). How Noisy Social Media Text, How Diffrnt Social Media Sources. In International Joint Confer- ence on Natural Language Processing, pages 356-364. Nagoya, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Komi s\u00ebrnisikas kyv\u010dyk\u00f6r", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Beznosikova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Ajbabina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Zaboeva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Kosnyreva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beznosikova, L. M., Ajbabina, E. A., Zaboeva, N. K., and Kosnyreva, R. I. (2012). Komi s\u00ebrnisikas kyv\u010dyk\u00f6r. Kola, Syktyvkar.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Komi mediateka", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Blokland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Chuprov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Levchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Fedina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Fedina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rie\u00dfler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blokland, R., Chuprov, V., Levchenko, D., Fedina, M., Fe- dina, M., Partanen, N., and Rie\u00dfler, M. (2016-2020). Komi mediateka.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Using computational approaches to integrate endangered language legacy data into documentation corpora", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Blokland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rie\u00dfler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wilbur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Workshop on the Use of Computational Methods in the Study of Endangered Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blokland, R., Partanen, N., Rie\u00dfler, M., and Wilbur, J. (2019). Using computational approaches to integrate en- dangered language legacy data into documentation cor- pora. In Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 24-30.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Spoken Komi Corpus. The Language Bank of Finland", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Blokland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Fedina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rie\u00dfler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blokland, R., Fedina, M., Partanen, N., and Rie\u00dfler, M. (2020). Spoken Komi Corpus. The Language Bank of Finland.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Iz opyta sozdanija komi mediateki", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Fedina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lev\u010denko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Dmitriy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "\u0116lektronnaja pismennost narodov Rossijskoj Federacii", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "220--227", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fedina, M. S. and Lev\u010denko, Dmitriy, A. (2017). Iz opyta sozdanija komi mediateki. In Marina S. Fedina, editor, \u0116lektronnaja pismennost narodov Rossijskoj Federacii, pages 220-227. Syktyvkar.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Building speech recognition systems for language documentation", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Foley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Arnold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Coto-Solano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Durantin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Ellison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Van Esch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Heath", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Kratochvil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Maxwell-Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Nash", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of Spoken Language Technologies for Under-resourced Languages (SLTU 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "205--209", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Foley, B., Arnold, J. T., Coto-Solano, R., Durantin, G., El- lison, T. M., van Esch, D., Heath, S., Kratochvil, F., Maxwell-Smith, Z., Nash, D., et al. (2018). Building speech recognition systems for language documentation. In Proceedings of Spoken Language Technologies for Under-resourced Languages (SLTU 2018), pages 205- 209.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Assessing digital vitality", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gibson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the LREC 2016 Workshop, CCURL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "46--51", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gibson, M. (2016). Assessing digital vitality. In Proceed- ings of the LREC 2016 Workshop, CCURL, pages 46-51. Portoro\u017e.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Deep Speech", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Hannun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Case", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Casper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Catanzaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Diamos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Elsen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Prenger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Satheesh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sengupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Coates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hannun, A., Case, C., Casper, J., Catanzaro, B., Diamos, G., Elsen, E., Prenger, R., Satheesh, S., Sengupta, S., Coates, A., and Ng, A. Y. (2014). Deep Speech.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Kenlm", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Heafield", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the sixth workshop on statistical machine translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "187--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heafield, K. (2011). Kenlm. In Proceedings of the sixth workshop on statistical machine translation, pages 187- 197.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Towards a speech recognizer for Komi, an endangered and low-resource Uralic language", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Hjortnaes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Partanen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rie\u00dfler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the Sixth International Workshop on Computational Linguistics of Uralic Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hjortnaes, N., Partanen, N., Rie\u00dfler, M., and M. Tyers, F. (2020). Towards a speech recognizer for Komi, an en- dangered and low-resource Uralic language. In Proceed- ings of the Sixth International Workshop on Computa- tional Linguistics of Uralic Languages, pages 31-37. As- sociation for Computational Linguistics, Vienna.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Wanca in Korp", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Jauhiainen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lind\u00e9n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Data and humanities (RDHUM) 2019 Conference: Data, methods and tools", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jauhiainen, H., Jauhiainen, T., and Lind\u00e9n, K. (2019). Wanca in Korp. In Data and humanities (RDHUM) 2019 Conference: Data, methods and tools, page 21.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A new method of language vitality assessment", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kornai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Linguistic and Cultural Diversity in Cyberspace", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "132--138", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kornai, A. (2015). A new method of language vitality assessment. Linguistic and Cultural Diversity in Cy- berspace, pages 132-138.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Languages of Russia", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Krylova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Orekhov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Stepanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zaydelman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Russian Summer School in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "179--185", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Krylova, I., Orekhov, B., Stepanova, E., and Zaydelman, L. (2015). Languages of Russia. In Russian Summer School in Information Retrieval, pages 179-185.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "STREAMLInED challenges: Aligning research interests with shared tasks", |
|
"authors": [ |
|
{ |
|
"first": "G.-A", |
|
"middle": [], |
|
"last": "Levow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Bender", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Littell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Howell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chelliah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Crowgey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Garrette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Good", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Hargus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Inman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Maxwell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tjalve", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "39--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Levow, G.-A., Bender, E. M., Littell, P., Howell, K., Chel- liah, S., Crowgey, J., Garrette, D., Good, J., Hargus, S., Inman, D., Maxwell, M., Tjalve, M., and Xia, F. (2017). STREAMLInED challenges: Aligning research interests with shared tasks. In Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endan- gered Languages, pages 39-47, Honolulu, March. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Multi-task and transfer learning in lowresource speech recognition", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Meyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Meyer, J. (2019). Multi-task and transfer learning in low- resource speech recognition. Ph.D. thesis, University of Arizona.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Phonetic lessons from automatic phonemic transcription", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Michaud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Adams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Cox", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guillaume", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "19th International Congress of Phonetic Sciences (CPhS XIX)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michaud, A., Adams, O., Cox, C., and Guillaume, S. (2019). Phonetic lessons from automatic phonemic tran- scription. In 19th International Congress of Phonetic Sciences (CPhS XIX). Melbourne.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Russian minority languages on the web", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Orekhov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Krylova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Popov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Stepanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zaydelman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Computational Linguistics and Intellectual Technologies: Proceedings of the International Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "498--508", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Orekhov, B., Krylova, I., Popov, I., Stepanova, L., and Za- ydelman, L. (2016). Russian minority languages on the web. In Computational Linguistics and Intellectual Tech- nologies: Proceedings of the International Conference \"Dialogue 2016, pages 498-508.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Towards a digital infrastructure for Kildin Saami", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rie\u00dfler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Sustaining indigenous knowledge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "195--218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rie\u00dfler, M. (2013). Towards a digital infrastructure for Kildin Saami. In Erich Kasten et al., editors, Sustaining indigenous knowledge, pages 195-218. Kulturstiftung Sibirien.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Phonemic transcription of low-resource languages", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Wisniewski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Guillaume", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Michaud", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of Spoken Language Technologies for Under-resourced Languages", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wisniewski, G., Guillaume, S., and Michaud, A. (2020). Phonemic transcription of low-resource languages. In Proceedings of Spoken Language Technologies for Under-resourced Languages (SLTU 2020).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "at the tundra for seven years, built a house (in that time) with my brothers and uncles.'", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "going to sponsor us as long we don't have an insurance'", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "The impact of tuning the language model parameters on Character and Word Error Rates for the Wikipedia dump language model.(Hjortnaes et al., 2020)", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "well, not only did he speak well, maybe he [even] thought in the Komi language\u2026'", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "The impact of tuning the language model parameters on Character and Word Error Rates for the literary corpus language model.", |
|
"content": "<table><tr><td colspan=\"5\">the language documentation data we are working on. The</td></tr><tr><td colspan=\"5\">problems are certainly similar in other highly multilingual</td></tr><tr><td colspan=\"2\">contexts.</td><td/><td/><td/></tr><tr><td>(3)</td><td>\u044d\u0442\u043e \u043e\u0447\u0435\u043d\u044c \u0441\u043b\u043e\u0436\u043d\u043e\u0435</td><td>\u0434\u0435\u043b\u043e</td><td>\u043d\u0435</td><td>\u0432\u0441\u044f\u043a\u043e\u043c\u0443 \u0438\u0434\u0451\u0442</td></tr><tr><td/><td>\u0442\u0430 \u0432\u043e\u0447\u0438\u0441 \u043b\u043e\u043d\u044b</td><td>\u0434\u0435\u043b\u00f6</td><td>\u043d\u0435</td><td>\u0441\u044f \u043a\u043e \u043c\u044b\u0439 \u0438 \u0434\u0435</td></tr><tr><td/><td colspan=\"4\">'This is a very difficult issue, it does not fit every-</td></tr><tr><td/><td>one\u2026'</td><td/><td/><td/></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |