{
"paper_id": "L18-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:33:05.938421Z"
},
"title": "Augmenting Librispeech with French Translations: A Multimodal Corpus for Direct Speech Translation Evaluation",
"authors": [
{
"first": "Ali",
"middle": [
"Can"
],
"last": "Kocabiyikoglu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Olivier",
"middle": [],
"last": "Kraif",
"suffix": "",
"affiliation": {
"laboratory": "LIDILEM",
"institution": "UGA",
"location": {
"settlement": "Grenoble",
"country": "France"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent works in spoken language translation (SLT) have attempted to build end-to-end speech-to-text translation without using source language transcription during learning or decoding. However, while large quantities of parallel texts (such as Europarl, OpenSubtitles) are available for training machine translation systems, there are no large (>100h) and open source parallel corpora that include speech in a source language aligned to text in a target language. This paper tries to fill this gap by augmenting an existing (monolingual) corpus: LibriSpeech. This corpus, used for automatic speech recognition, is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. After gathering French e-books corresponding to the English audiobooks from LibriSpeech, we align speech segments at the sentence level with their respective translations and obtain 236h of usable parallel data. This paper presents the details of the processing as well as a manual evaluation conducted on a small subset of the corpus. This evaluation shows that the automatic alignment scores are reasonably correlated with the human judgments of the bilingual alignment quality. We believe that this corpus (which is made available online) is useful for replicable experiments in direct speech translation or more general spoken language translation experiments.",
"pdf_parse": {
"paper_id": "L18-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent works in spoken language translation (SLT) have attempted to build end-to-end speech-to-text translation without using source language transcription during learning or decoding. However, while large quantities of parallel texts (such as Europarl, OpenSubtitles) are available for training machine translation systems, there are no large (>100h) and open source parallel corpora that include speech in a source language aligned to text in a target language. This paper tries to fill this gap by augmenting an existing (monolingual) corpus: LibriSpeech. This corpus, used for automatic speech recognition, is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned. After gathering French e-books corresponding to the English audiobooks from LibriSpeech, we align speech segments at the sentence level with their respective translations and obtain 236h of usable parallel data. This paper presents the details of the processing as well as a manual evaluation conducted on a small subset of the corpus. This evaluation shows that the automatic alignment scores are reasonably correlated with the human judgments of the bilingual alignment quality. We believe that this corpus (which is made available online) is useful for replicable experiments in direct speech translation or more general spoken language translation experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Attention-based encoder-decoder approaches have been very successful in Machine Translation (Bahdanau et al., 2014), and have shown promising results in End-to-End Speech Translation (B\u00e9rard et al., 2016; Weiss et al., 2017) (translation from raw speech, without any intermediate transcription). End-to-End speech translation is also attractive for language documentation, which often uses corpora made of audio recordings aligned with their translation in another language (no transcript in the source language) (Blachon et al., 2016; Adda et al., 2016; Anastasopoulos and Chiang, 2017). However, while large quantities of parallel texts (such as Europarl, OpenSubtitles) are available for training (text) machine translation systems, there are no large (>100h) and open source parallel corpora that include speech in a source language aligned to text in a target language. For End-to-End speech translation, only a few parallel corpora are publicly available. For example, the Fisher and Callhome Spanish-English corpora provide 38 hours of speech transcriptions of telephonic conversations aligned with their translations (Post et al., 2013). However, these corpora are only medium size and contain low-bandwidth recordings. The Microsoft Speech Language Translation (MSLT) corpus also provides speech aligned to translated text. Speech is recorded through Skype for English, German and French (Federmann and Lewis, 2016). But this corpus is again rather small (less than 8h per language). Paper contributions. Our objective is to provide a large corpus for direct speech translation evaluation which is an order of magnitude larger than the existing corpora described above. For this, we propose to enrich an existing (monolingual) corpus based on read audiobooks called LibriSpeech. The approach is straightforward: we align e-books in a foreign language (French) with the English utterances of LibriSpeech. This results in 236h of English speech automatically aligned to French translations at the utterance level 1 . Outline. This paper is organized as follows: after presenting our starting point (Librispeech) in section 2., we describe how we aligned foreign translations to the speech corpus in section 3.. Section 4. describes our evaluation of a subset of the corpus (quality of the automatically obtained alignments). Finally, section 5. concludes this work and gives some perspectives.",
"cite_spans": [
{
"start": 92,
"end": 115,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 183,
"end": 204,
"text": "(B\u00e9rard et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 205,
"end": 223,
"text": "Weiss et al., 2017",
"ref_id": "BIBREF16"
},
{
"start": 514,
"end": 536,
"text": "(Blachon et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 537,
"end": 555,
"text": "Adda et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 556,
"end": 588,
"text": "Anastasopoulos and Chiang, 2017)",
"ref_id": "BIBREF1"
},
{
"start": 757,
"end": 764,
"text": "(>100h)",
"ref_id": null
},
{
"start": 1123,
"end": 1142,
"text": "(Post et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 1392,
"end": 1419,
"text": "(Federmann and Lewis, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Our starting point is the LibriSpeech corpus, used for Automatic Speech Recognition (ASR). It is a large-scale corpus which contains approximately 1000 hours of speech aligned with their transcriptions (Panayotov et al., 2015). The read audiobook recordings derive from a collaborative project: LibriVox. The speech recordings are based on public domain books available on the Gutenberg Project 2 and are distributed with LibriSpeech along with the original recordings. We start from this corpus 3 because it has been widely used 1 Our dataset is available at https://persyval-platform.univ-grenoble-alpes.fr/DS91/detaildataset 2 https://www.gutenberg.org/ 3 Another dataset could have been used: TED Talks -see https://www.ted.com -but we considered it better to start with a read speech corpus for evaluating End-to-End speech translation.",
"cite_spans": [
{
"start": 199,
"end": 223,
"text": "(Panayotov et al., 2015)",
"ref_id": "BIBREF12"
},
{
"start": 541,
"end": 542,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Starting Point: Librispeech Corpus",
"sec_num": "2."
},
{
"text": "in ASR and because we believe it is possible to find the text translations for a large subset of the read audiobooks. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Starting Point: Librispeech Corpus",
"sec_num": "2."
},
{
"text": "The main steps of our process are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "\u2022 Collect e-books in a foreign language corresponding to the English books read in Librispeech (section 3.2.),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "\u2022 Extract the chapters from these foreign books corresponding to the chapters read in Librispeech (section 3.3.),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "\u2022 Perform bilingual text alignment from the comparable chapters (section 3.4.),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "\u2022 Realign the speech signal with the text translations obtained (section 3.5.). These different steps are described in the next subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "The LibriSpeech corpus is composed of 5831 chapters (from 1568 books) aligned with their transcriptions. We used the given metadata to search for e-books in a foreign language (French) corresponding to the English books read in Librispeech. Firstly, we used DBPedia (Auer et al., 2007) in order to (automatically) obtain title translations. Secondly, we used a public domain index of French e-books 4 to find Web links matching the titles we found. Then, we finished this process for the entire LibriSpeech corpus by manually searching for French novels in different public domain resources. Overall, we collected 1818 chapters (from 315 books) in French to be aligned with Librispeech. Some of the public domain resources that we used are: Gutenberg Project 5 , Wikisource 6 , Gallica 7 , Google Books 8 , BEQ 9 , UQAC 10 . Audiobooks available in LibriSpeech are of different literary genres: most of them are novels; however, there are also poems, fables, treatises, plays, religious texts, etc. Since they belong to the public domain, most of the texts are old and not publicly available in a foreign language. Therefore, the novels collected in a foreign language are mostly world classics. As a few of them are ancient texts, some translations are in old French.",
"cite_spans": [
{
"start": 252,
"end": 271,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Foreign Novels",
"sec_num": "3.2."
},
{
"text": "LibriSpeech transcriptions are provided for each chapter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chapters Extraction",
"sec_num": "3.3."
},
{
"text": "As the readers only read for a short period of time 11 , transcriptions may correspond to incomplete chapters. For the same reason, books are not read entirely. Therefore, in order to obtain an alignment at the sentence level, a first step was to decompose the English and French books into chapters. This step was achieved by a semi-automatic process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chapters Extraction",
"sec_num": "3.3."
},
{
"text": "After converting books to text format (both English and French), regular expressions were used to identify chapter transitions. Then, each French chapter was extracted and aligned to its counterpart in English. After manual verification of all chapters, we obtained 1423 usable chapters (from 247 books).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chapters Extraction",
"sec_num": "3.3."
},
{
"text": "The 1423 parallel chapters constitute the comparable corpus from which we extracted bilingual sentences. This was done using an off-the-shelf bilingual sentence aligner called hunAlign (Varga et al., 2007). HunAlign takes as input a comparable (not sentence-aligned) corpus and outputs a sequence of bilingual sentence pairs. It combines (Gale-Church) sentence-length information as well as dictionary-based alignment methods. The initial dictionary available for alignment was the default French-English (40k entries) lexicon created for LF Aligner 12 (a wrapper for hunAlign created by Andras Farkas). We enriched this dictionary by adding entries from other open source bilingual dictionaries. Different dictionaries (woaifayu, apertium, freedict, quick) from a language learning resource were gathered in various formats and adapted to the hunAlign dictionary format 13 . We finally obtained and used a dictionary of 128,000 unique entries. In order to improve the quality of sentence-level alignments, the data had to be pre-processed. For English and",
"cite_spans": [
{
"start": 184,
"end": 204,
"text": "(Varga et al., 2007)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Text Alignment",
"sec_num": "3.4."
},
{
"text": "French, our extracted chapters were cleaned with regular expressions. Then, we used the Python NLTK (Bird, 2006) sentence splitter to detect sentence boundaries in the corpora. Furthermore, the bitexts were stemmed (removing suffixes to reduce data sparsity). Finally, the parallel sentences found were brought back to their initial form with reverse stemming. This last step was done using Google's diff-match-patch library (Fraser, 2012).",
"cite_spans": [
{
"start": 96,
"end": 108,
"text": "(Bird, 2006)",
"ref_id": "BIBREF6"
},
{
"start": 419,
"end": 433,
"text": "(Fraser, 2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual Text Alignment",
"sec_num": "3.4."
},
{
"text": "Oh, I beg your pardon! Oh! je vous demande bien pardon!",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Sentence French Sentence",
"sec_num": null
},
{
"text": "A lane was forthwith opened through the crowd of spectators. Un chemin fut alors ouvert parmi la foule des spectateurs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Sentence French Sentence",
"sec_num": null
},
{
"text": "\"No,\" said Catherine, \"he is not here; I cannot see him anywhere.\" -Non, dit Catherine, il n'est pas ici. Jamais je ne parviens \u00e0 le rencontrer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English Sentence French Sentence",
"sec_num": null
},
{
"text": "In order to associate parallel sentences to speech signal transcriptions, realignment of the speech segments of LibriSpeech was necessary. This realignment is a two-step process: first, we force-aligned the Librispeech English transcripts to match the English sentences obtained in the previous stage; secondly, we resegmented the speech signal according to the new sentence splits. For the first step, we used mweralign, a tool for realigning texts in the same language but with a different sentence tokenization (Matusov et al., 2005). We applied mweralign to realign our speech transcriptions in English to the English sentences of our bilingual corpus obtained in section 3.4.. The outcome of this first step is a new sentence segmentation for our English transcriptions, which are now correctly aligned to our French translations. The second step was to resegment the speech signals to match them to the new sentence segmentation. We did that by:",
"cite_spans": [
{
"start": 499,
"end": 521,
"text": "(Matusov et al., 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Realigning Speech Signal with Text Translations",
"sec_num": "3.5."
},
{
"text": "\u2022 creating a large wav file by concatenating the speech segments of each chapter,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Realigning Speech Signal with Text Translations",
"sec_num": "3.5."
},
{
"text": "\u2022 re-aligning the large speech wav signal to the transcripts using the gentle 14 toolkit, an off-the-shelf English forced aligner based on the Kaldi ASR toolkit (Povey et al., 2011),",
"cite_spans": [
{
"start": 153,
"end": 173,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Realigning Speech Signal with Text Translations",
"sec_num": "3.5."
},
{
"text": "\u2022 re-segmenting speech according to the desired sentence split. Now that we have obtained a multimodal alignment between (English) speech signals and (French) translations, we want to evaluate its quality. At this point, the only score available is the confidence score given by hunAlign for aligned sentences. One goal of this human evaluation, which can only be made on a corpus subset, is to see whether the hunAlign score correlates well with human judgments. 50 sentences from 4 different chapters were chosen for evaluation. These chapters were chosen according to their average alignment scores (from hunAlign). We chose two chapters that were near the mean of the overall alignment scores (hypothesized medium-quality alignments), one chapter which was above the mean score (hypothesized good-quality alignment) and a final chapter below the mean score (hypothesized bad-quality alignment). These sentences were evaluated by three annotators. We established a scale from 1 to 3 to judge the matching quality between English speech and English transcriptions. This 3-step scale is precise enough because few errors were found in the speech alignments. We established a scale from 1 to 5 to judge the quality of the bilingual text alignments. Overall, 200 sentences were evaluated (on both scales) by 3 annotators. We give, as examples, sentences for each mark (1-5) of the human evaluation of bilingual alignments. Two different dimensions are evaluated at the same time: the accuracy of the alignment (an alignment can be wrong, partial or correct) and the fact that translational equivalence is compositional and may be isolated from the current context. Table 4. reports our human evaluations for the 4 chapters. The first thing that we can notice is that the alignment quality is higher for chapters with higher confidence scores. The first evaluation (speech alignment; scale 1-3) shows an average score of 2.89/3, which confirms that our resegmentation of the speech signals worked correctly. The second evaluation (bilingual alignment; scale 1-5) shows an average score of 3.84/5. Some sentences were found incorrectly aligned but overall, the alignment quality can be considered correct. The main reason why the average alignment score varies between chapters is the compositionality of the translations. Also, the dictionary that we used for bilingual alignments is inadequate for old texts and results in lower overall confidence scores. We also computed automatic correspondence scores obtained with a cross-language textual similarity detection method between transcriptions and their translations (Ferrero et al., 2016). Our idea was to add another automatic score in addition to the hunAlign score. We computed the correlation between human evaluation scores and hunAlign scores and obtained a correlation of 0.41. The same correlation was obtained between human evaluation scores and those obtained automatically with the method of (Ferrero et al., 2016). This shows that automatic alignment scores are reasonably correlated with human judgments and could be used to extract a subset of the best alignments by ranking them according to the hunAlign score, for instance.",
"cite_spans": [
{
"start": 2623,
"end": 2645,
"text": "(Ferrero et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 2953,
"end": 2975,
"text": "(Ferrero et al., 2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 1668,
"end": 1675,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Realigning Speech Signal with Text Translations",
"sec_num": "3.5."
},
{
"text": "We have presented a large corpus (236h) which is an augmentation of Librispeech in order to provide a bilingual speech-text corpus for direct (end-to-end) speech translation experiments. The methodology described here could be used to add languages other than French (German, Spanish, etc.) to our augmented Librispeech. The current corpus contains several ancient texts, so it would also be interesting to extend it to other kinds of corpora: different speaking styles (not only read speech), more contemporary texts, etc. For direct speech translation experiments, preliminary experiments have been done recently and will be presented at the ICASSP 2018 conference (B\u00e9rard et al., 2018). Our online repository 15 provides a data split for speech translation experiments, and results show that it is possible to train compact and efficient end-to-end speech translation models in this setup, but the dataset is challenging (BLEU score around 15 for the direct speech translation task -more details in (B\u00e9rard et al., 2018)).",
"cite_spans": [
{
"start": 676,
"end": 697,
"text": "(B\u00e9rard et al., 2018)",
"ref_id": "BIBREF5"
},
{
"start": 1007,
"end": 1028,
"text": "(B\u00e9rard et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "https://www.noslivres.net/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.gutenberg.org/ 6 http://www.wikisource.org/ 7 http://gallica.bnf.fr/ 8 http://books.google.com 9 http://beq.ebooksgratuits.com 10 http://www.uqac.ca/ 11 One goal of Librispeech was to have as many speakers as possible 12 https://sourceforge.net/projects/aligner/ 13 https://polyglotte.tuxfamily.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "see https://persyval-platform.univ-grenoble-alpes.fr/DS91/detaildataset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Breaking the unwritten language barrier: The Bulb project",
"authors": [
{
"first": "G",
"middle": [],
"last": "Adda",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "St\u00fccker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Adda-Decker",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Ambouroue",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Blachon",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Bonneau-Maynard",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Godard",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Hamlaoui",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Idiatov",
"suffix": ""
},
{
"first": "G.-N",
"middle": [],
"last": "Kouarata",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lamel",
"suffix": ""
},
{
"first": "E.-M",
"middle": [],
"last": "Makasso",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rialland",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Van De Velde",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yvon",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Zerbian",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of SLTU (Spoken Language Technologies for Under-Resourced Languages)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adda, G., St\u00fccker, S., Adda-Decker, M., Ambouroue, O., Besacier, L., Blachon, D., Bonneau-Maynard, H., Go- dard, P., Hamlaoui, F., Idiatov, D., Kouarata, G.-N., Lamel, L., Makasso, E.-M., Rialland, A., Van de Velde, M., Yvon, F., and Zerbian, S. (2016). Breaking the un- written language barrier: The Bulb project. In Proceed- ings of SLTU (Spoken Language Technologies for Under- Resourced Languages), Yogyakarta, Indonesia.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A case study on using speech-to-translation alignments for language documentation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.04372"
]
},
"num": null,
"urls": [],
"raw_text": "Anastasopoulos, A. and Chiang, D. (2017). A case study on using speech-to-translation alignments for language documentation. arXiv preprint arXiv:1702.04372.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dbpedia: A nucleus for a web of open data. The semantic web",
"authors": [
{
"first": "S",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "722--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Auer, S., Bizer, C., Kobilarov, G., Lehmann, J., Cyganiak, R., and Ives, Z. (2007). Dbpedia: A nucleus for a web of open data. The semantic web, pages 722-735.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural ma- chine translation by jointly learning to align and trans- late. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Listen and translate: A proof of concept for endto-end speech-to-text translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Pietquin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
}
],
"year": 2016,
"venue": "NIPS workshop on End-to-end Learning for Speech and Audio Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B\u00e9rard, A., Pietquin, O., Servan, C., and Besacier, L. (2016). Listen and translate: A proof of concept for end- to-end speech-to-text translation. In NIPS workshop on End-to-end Learning for Speech and Audio Processing.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "End-to-end automatic speech translation of audiobooks",
"authors": [
{
"first": "A",
"middle": [],
"last": "B\u00e9rard",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "A",
"middle": [
"C"
],
"last": "Kocabiyikoglu",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Pietquin",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B\u00e9rard, A., Besacier, L., Kocabiyikoglu, A. C., and Pietquin, O. (2018). End-to-end automatic speech trans- lation of audiobooks. In Accepted to Acoustics, Speech and Signal Processing (ICASSP), 2018 IEEE Interna- tional Conference on Acoustics, Speech and Signal Pro- cessing. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Nltk: the natural language toolkit",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Interactive presentation sessions",
"volume": "",
"issue": "",
"pages": "69--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bird, S. (2006). Nltk: the natural language toolkit. In Pro- ceedings of the COLING/ACL on Interactive presenta- tion sessions, pages 69-72. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parallel speech collection for under-resourced language studies using the LIG-Aikuma mobile device app",
"authors": [
{
"first": "D",
"middle": [],
"last": "Blachon",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gauthier",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "G.-N",
"middle": [],
"last": "Kouarata",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Adda-Decker",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rialland",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of SLTU (Spoken Language Technologies for Under-Resourced Languages)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blachon, D., Gauthier, E., Besacier, L., Kouarata, G.-N., Adda-Decker, M., and Rialland, A. (2016). Parallel speech collection for under-resourced language studies using the LIG-Aikuma mobile device app. In Proceed- ings of SLTU (Spoken Language Technologies for Under- Resourced Languages), Yogyakarta, Indonesia, May.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Microsoft speech language translation (mslt) corpus: The iwslt 2016 release for english",
"authors": [
{
"first": "C",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "W",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federmann, C. and Lewis, W. D. (2016). Microsoft speech language translation (mslt) corpus: The iwslt 2016 re- lease for english, french and german.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A multilingual, multi-style and multi-granularity dataset for cross-language textual similarity detection",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ferrero",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Agnes",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Schwab",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ferrero, J., Agnes, F., Besacier, L., and Schwab, D. (2016). A multilingual, multi-style and multi-granularity dataset for cross-language textual similarity detection.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "google-diff-match-patch-diff, match and patch libraries for plain text",
"authors": [
{
"first": "N",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fraser, N. (2012). google-diff-match-patch-diff, match and patch libraries for plain text.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Evaluating machine translation output with automatic sentence segmentation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matusov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Leusch",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2005,
"venue": "International Workshop on Spoken Language Translation (IWSLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matusov, E., Leusch, G., Bender, O., and Ney, H. (2005). Evaluating machine translation output with automatic sentence segmentation. In International Workshop on Spoken Language Translation (IWSLT) 2005.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Librispeech: an asr corpus based on public domain audio books",
"authors": [
{
"first": "V",
"middle": [],
"last": "Panayotov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2015,
"venue": "Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "5206--5210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. (2015). LibriSpeech: an ASR corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 5206-5210. IEEE.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improved speech-to-text translation with the Fisher and Callhome Spanish-English speech translation corpus",
"authors": [
{
"first": "M",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Post, M., Kumar, G., Lopez, A., Karakos, D., Callison-Burch, C., and Khudanpur, S. (2013). Improved speech-to-text translation with the Fisher and Callhome Spanish-English speech translation corpus.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The kaldi speech recognition toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schwarz",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 workshop on automatic speech recognition and understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., et al. (2011). The Kaldi speech recognition toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding, number EPFL-CONF-192584. IEEE Signal Processing Society.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parallel corpora for medium density languages",
"authors": [
{
"first": "D",
"middle": [],
"last": "Varga",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hal\u00e1csy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kornai",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Nagy",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "N\u00e9meth",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Tr\u00f3n",
"suffix": ""
}
],
"year": 2007,
"venue": "Amsterdam Studies in the Theory and History of Linguistic Science Series",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Varga, D., Hal\u00e1csy, P., Kornai, A., Nagy, V., N\u00e9meth, L., and Tr\u00f3n, V. (2007). Parallel corpora for medium density languages. Amsterdam Studies in the Theory and History of Linguistic Science Series 4, 292:247.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sequence-to-sequence models can directly transcribe foreign speech",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Weiss",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.08581"
]
},
"num": null,
"urls": [],
"raw_text": "Weiss, R. J., Chorowski, J., Jaitly, N., Wu, Y., and Chen, Z. (2017). Sequence-to-sequence models can directly transcribe foreign speech. arXiv preprint arXiv:1703.08581.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table><tr><td>Recordings are segmented and put into different subsets of the corpus according to their quality (better quality speech segments are put in the clean part). Note that in order to obtain a balanced corpus with a large number of speakers, each speaker only read a small portion of a book (8-10 minutes for dev and test, 25-30 minutes</td></tr></table>",
"num": null,
"html": null,
"text": "Details on LibriSpeech corpus. Table 1 gives details on LibriSpeech as well as the data split.",
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td>: Examples of parallel sentences obtained from comparable corpora made up of aligned book chapters</td></tr><tr><td>Table 2 shows examples of 3 bilingual sentences obtained from 3 different chapters.</td></tr></table>",
"num": null,
"html": null,
"text": "",
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>Chapters</td><td>Books</td><td>Duration (h)</td><td>Total Segments</td></tr><tr><td>1408</td><td>247</td><td>~236h</td><td>131395</td></tr></table>",
"num": null,
"html": null,
"text": "For each sentence pair, we also added the En-Fr machine translation output of our English transcripts (Google Translate). We thus have 2 French translations in the end (a correct one from automatic alignment; a noisy one from MT).",
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>: Statistics of the final multimodal and bilingual corpus obtained (English speech aligned to French text)</td></tr><tr><td>4. Human Evaluation of a Corpus Subset</td></tr><tr><td>4.1. Protocol</td></tr></table>",
"num": null,
"html": null,
"text": "",
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>: Results of human evaluation by 3 annotators. Cohen's kappa (weighted) for inter-annotator agreement for textual alignment is 0.76</td></tr><tr><td>French: Mais il para\u00eet que tu pr\u00e9f\u00e8res \u00eatre courtis\u00e9e avec l'arc et la hache, plut\u00f4t qu'avec des phrases polies et avec la langue de la courtoisie.</td></tr><tr><td>\u2022 3. Partial alignment with compositional translation and additional or missing information</td></tr><tr><td>English: SO AT LAST BEGAN THE EVENING PAPER AT LA FORCE</td></tr><tr><td>French: C'est ainsi qu'enfin d\u00e9buta le journal du soir \u00e0 la Force, le jour o\u00f9 la pauvre Lucie avait vu danser la carmagnole.</td></tr><tr><td>\u2022 4. Correct alignment with compositional translation and few additional or missing information</td></tr><tr><td>English: THE NIGHT WAS DARK AND A COLD WIND BLEW</td></tr><tr><td>French: La nuit \u00e9tait sombre; le vent \u00e2pre et froid chassait devant lui avec rage les nuages rapides.</td></tr><tr><td>\u2022 5. Correct alignment and fully compositional translation</td></tr><tr><td>English: WHAT IS A CAUCUS RACE</td></tr><tr><td>French: Qu'est-ce qu'une course cocasse?</td></tr><tr><td>4.2. Results</td></tr></table>",
"num": null,
"html": null,
"text": "",
"type_str": "table"
}
}
}
}