ACL-OCL / Base_JSON /prefixE /json /eacl /2021.eacl-demos.32.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:45:40.545757Z"
},
"title": "ELITR Multilingual Live Subtitling: Demo and Strategy",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Mach\u00e1\u010dek",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sangeet",
"middle": [],
"last": "Sagar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Otakar",
"middle": [],
"last": "Smr\u017e",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jon\u00e1\u0161",
"middle": [],
"last": "Kratochv\u00edl",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Peter",
"middle": [],
"last": "Pol\u00e1k",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Mahmoudi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Rishu",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dario",
"middle": [],
"last": "Franceschini",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chiara",
"middle": [],
"last": "Canton",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ivan",
"middle": [
"Simonini"
],
"last": "Pervoice",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Felix",
"middle": [],
"last": "Schneider",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents an automatic speech translation system aimed at live subtitling of conference presentations. We describe the overall architecture and key processing components. More importantly, we explain our strategy for building a complex system for endusers from numerous individual components, each of which has been tested only in laboratory conditions. The system is a working prototype that is routinely tested in recognizing English, Czech, and German speech and presenting it translated simultaneously into 42 target languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents an automatic speech translation system aimed at live subtitling of conference presentations. We describe the overall architecture and key processing components. More importantly, we explain our strategy for building a complex system for endusers from numerous individual components, each of which has been tested only in laboratory conditions. The system is a working prototype that is routinely tested in recognizing English, Czech, and German speech and presenting it translated simultaneously into 42 target languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the tremendous gains observed recently in automatic speech recognition (ASR) and machine translation (MT) quality, including methods of joint learning of both of the tasks, the goal of a practically usable simultaneous spoken language translation (SLT 1 ) system is getting closer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we introduce the SLT system developed in the EU project ELITR (European Live Translator 2 ) which aims at a distinct setting: real-time speech translation into many target languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the current globalized world, meetings with participants from a very wide spectrum of nations are 1 We use the term SLT to refer primarily to simultaneous systems, although off-line spoken language systems can also fall under the same acronym.",
"cite_spans": [
{
"start": 101,
"end": 102,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "2 http://elitr.eu/ common. Many multinational organizations, public or private, regularly run congresses and conferences where attendees do not have any language in common. Interpretation is a must at such meetings and the cost of interpretation services consumes a considerable portion of the budget. The number of provided languages is then kept as low as possible, even in cases when some of the attendees are not sufficiently fluent in any of them. We primarily focus on the setting of such multinational congresses where one source speech needs to be translated into many target languages. While we are aware of the quality limitations of speech recognition and machine translation, we strongly believe that the technology has reached the level where it is becoming practically usable and related systems confirm that belief, see Section 3 below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Even if the automatic translation of recognized speech is not perfect, it can serve as a valuable supportive material. For instance, a Czech attendee may have a fair knowledge of English and French, but may easily get lost due to pronunciation difficulties to follow, gaps in his or her grammar knowledge, general vocabulary or specific terminology. Following live subtitles in mother tongue while listening to the foreign language could be of great help. Some level of errors in the subtitles is acceptable if the subtitles are sufficiently simultaneous. Our main goal is thus gist interpretation, i.e. live supportive translation of speech into text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Within the ELITR project, we focus on ASR for English, Czech, German, French, Spanish and later Russian and Italian, and targetting the set of 43 languages spoken in member countries of EU-ROSAI, the association of supreme audit institu-tions of the EU and nearby countries. Experimentally, we include also other languages based on available systems among the research partners in our project, e.g. Hindi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "The scientific motivation for our efforts is to find an approach that allows to assemble laboratory system components to a practically usable product and to document the problems on this journey.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Live spoken language translation has been continuously studied for decades, see e.g. Osterholtz et al. (1992) ; F\u00fcgen et al. (2008) ; Bangalore et al. (2012) . Recent systems differ in whether they provide revisions to their previous output (M\u00fcller et al., 2016; Niehues et al., 2016; Dessloch et al., 2018; Arivazhagan et al., 2020) , or whether they only append output tokens (Grissom II et al., 2014; Gu et al., 2017; Arivazhagan et al., 2019; Press and Smith, 2018; . M\u00fcller et al. (2016) were probably the first to allow output revision when they find a better translation. Zenkel et al. (2018) released a simpler setup as an open-source toolkit consisting of a neural speech recognition system, a sentence segmentation system, and an attention-based translation system providing also some pre-trained models for their tasks. (Zenkel et al., 2018) evaluated only the quality of the output translations using BLEU and WER metrics. proposed a new approach with a delay-based heuristic. The model decides to read more input (or wait for it) or write the translation to the output. introduced a simple wait-k heuristic: output is emitted after k words of input. Both works are limited to simultaneous translation, i.e. they start from text and only simulate the speech-like input by processing input word by word.",
"cite_spans": [
{
"start": 85,
"end": 109,
"text": "Osterholtz et al. (1992)",
"ref_id": "BIBREF21"
},
{
"start": 112,
"end": 131,
"text": "F\u00fcgen et al. (2008)",
"ref_id": "BIBREF7"
},
{
"start": 134,
"end": 157,
"text": "Bangalore et al. (2012)",
"ref_id": "BIBREF3"
},
{
"start": 241,
"end": 262,
"text": "(M\u00fcller et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 263,
"end": 284,
"text": "Niehues et al., 2016;",
"ref_id": "BIBREF19"
},
{
"start": 285,
"end": 307,
"text": "Dessloch et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 308,
"end": 333,
"text": "Arivazhagan et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 378,
"end": 403,
"text": "(Grissom II et al., 2014;",
"ref_id": "BIBREF8"
},
{
"start": 404,
"end": 420,
"text": "Gu et al., 2017;",
"ref_id": "BIBREF9"
},
{
"start": 421,
"end": 446,
"text": "Arivazhagan et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 447,
"end": 469,
"text": "Press and Smith, 2018;",
"ref_id": "BIBREF23"
},
{
"start": 472,
"end": 492,
"text": "M\u00fcller et al. (2016)",
"ref_id": "BIBREF19"
},
{
"start": 579,
"end": 599,
"text": "Zenkel et al. (2018)",
"ref_id": "BIBREF28"
},
{
"start": 831,
"end": 852,
"text": "(Zenkel et al., 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Systems",
"sec_num": "3"
},
{
"text": "Arivazhagan et al. (2020) combine industrygrade ASR and MT and allow output revisions by re-translating the source from scratch as it grows to decrease the latency, providing acceptable translation quality at the price of a higher number of text revisions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Systems",
"sec_num": "3"
},
{
"text": "We always strive for the best performance for each considered language pair. With the perpetual com-petition in ASR and MT research, it is not surprising that there is no universally best solution. The interplay of available data, underlying method, the actual implementation as well as its adaptability to the domain of interest requires different choices for different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELITR Flexible Architecture",
"sec_num": "4"
},
{
"text": "Furthermore, the top-performing components are often available only at universities or research labs, as more or less stable research prototypes. Releasing any such system, let alone their combination so that they could be easily deployed by lay users is surely possible, but it would require considerable additional implementation resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ELITR Flexible Architecture",
"sec_num": "4"
},
{
"text": "The ELITR architecture (Franceschini et al., 2020) tackles this integration problem by means of a distributed connection-based client-server application. Research labs provide their components by connecting to a central point (the \"mediator\") which in turn uses these \"workers\" to satisfy users' stream processing requests. A technical benefit is that worker connection is issued from the secured networks of the labs so it usually does not run into firewall issues.",
"cite_spans": [
{
"start": 23,
"end": 50,
"text": "(Franceschini et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ELITR Flexible Architecture",
"sec_num": "4"
},
{
"text": "All our workers, except recent online sequenceto-sequence ASRs, have been described in our IWSLT 2020 shared task submission . We briefly summarize them in following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Components",
"sec_num": "5"
},
{
"text": "All our ASR systems provide online processing with low latency and hypotheses updates, as in KIT Lecture Translator (M\u00fcller et al., 2016) . We use the hybrid ASR models based on Janus from KIT Lecture Translator, for German and English, as well as recent neural sequence-to-sequence ASR models trained on the same data . For Czech ASR, we use a Kaldi hybrid model trained on a Corpus of Czech Parliament Plenary Hearings (Kratochv\u00edl et al., 2019) . Czech sequence-to-sequence ASR is a work in progress.",
"cite_spans": [
{
"start": 116,
"end": 137,
"text": "(M\u00fcller et al., 2016)",
"ref_id": "BIBREF19"
},
{
"start": 421,
"end": 446,
"text": "(Kratochv\u00edl et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ASR Systems in ELITR",
"sec_num": "5.1"
},
{
"text": "We use bilingual NMT models for some high resource and well-studied language pairs e.g. for English-Czech (Popel et al., 2019; Wetesko et al., 2019) . For other targets, we use multi-target models, e.g. an English-centric universal model for 42 languages (Johnson et al., 2017) . The models are mostly Transformers (Vaswani et al., 2017) but we improve their performance in massively multilingual setting by extra depth (Zhang et al., 2020) .",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Popel et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 127,
"end": 148,
"text": "Wetesko et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 255,
"end": 277,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 315,
"end": 337,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 420,
"end": 440,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MT Systems in ELITR",
"sec_num": "5.2"
},
{
"text": "Connecting ASR and MT systems is not straightforward because MT systems assume input in the form of complete sentences. We follow the strategy of Niehues et al. (2016) , first inserting punctuation into the stream of tokens coming from ASR (Tilk and Alum\u00e4e, 2016) , breaking it up at full stops and sending individual sentences to MT, either as unfinished sentence prefixes, or complete sentences. We are using re-translation, as ASR or punctuation updates are received. Currently, the main problem is that punctuation prediction does not have access to the sound any more, so intonation cannot be considered. Another problem is the information structure of translated sentences, where MT systems tend to \"normalize\" word order. The loss of topicalization reduces understandability of the stream of uttered sentences.",
"cite_spans": [
{
"start": 146,
"end": 167,
"text": "Niehues et al. (2016)",
"ref_id": "BIBREF19"
},
{
"start": 240,
"end": 263,
"text": "(Tilk and Alum\u00e4e, 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Interplay of ASR and MT",
"sec_num": "5.3"
},
{
"text": "For the future, we consider three approaches: (1) training MT on sentence chunks, (2) including sound input in punctuation prediction, or (3) end-to-end neural SLT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interplay of ASR and MT",
"sec_num": "5.3"
},
{
"text": "We evaluate our systems in multiple ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "\u2022 The individual components are evaluated in isolation during deployment, and on a comparable test set. compared with baseline by the MT quality. \u2022 English to Czech and German simultaneous translation of non-native speech was evaluated on a shared task at IWSLT 2020 (Ansari et al., 2020). We validated our candidate systems, and submitted the best one as . The results showed that the speech recognition of the non-native speech in the test set was problematic, and resulted to inadequate translations. However, the systems were not yet adapted to non-natives or for the domain. It is a challenge for future work. It can be achieved by speaker adaptation of the ASR from a small sample of the speaker, by multi-lingual ASR, and by collecting non-native speech training data, as AMI corpus. \u2022 We regurarly test our system end-to-end on linguistic seminars in Czech or English. The participants are Czech or English speakers and do not need any assistance with the language, so we can not receive relevant feedback about adequacy and fluency. However, we test our system in end-to-end fashion and face engineering problems and technical issues on all layers from sound acquisition through network connections, worker configuration to subtitle presentation. \u2022 We are currently running a user study with non-German speakers watching German videos with our online subtitles, see Section 7.1. We aim to measure the comprehension loss caused by different subtitling options, latency or flicker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "For comparability across our project partners but also across external research labs, we publicly released a tool for evaluation, SLTev 3 (Ansari et al., 2021) and a test set. 4 The results of our currently best candidates on the testset are in Table 1. It is important to realize that the evaluation for quality, latency and stability on a speech-to-text test set in lab conditions is necessary, but not sufficient for assessing the practical usability of the system. Practical usability has to include the presentation layer (Section 7) and tests in live sessions or rigorously controlled conditions. Figure 1 : A screenshot of subtitle view from a presentation given in Czech (last row), automatically transcribed and translated to English (first row) and then from English into several other languages. The various processing and network delays lead to slightly different timing of each of the languages.",
"cite_spans": [
{
"start": 176,
"end": 177,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 245,
"end": 253,
"text": "Table 1.",
"ref_id": "TABREF1"
},
{
"start": 603,
"end": 611,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6"
},
{
"text": "The last step in an SLT system is the delivery of the translated content to the user. Our goal stops at the textual representation, i.e. we do not include speech synthesis and delivery of the sound, which would bring yet another set of design decisions and open problems, see e.g. Zheng et al. (2020) .",
"cite_spans": [
{
"start": 281,
"end": 300,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Presentation Techniques",
"sec_num": "7"
},
{
"text": "We experiment with two different views for our text output, both implemented as web applications. The \"subtitle view\" is optimized toward minimal use of screen space. Only two lines of text are available which leaves room either for e.g. a streamed video of the session or the slides, or for many languages displayed at once, if the screen is intended for a multi-lingual audience. The \"paragraph view\" provides more textual context to the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Presentation Techniques",
"sec_num": "7"
},
{
"text": "The subtitle view offers a simple interface with a HLS stream of the video or slides and one or more subtitles streams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtitle View",
"sec_num": "7.1"
},
{
"text": "Section 7.1 presents one screenshot of this view, selected from a screencast. Instead of presenting the video, we use the screen space to show seven target languages, in addition to the live transcript of the source Czech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtitle View",
"sec_num": "7.1"
},
{
"text": "We are probably the first to combine retranslation strategy with the presentation in such limited space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtitle View",
"sec_num": "7.1"
},
{
"text": "To limit text flicker as retranslations are arriving, we had to introduce a critical component after the MT output called Sub-titler . The subtitler allows us to choose the level of updates, trading simultaneity for stability. A user study on the impact of this choice on comprehensibility is currently running. We believe that the ideal choice will depend also on the users' knowledge of the source and target languages and their reading speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtitle View",
"sec_num": "7.1"
},
{
"text": "Even if the flicker is avoided, there remains the main drawback of the subtitle view, the limited context. Both ASR and MT suffer from natural errors. Following the output of ASR (subtitles of the speakers' language) is easier, the erroneous hypotheses still somehow resemble the original sound, so the user can recover from recognition errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtitle View",
"sec_num": "7.1"
},
{
"text": "The output of MT causes a substantially bigger challenge for the user because the sentences are mostly rendered as fully fluent but containing unexpected words or information structure. With only two lines of text available, the user does not see sufficient number of words to let the brain \"make up\" or reconstruct the original meaning from pieces. The short-term memory of recently processed text does not seem to be sufficient for this type recovery, while seeing the words in larger context gives the user a better chance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtitle View",
"sec_num": "7.1"
},
{
"text": "We created the paragraph view primarily to improve the chances of recovery from translation errors. The added benefit is a clearer indication of which sentences are finished and which may still Figure 2 : Sample screenshot from the paragraph view of simultaneous translation output on a live discussion of THEaiTRE project. The talk was given in Czech, interpreted into English by a human interpreter, automatically recognized (the leftmost EN column) and translated into 41 languages. Sentence indices correspond to each other across languages in all columns. Sentences in black are \"stable\", no update will arrive. Sentences in dark gray and with yellow index number are tentative, the segmentation (and thus translation) still may change. The last sentence (light gray) is still being uttered and is thus highly unstable.",
"cite_spans": [],
"ref_spans": [
{
"start": 194,
"end": 202,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Paragraph View",
"sec_num": "7.2"
},
{
"text": "change. Without any settings, users can simply decide if they want to read the less stable gray output, or rather wait for the stable segments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paragraph View",
"sec_num": "7.2"
},
{
"text": "The view is illustrated in Figure 2 , with Czech as source and two more languages shown. More than three languages can be presented as well but they generally do not fit. The scrolling of the languages is not fully parallel by our design decision to prefer contiguous columns within each language over tabular synchronous presentation. One important aspect is however synchronized, and that is the stable \"level\" for finalized sentences: the completed text (shown in black) is aligned at the bottom across languages while the unstable hypotheses flicker below the level, varying in their length as needed.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Paragraph View",
"sec_num": "7.2"
},
{
"text": "A drawback of this interface is that all errors such as laughable or obscene words in MT output remain on screen for a long time, needlessly distracting the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paragraph View",
"sec_num": "7.2"
},
{
"text": "We presented a complex system for live subtitling of conference speech into many target languages, composed of research prototype components but still serving in close-to-production setting. New and updated models and other components can be easily plugged in and tested in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "As of now, we are at a good starting point for gradual model improvement and field tests. One of them is very likely to be the META-FORUM 2021 but we are also searching for suitable events with more than one official communication language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "Demonstration videos from past sessions can be found in the blogposts at https://elitr.eu/ blog/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://github.com/ELITR/SLTev 4 https://github.com/ELITR/ elitr-testset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "for Computational Linguistics, Kyiv, Ukraine. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Monotonic infinite lookback attention for simultaneous machine translation",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Chung-Cheng",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Semih",
"middle": [],
"last": "Yavuz",
"suffix": ""
},
{
"first": "Ruoming",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1313--1323",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1126"
]
},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, and Colin Raffel. 2019. Monotonic infinite lookback attention for simulta- neous machine translation. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1313-1323, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Re-translation versus streaming for simultaneous translation",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th International Conference on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "220--227",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwslt-1.27"
]
},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, and George Foster. 2020. Re-translation versus streaming for simultaneous translation. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 220-227, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Real-time incremental speech-tospeech translation of dialogs",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Kumar Rangarajan",
"suffix": ""
},
{
"first": "Prakash",
"middle": [],
"last": "Sridhar",
"suffix": ""
},
{
"first": "Ladan",
"middle": [],
"last": "Kolan",
"suffix": ""
},
{
"first": "Aura",
"middle": [],
"last": "Golipour",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jimenez",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "437--445",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Bangalore, Vivek Kumar Rangarajan Srid- har, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-to- speech translation of dialogs. In Proceedings of the 2012 Conference of the North American Chap- ter of the Association for Computational Linguis- tics: Human Language Technologies, pages 437- 445, Montr\u00e9al, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "ELITR: European live translator",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Mach\u00e1\u010dek",
"suffix": ""
},
{
"first": "Sangeet",
"middle": [],
"last": "Sagar",
"suffix": ""
},
{
"first": "Otakar",
"middle": [],
"last": "Smr\u017e",
"suffix": ""
},
{
"first": "Jon\u00e1\u0161",
"middle": [],
"last": "Kratochv\u00edl",
"suffix": ""
},
{
"first": "Ebrahim",
"middle": [],
"last": "Ansari",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Franceschini",
"suffix": ""
},
{
"first": "Chiara",
"middle": [],
"last": "Canton",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Simonini",
"suffix": ""
},
{
"first": "Thai-Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fccker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "463--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Dominik Mach\u00e1\u010dek, Sangeet Sagar, Otakar Smr\u017e, Jon\u00e1\u0161 Kratochv\u00edl, Ebrahim Ansari, Dario Franceschini, Chiara Canton, Ivan Simonini, Thai-Son Nguyen, Felix Schneider, Sebastian St\u00fccker, Alex Waibel, Barry Haddow, Rico Sen- nrich, and Philip Williams. 2020. ELITR: European live translator. In Proceedings of the 22nd Annual Conference of the European Association for Ma- chine Translation, pages 463-464, Lisboa, Portugal. European Association for Machine Translation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "KIT lecture translator: Multilingual speech translation with one-shot learning",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Dessloch",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Thai-Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ngoc-Quan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Salesky",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Zenkel",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "89--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Dessloch, Thanh-Le Ha, Markus M\u00fcller, Jan Niehues, Thai-Son Nguyen, Ngoc-Quan Pham, Eliz- abeth Salesky, Matthias Sperber, Sebastian St\u00fcker, Thomas Zenkel, and Alexander Waibel. 2018. KIT lecture translator: Multilingual speech translation with one-shot learning. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 89-93, Santa Fe, New Mexico. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Sangeet Sagar, Dominik Mach\u00e1\u010dek, and Otakar Smr\u017e. 2020. Removing European language barriers with innovative machine translation technology",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Franceschini",
"suffix": ""
},
{
"first": "Chiara",
"middle": [],
"last": "Canton",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Simonini",
"suffix": ""
},
{
"first": "Armin",
"middle": [],
"last": "Schweinfurth",
"suffix": ""
},
{
"first": "Adelheid",
"middle": [],
"last": "Glott",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Thai-Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Sangeet",
"middle": [],
"last": "Sagar",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Mach\u00e1\u010dek",
"suffix": ""
},
{
"first": "Otakar",
"middle": [],
"last": "Smr\u017e",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 1st International Workshop on Language Technology Platforms",
"volume": "",
"issue": "",
"pages": "44--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Franceschini, Chiara Canton, Ivan Simonini, Armin Schweinfurth, Adelheid Glott, Sebastian St\u00fcker, Thai-Son Nguyen, Felix Schneider, Thanh- Le Ha, Alex Waibel, Barry Haddow, Philip Williams, Rico Sennrich, Ond\u0159ej Bojar, Sangeet Sagar, Dominik Mach\u00e1\u010dek, and Otakar Smr\u017e. 2020. Removing European language barriers with innova- tive machine translation technology. In Proceed- ings of the 1st International Workshop on Lan- guage Technology Platforms, pages 44-49, Mar- seille, France. European Language Resources Asso- ciation.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Simultaneous translation of lectures and speeches",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "F\u00fcgen",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "Muntsin",
"middle": [],
"last": "Kolss",
"suffix": ""
}
],
"year": 2008,
"venue": "Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian F\u00fcgen, Alex Waibel, and Muntsin Kolss. 2008. Simultaneous translation of lectures and speeches. Springer Netherlands, Machine Transla- tion, MTSN 2008, Springer, Netherland, 21(4).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Don't until the final verb wait: Reinforcement learning for simultaneous machine translation",
"authors": [
{
"first": "Alvin",
"middle": [],
"last": "Grissom",
"suffix": "II"
},
{
"first": "He",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1342--1352",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1140"
]
},
"num": null,
"urls": [],
"raw_text": "Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum\u00e9 III. 2014. Don't until the final verb wait: Reinforcement learning for si- multaneous machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1342- 1352, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to translate in real-time with neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "1053--1062",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Vic- tor O.K. Li. 2017. Learning to translate in real-time with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1053-1062, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Large corpus of czech parliament plenary hearings",
"authors": [
{
"first": "Jon\u00e1\u0161",
"middle": [],
"last": "Kratochv\u00edl",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Pol\u00e1k",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jon\u00e1\u0161 Kratochv\u00edl, Peter Pol\u00e1k, and Ond\u0159ej Bojar. 2019. Large corpus of czech parliament plenary hearings.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework",
"authors": [
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Kaibo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hairong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3025--3036",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1289"
]
},
"num": null,
"urls": [],
"raw_text": "Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, and Haifeng Wang. 2019. STACL: Simultaneous trans- lation with implicit anticipation and controllable la- tency using prefix-to-prefix framework. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3025-3036, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Presenting simultaenous translation in limited space",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Mach\u00e1\u010dek",
"suffix": ""
},
{
"first": "On\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 20th Conference ITAT 2020: Workshop on Automata, Formal and Natural Languages",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominik Mach\u00e1\u010dek and On\u0159ej Bojar. 2020. Presenting simultaenous translation in limited space. In Pro- ceedings of the 20th Conference ITAT 2020: Work- shop on Automata, Formal and Natural Languages (WAFNL 2020). To be published.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "ELITR non-native speech translation at IWSLT 2020",
"authors": [
{
"first": "Dominik",
"middle": [],
"last": "Mach\u00e1\u010dek",
"suffix": ""
},
{
"first": "Jon\u00e1\u0161",
"middle": [],
"last": "Kratochv\u00edl",
"suffix": ""
},
{
"first": "Sangeet",
"middle": [],
"last": "Sagar",
"suffix": ""
},
{
"first": "Mat\u00fa\u0161",
"middle": [],
"last": "\u017dilinec",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Thai-Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Yuekun",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th International Conference on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "200--208",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwslt-1.25"
]
},
"num": null,
"urls": [],
"raw_text": "Dominik Mach\u00e1\u010dek, Jon\u00e1\u0161 Kratochv\u00edl, Sangeet Sagar, Mat\u00fa\u0161\u017dilinec, Ond\u0159ej Bojar, Thai-Son Nguyen, Fe- lix Schneider, Philip Williams, and Yuekun Yao. 2020. ELITR non-native speech translation at IWSLT 2020. In Proceedings of the 17th Interna- tional Conference on Spoken Language Translation, pages 200-208, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Lecture translatorspeech translation framework for simultaneous lecture translation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations",
"volume": "",
"issue": "",
"pages": "82--86",
"other_ids": {
"DOI": [
"10.18653/v1/N16-3017"
]
},
"num": null,
"urls": [],
"raw_text": "St\u00fcker, and Alex Waibel. 2016. Lecture translator - speech translation framework for simultaneous lec- ture translation. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Demonstra- tions, pages 82-86, San Diego, California. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "High performance sequence-to-sequence model for streaming speech recognition",
"authors": [
{
"first": "Thai-Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Ngoc-Quan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Stueker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10022"
]
},
"num": null,
"urls": [],
"raw_text": "Thai-Son Nguyen, Ngoc-Quan Pham, Sebastian Stueker, and Alex Waibel. 2020. High performance sequence-to-sequence model for streaming speech recognition. arXiv preprint arXiv:2003.10022.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dynamic transcription for low-latency speech translation",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Thai Son",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Eunah",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Kilgour",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2016,
"venue": "17th Annual Conference of the International Speech Communication Association, INTER-SPEECH 2016",
"volume": "08",
"issue": "",
"pages": "2513--2517",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2016-154"
]
},
"num": null,
"urls": [],
"raw_text": "Jan Niehues, Thai Son Nguyen, Eunah Cho, Thanh-Le Ha, Kevin Kilgour, Markus M\u00fcller, Matthias Sper- ber, Sebastian St\u00fcker, and Alex Waibel. 2016. Dy- namic transcription for low-latency speech transla- tion. In 17th Annual Conference of the Interna- tional Speech Communication Association, INTER- SPEECH 2016, volume 08-12-September-2016 of Proceedings of the Annual Conference of the Inter- national Speech Communication Association. Ed. : N. Morgan, pages 2513-2517. International Speech and Communication Association, Baixas.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Lowlatency neural speech translation",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Ngoc-Quan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Niehues, Ngoc-Quan Pham, Thanh-Le Ha, Matthias Sperber, and Alex Waibel. 2018. Low- latency neural speech translation. In Interspeech 2018, Hyderabad, India.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Testing generality in janus: a multilingual speech translation system",
"authors": [
{
"first": "L",
"middle": [],
"last": "Osterholtz",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Augustine",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mcnair",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Rogina",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sloboda",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tebelskis",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 1992,
"venue": "IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "1",
"issue": "",
"pages": "209--212",
"other_ids": {
"DOI": [
"10.1109/ICASSP.1992.225935"
]
},
"num": null,
"urls": [],
"raw_text": "L. Osterholtz, C. Augustine, A. McNair, I. Rogina, H. Saito, T. Sloboda, J. Tebelskis, and A. Waibel. 1992. Testing generality in janus: a multi- lingual speech translation system. In [Proceedings] ICASSP-92: 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol- ume 1, pages 209-212 vol.1.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "English-czech systems in wmt19: Documentlevel transformer",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Dominik",
"middle": [],
"last": "Mach\u00e1\u010dek",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Auersperger",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "342--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Popel, Dominik Mach\u00e1\u010dek, Michal Auersperger, Ond\u0159ej Bojar, and Pavel Pecina. 2019. English-czech systems in wmt19: Document- level transformer. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 342-348, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "You may not need attention",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ofir Press and Noah A. Smith. 2018. You may not need attention.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bidirectional recurrent neural network with attention mechanism for punctuation restoration",
"authors": [
{
"first": "Ottokar",
"middle": [],
"last": "Tilk",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ottokar Tilk and Tanel Alum\u00e4e. 2016. Bidirectional recurrent neural network with attention mechanism for punctuation restoration. In Interspeech 2016.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 6000-6010. Curran As- sociates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Samsung and University of Edinburgh's System for the IWSLT 2019",
"authors": [
{
"first": "Joanna",
"middle": [],
"last": "Wetesko",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Chochowski",
"suffix": ""
},
{
"first": "Pawel",
"middle": [],
"last": "Przybysz",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Valerio"
],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2019,
"venue": "IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joanna Wetesko, Marcin Chochowski, Pawel Przy- bysz, Philip Williams, Roman Grundkiewicz, Rico Sennrich, Barry Haddow, Antonio Valerio Miceli Barone, and Alexandra Birch. 2019. Samsung and University of Edinburgh's System for the IWSLT 2019. In IWSLT.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Dutongchuan: Context-aware translation model for simultaneous interpreting",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ruiqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chuanqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "Hea",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Xiong, Ruiqing Zhang, Chuanqiang Zhang, Zhongjun Hea, Hua Wu, and Haifeng Wang. 2019. Dutongchuan: Context-aware translation model for simultaneous interpreting.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Open source toolkit for speech to text translation",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Zenkel",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Sperber",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Ngoc-Quan",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2018,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "",
"issue": "11",
"pages": "125--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Zenkel, Matthias Sperber, Jan Niehues, Markus M\u00fcller, Ngoc-Quan Pham, Sebastian St\u00fcker, and Alex Waibel. 2018. Open source toolkit for speech to text translation. The Prague Bulletin of Mathematical Linguistics, NUMBER 11, p. 125-135.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Improving massively multilingual neural machine translation and zero-shot translation",
"authors": [
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1628--1639",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.148"
]
},
"num": null,
"urls": [],
"raw_text": "Biao Zhang, Philip Williams, Ivan Titov, and Rico Sen- nrich. 2020. Improving massively multilingual neu- ral machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1628- 1639, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Simpler and faster learning of adaptive policies for simultaneous translation",
"authors": [
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1349--1354",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1137"
]
},
"num": null,
"urls": [],
"raw_text": "Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019. Simpler and faster learning of adap- tive policies for simultaneous translation. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1349-1354, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Fluent and low-latency simultaneous speechto-speech translation with self-adaptive training",
"authors": [
{
"first": "Renjie",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Mingbo",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Baigong",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Kaibo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiahong",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "3928--3937",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.349"
]
},
"num": null,
"urls": [],
"raw_text": "Renjie Zheng, Mingbo Ma, Baigong Zheng, Kaibo Liu, Jiahong Yuan, Kenneth Church, and Liang Huang. 2020. Fluent and low-latency simultaneous speech- to-speech translation with self-adaptive training. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 3928-3937, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "An overview of WER, sacreBLEU scores on the ELITR test set domain and the size of gold transcript for reference.",
"num": null
}
}
}
}