{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:28.926462Z"
},
"title": "Punctuation Restoration in Spanish Customer Support Transcripts using Transfer Learning",
"authors": [
{
"first": "Xiliang",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Shayna",
"middle": [],
"last": "Gardiner",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "David",
"middle": [],
"last": "Rossouw",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Tere",
"middle": [],
"last": "Rold\u00e1n",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Simon",
"middle": [],
"last": "Corston-Oliver",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic Speech Recognition (ASR) systems typically produce unpunctuated transcripts that have poor readability. In addition, building a punctuation restoration system is challenging for low-resource languages, especially for domain-specific applications. In this paper, we propose a Spanish punctuation restoration system designed for a real-time customer support transcription service. To address the data sparsity of Spanish transcripts in the customer support domain, we introduce two transferlearning-based strategies: 1) domain adaptation using out-of-domain Spanish text data; 2) crosslingual transfer learning leveraging in-domain English transcript data. Our experiment results show that these strategies improve the accuracy of the Spanish punctuation restoration system.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic Speech Recognition (ASR) systems typically produce unpunctuated transcripts that have poor readability. In addition, building a punctuation restoration system is challenging for low-resource languages, especially for domain-specific applications. In this paper, we propose a Spanish punctuation restoration system designed for a real-time customer support transcription service. To address the data sparsity of Spanish transcripts in the customer support domain, we introduce two transferlearning-based strategies: 1) domain adaptation using out-of-domain Spanish text data; 2) crosslingual transfer learning leveraging in-domain English transcript data. Our experiment results show that these strategies improve the accuracy of the Spanish punctuation restoration system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic Speech Recognition (ASR) systems play an increasingly important role in our daily lives, with a wide range of applications in different domains such as voice assistant, customer support and healthcare. However, ASR systems usually generate an unpunctuated word stream as the output. Unpunctuated speech transcripts are difficult to read and reduce overall comprehension (Jones et al., 2003) . Punctuation restoration is thus an important post-processing task on the output of ASR systems to improve general transcript readability and facilitate human comprehension.",
"cite_spans": [
{
"start": 380,
"end": 400,
"text": "(Jones et al., 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Punctuation restoration for transcripts of Spanish-speaking customer support telephone dialogue is a non-trivial task. First, real-world human conversation transcripts have unique characteristics compared to common written text, e.g., filler words and false starts are common in spoken dialogue. Moreover, further challenges arise when addressing noisy ASR transcripts in a specific domain, as the lexical data distribution can be quite different compared to public Spanish datasets. Examples of Spanish sentences from different sources are shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Written text in Wikipedia: El espa\u00f1ol o castellano es una lengua romance procedente del lat\u00edn hablado, perteneciente a la familia de lenguas indoeuropeas. (Spanish or Castilian is a Romance language derived from spoken Latin, belonging to the Indo-European language family.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Written text in customer support: Mire, quer\u00eda ver si me pod\u00edan ayudar. (Look, I wanted to see if you guys could help me)",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "(Look, I",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Noisy ASR transcript in customer support: Mire, este, es que, que-quer\u00eda ver si me pod\u00edan ayudar. (Look, well, so I, I wanted to see if you could help me)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent advances in transformer-based pretrained models have been proven successful in many NLP tasks across different languages. For Spanish, available pre-trained resources include multilingual models such as multilingual BERT (mBERT) (Devlin et al., 2019) and XLM-RoBERTa (XLM-R) (Conneau et al., 2020) , as well as monolingual models such as BETO (Ca\u00f1ete et al., 2020) . However, large pre-trained models are trained on various written text sources such as Wikipedia and CommonCrawl (Wenzek et al., 2019) , which are very distant from what we are trying to address in noisy ASR transcripts in the customer support domain. While Spanish is not usually considered a low-resource language in many NLP tasks, it is much more challenging to acquire sufficient training data in Spanish for our domain-specific task, since most of the publicly-available Spanish datasets do not come from natural human conversations, and have little coverage in the customer support domain.",
"cite_spans": [
{
"start": 236,
"end": 257,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 282,
"end": 304,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 350,
"end": 371,
"text": "(Ca\u00f1ete et al., 2020)",
"ref_id": "BIBREF1"
},
{
"start": 486,
"end": 507,
"text": "(Wenzek et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addressing the challenge of in-domain data sparsity we make the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We propose a punctuation restoration system dedicated for Spanish based on pre-trained models, and examine the feasibility of various pre-trained models for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We adopt a domain adaptation approach utilizing out-of-domain Spanish text data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We implement a data modification strategy and match in-domain English transcripts with Spanish punctuation usage, and propose a cross-lingual transfer approach using English transcripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "4. We demonstrate that our proposed transfer learning approaches (domain adaptation and cross-lingual transfer) can sufficiently improve the overall performance of Spanish punctuation restoration in our customer support domain, without any model-level modifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Punctuation restoration is the task of inserting appropriate punctuation marks in the appropriate position on the unpunctuated text input. A variety of approaches have been used for punctuation restoration, most of which are built and evaluated on one language: English. The use of classic machine learning models such as n-gram language model (Gravano et al., 2009) and conditional random fields (Lu and Ng, 2010) are common in early studies. More recently, deep neural networks such as Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and transformers (Vaswani et al., 2017) have been adopted in (Tilk and Alum\u00e4e, 2015) and (Courtland et al., 2020) . Punctuation conventions differ between Spanish and English. Namely, in addition to the equivalents of English and Spanish periods, commas, terminating question marks and terminating exclamation marks, we must also account for the inverted question marks (\u00bf) and inverted exclamation marks (\u00a1) used to introduce these respective clauses in Spanish. There has been limited work done in Spanish punctuation restoration and in most cases Spanish is covered as part of the multilingual training. (Li and Lin, 2020) proposed a multilingual LSTM including the support for Spanish. (Gonz\u00e1lez-Docasal et al., 2021) uses a transformerbased model with both lexical and acoustic inputs for Spanish and Basque.",
"cite_spans": [
{
"start": 344,
"end": 366,
"text": "(Gravano et al., 2009)",
"ref_id": "BIBREF8"
},
{
"start": 397,
"end": 414,
"text": "(Lu and Ng, 2010)",
"ref_id": "BIBREF14"
},
{
"start": 570,
"end": 592,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 614,
"end": 637,
"text": "(Tilk and Alum\u00e4e, 2015)",
"ref_id": "BIBREF18"
},
{
"start": 642,
"end": 666,
"text": "(Courtland et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 1160,
"end": 1178,
"text": "(Li and Lin, 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Transfer learning has been widely studied and applied in NLP applications for low-resource languages (Alyafeai et al., 2020) . Domain adaptation and cross-lingual learning both fall under the category of transductive transfer learning, where source and target share the same task but labeled data is only available in source (Ruder et al., 2019) . Data selection is among the data-centric methods used in domain adaptation, which aims to select the best matching data for a new domain (Ramponi and Plank, 2020). (Fu et al., 2021) uses data selection to improve English punctuation restoration with out-of-domain datasets. Recent advances in multilingual language models such as mBERT and XLM-R have shown great potential in crosslingual zero-shot learning, wherein a multilingual model can be trained on the target task in a highresource language, and afterwards applied to the unseen target languages by zero-shot learning (Hedderich et al., 2021) . (Wu and Dredze, 2019) and (Pires et al., 2019) demonstrate the effectiveness of mBERT as a zero-shot cross-lingual transfer model in various NLP tasks, such as classification and natural language inference.",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Alyafeai et al., 2020)",
"ref_id": null
},
{
"start": 325,
"end": 345,
"text": "(Ruder et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 512,
"end": 529,
"text": "(Fu et al., 2021)",
"ref_id": "BIBREF5"
},
{
"start": 924,
"end": 948,
"text": "(Hedderich et al., 2021)",
"ref_id": "BIBREF9"
},
{
"start": 951,
"end": 972,
"text": "(Wu and Dredze, 2019)",
"ref_id": "BIBREF21"
},
{
"start": 977,
"end": 997,
"text": "(Pires et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Pre-trained transformer-based models have been widely adopted for various NLP tasks since the introduction of BERT (Devlin et al., 2019) . Publicly available pre-trained models for Spanish include the multilingual models mBERT and XLM-R and the BERT-like monolingual model BETO. In this work, we evaluate all three pre-trained models in our experiments and compare their performance in both proposed domain adaptation and cross-lingual transfer approaches.",
"cite_spans": [
{
"start": 115,
"end": 136,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "Using pre-trained models as a starting point, we formulate the Spanish punctuation restoration problem as a sequence labeling task, where the model predicts one punctuation class for each input word token. Instead of covering all possible Spanish punctuation marks, we only include nine target punctuation classes that are commonly used and important in terms of improving transcript readability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 OPEN_QUESTION: \u00bf should be added at the start of this word token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 CLOSE_QUESTION: ? should be added at the end of this word token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 FULL_QUESTION: \u00bf and ? should be added at the start and end of this word token respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 OPEN_EXCLAMATION: \u00a1 should be added at the start of this word token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 CLOSE_EXCLAMATION: ! should be added at the end of this word token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 FULL_EXCLAMATION: \u00a1 and ! should be added at the start and end of this word token respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 COMMA: , should be added at the end of this word token. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 PERIOD: . should be added at the end of this word token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
{
"text": "\u2022 NONE: no punctuation should be associated with this word token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
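The nine classes above map deterministically back onto surface punctuation. A minimal sketch of that mapping (the lookup table and function names are ours, for illustration only):

```python
# Sketch: re-attach punctuation to tokens, one predicted class per token.
# The class names follow the list above; everything else is illustrative.

AFFIX = {
    "OPEN_QUESTION": ("¿", ""),
    "CLOSE_QUESTION": ("", "?"),
    "FULL_QUESTION": ("¿", "?"),
    "OPEN_EXCLAMATION": ("¡", ""),
    "CLOSE_EXCLAMATION": ("", "!"),
    "FULL_EXCLAMATION": ("¡", "!"),
    "COMMA": ("", ","),
    "PERIOD": ("", "."),
    "NONE": ("", ""),
}

def restore(tokens, labels):
    """Attach the punctuation implied by each class and rejoin the utterance."""
    out = []
    for tok, lab in zip(tokens, labels):
        prefix, suffix = AFFIX[lab]
        out.append(prefix + tok + suffix)
    return " ".join(out)
```

For example, the tokens "hola c\u00f3mo est\u00e1s hoy" with labels COMMA, OPEN_QUESTION, NONE, CLOSE_QUESTION yield "hola, \u00bfc\u00f3mo est\u00e1s hoy?".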
{
"text": "The input to the Spanish punctuation restoration system is a transcribed utterance emitted by the ASR system. The ASR system outputs an utterance if an endpoint (long pause or speaker change) is detected in the audio. The length of a given utterance can vary, each utterance can contain multiple sentences, meaning that there can be multiple terminating punctuation marks -period, question mark and exclamation mark -in a single utterance. The punctuation restoration model structure is illustrated in Figure 1 . We add a token classification layer on top of the pre-trained models. Raw model prediction results are also post-processed by a set of simple heuristics to mitigate the error caused by unmatched predictions for paired punctuation marks. For instance, a predicted OPEN_QUESTION class will be changed to NONE if there is no matched CLOSE_QUESTION prediction in the same utterance. 2",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 510,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Description",
"sec_num": "3.1"
},
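The unmatched-pair heuristic can be sketched as follows. The text only specifies the OPEN_QUESTION case, so the handling of a dangling closing mark (demoting it to PERIOD) is our assumption:

```python
def fix_unmatched(labels):
    """Post-process per-token predictions so paired marks always match.
    An OPEN_* with no corresponding CLOSE_* in the utterance is demoted to
    NONE (the rule described above); a dangling CLOSE_* is demoted to
    PERIOD, which is our assumption rather than a stated rule."""
    pairs = [("OPEN_QUESTION", "CLOSE_QUESTION"),
             ("OPEN_EXCLAMATION", "CLOSE_EXCLAMATION")]
    fixed = list(labels)
    for opener, closer in pairs:
        if opener in fixed and closer not in fixed:
            fixed = ["NONE" if l == opener else l for l in fixed]
        elif closer in fixed and opener not in fixed:
            fixed = ["PERIOD" if l == closer else l for l in fixed]
    return fixed
```

Matched pairs pass through untouched; only utterances with a one-sided prediction are rewritten.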
{
"text": "It is essential to acquire in-domain manual transcripts that come from real customer support scenarios to build a punctuation restoration model that fits the customer support domain. However, only around 5,000 in-domain transcribed Spanish utterances from call recordings could be obtained at this early product development stage. Addition-Spanish out-of-domain (LDC) examples Ah, qu\u00e9 bueno, yo conozco mucho cubano pero m\u00e1s que todo en Filadelfia. (Ah, how good, I know many Cubans but especially in Philadelphia.) Bueno, mira, eh, \u00bfsus pap\u00e1s, cu\u00e1ntos a\u00f1os llevan casados? (Well, look, uhm, your parents, how long have they been married?)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "Spanish out-of-domain (OpenSubtitle) examples S\u00e9 que lo que estoy pidi\u00e9ndote es dif\u00edcil. (I know that what I'm asking you is hard.) S\u00ed, da un poco de tristeza. (Yes, it makes you a little bit sad.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "Spanish in-domain examples Buenas tardes, \u00bfc\u00f3mo le puedo ayudar? (Good afternoon, how can I help you?) Pues no me funciona y lo he intentado varias veces. (So, it doesn't work and I've tried several times)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "English in-domain examples I don't find this app very helpful, I'm calling to cancel my subscription. Hi, this is Tom, how can I help you today? ally, there are around 200,000 in-domain manually transcribed English utterances from our call center product.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "We supplemented this in-domain Spanish data with the Linguistic Data Consortium (LDC) Fisher Spanish Speech and Fisher Spanish Transcripts corpora (Graff et al., 2010) . These corpora consist of audio files and transcripts for approximately 163 hours of telephone conversations from native Spanish speakers. These recordings are a good match to the acoustic properties of our telephone conversations, but the transcripts, which are mostly social calls with predefined topics, do not match the domain of customer support conversations.",
"cite_spans": [
{
"start": 147,
"end": 167,
"text": "(Graff et al., 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "The Spanish portion of the OpenSubtitle corpus (Lison and Tiedemann, 2016) also contains a variety of human-to-human conversation, albeit from movies rather than from spontaneous conversational speech. Spanish OpenSubtitle offers 179 million sentences from 192,000 subtitle files, and can provide our models with good exposure to exclamation marks, which are not included in the LDC dataset. However, the movie topics are generally distant from our business-specific, customer support domain.",
"cite_spans": [
{
"start": 47,
"end": 74,
"text": "(Lison and Tiedemann, 2016)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "Some examples from both in-domain and outof-domain data sources are illustrated in Table 1 . External out-of-domain datasets usually have various Spanish punctuation marks outside our supported range as described in 3.1. After reviewing the datasets from a linguistic perspective, we first apply a set of conversion rules to those unsupported punctuation marks without affecting the readability and semantic meanings: we delete quotation marks, replace colons and semicolons with commas, and replace ellipses with periods.",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
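The conversion rules amount to a few substitutions; a minimal sketch (the function name is ours):

```python
import re

def normalize_punctuation(text):
    """Map unsupported punctuation onto the supported classes:
    delete quotation marks, turn colons and semicolons into commas,
    and turn ellipses into periods."""
    text = re.sub(r'["«»“”]', "", text)   # delete quotation marks
    text = re.sub(r"[:;]", ",", text)     # colons/semicolons -> commas
    text = re.sub(r"…|\.{3}", ".", text)  # ellipses -> periods
    return text
```

Applying it to, e.g., "Dijo: \u00abhola\u00bb; y luego\u2026" yields "Dijo, hola, y luego.".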
{
"text": "Many machine learning applications have the assumption that training and testing datasets follow the same underlying distribution. But for our target task in the customer support domain, we mostly have to rely on external data such as LDC and Spanish OpenSubtitle during the training process, due to the lack of in-domain Spanish data. This will therefore cause a mismatch between our training and testing data in terms of its distribution, and consequently, performance will drop in our target task. Therefore, to mitigate this distribution mismatch, we apply domain adaptation on external Spanish datasets from two directions: data selection and data augmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "3.3"
},
{
"text": "As described in 3.2, Spanish OpenSubtitle has a total of over 179 million sentences, which is much larger than our other data sources. However, the vast majority of the data in the Spanish OpenSubtile corpus are fundamentally distinct from our target customer support domain, and randomly sampling from out-of-domain datasets could hurt the model performance. Thus, following the procedure in (Fu et al., 2021) , we first train a 4-gram language model using our Spanish in-domain data, and then sample the 100,000 utterances from the OpenSubtitle corpus with lowest perplexity (i.e. the highest language model similarity to the in-domain data). Since the telephone conversation transcripts in the LDC corpora are closer to our target domain and there are only 130,000 utterances in this dataset, we do not perform further data selection on the LDC data for training purposes.",
"cite_spans": [
{
"start": 393,
"end": 410,
"text": "(Fu et al., 2021)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Selection",
"sec_num": "3.3.1"
},
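The selection step can be sketched as follows. The paper trains a 4-gram language model; for a self-contained illustration we substitute a toy unigram model with add-one smoothing, which exhibits the same select-by-lowest-perplexity logic:

```python
import math
from collections import Counter

def train_unigram(in_domain_sents):
    """Build a unigram model (counts, total, vocab size) from in-domain text."""
    counts = Counter(w for s in in_domain_sents for w in s.split())
    return counts, sum(counts.values()), len(counts) + 1  # +1 for unseen words

def perplexity(sent, model):
    counts, total, vocab = model
    words = sent.split() or [""]
    # Add-one smoothing so unseen words get nonzero probability.
    logp = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return math.exp(-logp / len(words))

def select_lowest_perplexity(candidates, model, k):
    """Keep the k out-of-domain sentences the in-domain model finds most familiar."""
    return sorted(candidates, key=lambda s: perplexity(s, model))[:k]
```

Candidates that reuse in-domain vocabulary score lower perplexity and are kept; lexically distant candidates are discarded.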
{
"text": "Most of the data in LDC and OpenSubtitle datasets is segmented into single sentences. However, as described in 3.1, the input to our punctuation restoration system will be composed of larger blocks of utterances rather than single sentences. To illustrate this difference, we investigate how many terminating punctuation marks occur in each input from external datasets and in-domain data, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.3.2"
},
{
"text": "As shown in Figure 2 (a)(b)(c), our in-domain data has a much wider distribution in terms of the number of terminating punctuation marks in a single utterance. However, the majority of samples in both LDC and OpenSubtitle consist of only one sentence each. It is necessary to augment the outof-domain datasets to cover the wider spread of distribution exhibited in our in-domain data, based on the fact that this will affect how many terminating punctuation marks the model tends to predict per input utterance. We therefore apply data augmentation by concatenating sentences in these corpora, in proportion to the spread seen in our in-domain dataset, so that the overall terminating punctuation distribution in out-of-domain datasets matches our in-domain data. As Figure 2 augmented results for the LDC and OpenSubtitle corpora more closely match the distribution of our in-domain Spanish data.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 767,
"end": 775,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.3.2"
},
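The concatenation step can be sketched as follows, assuming the in-domain spread has been summarized as an empirical distribution over sentences-per-utterance (the names and data shapes are ours):

```python
import random

def augment_by_concatenation(sentences, spread, seed=0):
    """Concatenate single-sentence samples into multi-sentence utterances so
    that the number of terminating marks per utterance follows `spread`, an
    empirical [(sentences_per_utterance, weight), ...] distribution
    estimated from in-domain data."""
    rng = random.Random(seed)
    pool = list(sentences)
    rng.shuffle(pool)
    sizes, weights = zip(*spread)
    utterances = []
    while pool:
        n = rng.choices(sizes, weights=weights, k=1)[0]
        utterances.append(" ".join(pool[:n]))
        pool = pool[n:]
    return utterances
```

The total punctuation content is preserved; only the grouping into utterances changes.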
{
"text": "Multilingual language models such as mBERT and XLM-R advanced zero-shot cross-lingual transfer learning for low-resource languages (Hedderich et al., 2021) . Instead of using cross-lingual transfer as zero-shot, we utilize our English in-domain data (described in 3.2) to fine-tune multilingual pretrained models in addition to our available Spanish datasets to improve our Spanish punctuation restoration system. However, punctuation conventions differ between languages; to better leverage cross-lingual transfer learning, we first convert the punctuation usage in the source language to appropriately match the punctuation conventions in the target language. Since this study involves matching English punctuation to Spanish, the task is not insurmountable: most of the punctuation marks and their usages are the same across these two languages. Periods are used to terminate a declarative sentence in both languages, and the usage of commas to separate words or phrases is very similar. Therefore, no modifications are required for these two punctuation marks.",
"cite_spans": [
{
"start": 131,
"end": 155,
"text": "(Hedderich et al., 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "One more significant challenge for this task is the fact that question marks and exclamation marks do work somewhat differently in Spanish writing than in English. Namely, in addition to the terminating role played in both languages by standard question marks (to denote the end of an interrogative sentence) and standard exclamation marks (to denote the end of an exclamatory sentence), Spanish writing conventions also require the addition of an inverted question mark or an inverted exclamation mark, which occur at the beginning of the clause that contains the question or exclamation. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "\u2022 English: Hi, how are you today?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "\u2022 Spanish: Hola, \u00bfc\u00f3mo est\u00e1s hoy?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "For each question mark and exclamation mark in our English training data, we add an open question mark or exclamation mark, respectively, at the start of the word chunk that the terminating question or exclamation mark is in. For example, consider the following English utterance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "\"OK, how can I help you?\" For cross-lingual transfer training, it will be modified to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "\"OK, \u00bfhow can I help you?\" By doing this conversion, the model will learn to predict punctuation as it should occur in Spanish contexts during the fine-tuning phase, even though what it actually sees are English utterances with Spanish punctuation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
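A minimal sketch of this conversion, approximating "word chunk" as a span delimited by commas and terminators (the exact chunking rule is not specified in detail, so this delimiter set is an assumption):

```python
import re

INVERTED = {"?": "¿", "!": "¡"}

def hispanicize(utterance):
    """Prepend the Spanish inverted mark to each chunk that ends in '?' or
    '!'. Chunks are delimited by , . ? ! as a simple approximation of the
    word-chunk rule."""
    chunks = re.split(r"(?<=[,.?!])\s*", utterance)
    out = []
    for chunk in (c for c in chunks if c):
        mark = INVERTED.get(chunk[-1], "")
        out.append(mark + chunk)
    return " ".join(out)
```

This reproduces the example above: "OK, how can I help you?" becomes "OK, \u00bfhow can I help you?".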
{
"text": "To determine the best way to transfer the indomain distribution from English (EN) to Spanish (ES) in the punctuation restoration task, we investigate three fine-tuning strategies for cross-lingual transfer learning:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "1. Fine-tune the pre-trained models in two steps, Spanish first then English. Noted as \"ES->EN\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "2. Fine-tune the pre-trained models in two steps, English first then Spanish. Noted as \"EN->ES\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "3. Fine-tune the pre-trained models in one step, with joint English and Spanish data. Noted as \"Joint EN, ES\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "Diagrams of three fine-tuning strategies are illustrated in Figure 3 . Note that our objective is to build a model for Spanish, but it is still worth experimenting with \"ES->EN\" setting to establish the impact of more in-domain data albeit in a different language.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Cross-lingual Transfer",
"sec_num": "3.4"
},
{
"text": "We evaluate our proposed transfer learning approaches using the datasets described in 3.2. Using the model architecture shown in Figure 1 , we fine-tune pre-trained models using various data combinations and fine-tuning strategies to demonstrate the effectiveness of our proposed approaches. Pre-trained models including both monolingual (BETO) and multilingual (MBERT and XLM-R) are explored and evaluated.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 137,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "4.1"
},
{
"text": "The Spanish punctuation restoration system is intended to operate in real-time so that customersupport agents can review prior information communicated by a customer and to provide the input to product features such as automatically retrieving information to assist the agent. As shown in (Fu et al., 2021) , reducing the number of layers from deep pre-trained models does not significantly impact accuracy for the punctuation restoration task. To reduce the computation time during inference, we take only the first six layers from the pre-trained models as our starting point.",
"cite_spans": [
{
"start": 289,
"end": 306,
"text": "(Fu et al., 2021)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "4.1"
},
{
"text": "To evaluate the model accuracy in our target customer support domain, we split our in-domain Spanish manual transcripts into three parts: the training set (60%), the validation set (10%) and the test set (30%). The Spanish in-domain training set is over-sampled to make the size comparable to the other datasets. The performance of every model is evaluated on the in-domain test set after being finetuned on various combinations of training sources and processes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Setup",
"sec_num": "4.1"
},
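The 60/10/30 split can be sketched as a straightforward shuffle-and-slice (the seed and function name are ours):

```python
import random

def split_dataset(utterances, seed=0):
    """Shuffle, then split 60% / 10% / 30% into train / validation / test."""
    data = list(utterances)
    random.Random(seed).shuffle(data)
    n_train = int(len(data) * 0.6)
    n_val = int(len(data) * 0.1)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])
```

Fixing the seed keeps the split reproducible across fine-tuning runs, so every model variant is evaluated on the same held-out test set.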
{
"text": "We evaluate the F1 score performance before and after the domain adaptation approaches proposed in 3.3. Pre-trained models are fine-tuned using the combinations of LDC and selected OpenSubtitle datasets only, and then evaluated on our in-domain test set. The results are shown in Table 2 . Both data selection and data augmentation improve the overall F1 score performance for all three pre-trained models, which demonstrates the effectiveness of our domain adaptation approaches for the Spanish punctuation restoration task. Among three different models, XLM-R shows the best performance under this setup, and outperforms the monolingual BETO model after domain adaptation. ",
"cite_spans": [],
"ref_spans": [
{
"start": 280,
"end": 287,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Performance with Domain Adaptation",
"sec_num": "4.2"
},
{
"text": "To understand the effect of cross-lingual transfer, we use all the available data sources described in 3.2. We separate the Spanish datasets (LDC, selected OpenSubtitle and Spanish in-domain transcripts) from the English one (English in-domain transcripts), and fine-tune the pre-trained models using three different strategies described in 3.4 (\"ES->EN\", \"EN->ES\" and \"Joint EN, ES\") as shown in Figure 3 . Table 3 shows our results on cross-lingual transfer learning: multilingual models (mBERT and XLM-R) both show performance gain with \"Joint EN, ES\" and \"EN->ES\" training. However, \"ES->EN\" training actually results in lower accuracy than models trained without cross-lingual transfer. As for the comparison with the monolingual model (BETO) which is not feasible for the direct cross-lingual transfer, XLM-R produces similar results as BETO without cross-lingual transfer, but XLM-R outperforms BETO by 1.5% F1 score after joint training with both Spanish and English datasets. mBERT becomes comparable to BETO after cross-lingual transfer as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 405,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 408,
"end": 415,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Performance with Cross-lingual transfer",
"sec_num": "4.3"
},
{
"text": "When analysing the prediction errors, we found that many CLOSE_QUESTION classes are predicted as PERIOD by the model, as shown in Table 4 . This is a common behavior across all three pre-trained models, and is possibly due to the linguistic properties of Spanish. Because Spanish clauses do not require an overt subject noun phrase, and because Spanish has considerable variability in constituent order, it is often the case that there is no structural indication of whether an utterance should be interpreted as a declarative or as a question. Instead, intonation is used to make this distinction. For example, \"hablan espa\u00f1ol\" (\"they speak Spanish\" or \"do they speak Spanish\") becomes a question with rising intonation. Future work in this area might focus on incorporating such acoustic information into punctuation restoration tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Future Work",
"sec_num": "5"
},
{
"text": "For this study, we trained and tested a Spanish punctuation restoration system for the customer support domain based on pre-trained transformer models. To address in-domain data sparsity in Spanish, transfer learning approaches were applied in two directions: domain adaptation and cross-lingual transfer. We explored and fine-tuned three different pre-trained models with our transfer learning approaches for this task; our results demonstrate that the domain adaptation method improves the accuracy of all three pre-trained models. Cross-lingual transfer with joint training of English and Spanish datasets improves the performance of both multilingual pre-trained models. XLM-R substantially outperforms the monolingual BETO after crosslingual transfer and achieves the best F1 score in our Spanish punctuation restoration task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The insertion of commas as decimal separators is not included here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This post-processing step may not always produce the correct result, but the overall prediction accuracy was improved by adding this post-processing in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Maged Saeed AlShaibani, and Irfan Ahmad. 2020. A survey on transfer learning in natural language processing",
"authors": [
{
"first": "Zaid",
"middle": [],
"last": "Alyafeai",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zaid Alyafeai, Maged Saeed AlShaibani, and Irfan Ah- mad. 2020. A survey on transfer learning in natural language processing. CoRR, abs/2007.04239.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spanish pre-trained bert model and evaluation data",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Ca\u00f1ete",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Chaperon",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Fuentes",
"suffix": ""
},
{
"first": "Jou-Hui",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Hojin",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Ca\u00f1ete, Gabriel Chaperon, Rodrigo Fuentes, Jou- Hui Ho, Hojin Kang, and Jorge P\u00e9rez. 2020. Span- ish pre-trained bert model and evaluation data. In PML4DC at ICLR 2020.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Efficient automatic punctuation restoration using bidirectional transformers with robust inference",
"authors": [
{
"first": "Maury",
"middle": [],
"last": "Courtland",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Faulkner",
"suffix": ""
},
{
"first": "Gayle",
"middle": [],
"last": "Mcelvain",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th International Conference on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "272--279",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwslt-1.33"
]
},
"num": null,
"urls": [],
"raw_text": "Maury Courtland, Adam Faulkner, and Gayle McElvain. 2020. Efficient automatic punctuation restoration us- ing bidirectional transformers with robust inference. In Proceedings of the 17th International Conference on Spoken Language Translation, pages 272-279, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improving punctuation restoration for speech transcripts via external data",
"authors": [
{
"first": "Xue-Yong",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Md",
"middle": [],
"last": "Tahmid Rahman Laskar",
"suffix": ""
},
{
"first": "Shashi",
"middle": [],
"last": "Bhushan",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Corston-Oliver",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)",
"volume": "",
"issue": "",
"pages": "168--174",
"other_ids": {
"DOI": [
"10.18653/v1/2021.wnut-1.19"
]
},
"num": null,
"urls": [],
"raw_text": "Xue-Yong Fu, Cheng Chen, Md Tahmid Rahman Laskar, Shashi Bhushan, and Simon Corston-Oliver. 2021. Improving punctuation restoration for speech tran- scripts via external data. In Proceedings of the Sev- enth Workshop on Noisy User-generated Text (W- NUT 2021), pages 168-174, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Autopunct: A bert-based automatic punctuation and capitalisation system for spanish and basque",
"authors": [
{
"first": "Ander",
"middle": [],
"last": "Gonz\u00e1lez-Docasal",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Garc\u00eda-Pablos",
"suffix": ""
},
{
"first": "Haritz",
"middle": [],
"last": "Arzelus",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "\u00c1lvarez",
"suffix": ""
}
],
"year": 2021,
"venue": "Procesamiento del Lenguaje Natural",
"volume": "67",
"issue": "",
"pages": "59--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ander Gonz\u00e1lez-Docasal, Aitor Garc\u00eda-Pablos, Haritz Arzelus, and Aitor \u00c1lvarez. 2021. Autopunct: A bert-based automatic punctuation and capitalisation system for spanish and basque. Procesamiento del Lenguaje Natural, 67(0):59-68.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fisher spanishtranscripts ldc2010t04",
"authors": [
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Shudong",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ingrid",
"middle": [],
"last": "Cartagena",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Cieri",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.35111/s30q-sn19"
]
},
"num": null,
"urls": [],
"raw_text": "David Graff, Shudong Huang, Ingrid Cartagena, Kevin Walker, and Christopher Cieri. 2010. Fisher spanish - transcripts ldc2010t04.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Restoring punctuation and capitalization in transcribed speech",
"authors": [
{
"first": "Agustin",
"middle": [],
"last": "Gravano",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jansche",
"suffix": ""
},
{
"first": "Michiel",
"middle": [],
"last": "Bacchiani",
"suffix": ""
}
],
"year": 2009,
"venue": "2009 IEEE International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "4741--4744",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2009.4960690"
]
},
"num": null,
"urls": [],
"raw_text": "Agustin Gravano, Martin Jansche, and Michiel Bacchi- ani. 2009. Restoring punctuation and capitalization in transcribed speech. In 2009 IEEE International Conference on Acoustics, Speech and Signal Process- ing, pages 4741-4744.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A survey on recent approaches for natural language processing in low-resource scenarios",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Hedderich",
"suffix": ""
},
{
"first": "Lukas",
"middle": [],
"last": "Lange",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Jannik",
"middle": [],
"last": "Str\u00f6tgen",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Klakow",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2545--2568",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.201"
]
},
"num": null,
"urls": [],
"raw_text": "Michael A. Hedderich, Lukas Lange, Heike Adel, Jan- nik Str\u00f6tgen, and Dietrich Klakow. 2021. A survey on recent approaches for natural language process- ing in low-resource scenarios. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 2545-2568, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Measuring the readability of automatic speech-to-text transcripts",
"authors": [
{
"first": "Douglas",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Elliott",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Evelina",
"middle": [],
"last": "Fedorenko",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Zissman",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Douglas Jones, Florian Wolf, Edward Gibson, Elliott Williams, Evelina Fedorenko, Douglas Reynolds, and Marc Zissman. 2003. Measuring the readability of automatic speech-to-text transcripts.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A 43 Language Multilingual Punctuation Prediction Neural Network Model",
"authors": [
{
"first": "Xinxing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. Interspeech 2020",
"volume": "",
"issue": "",
"pages": "1067--1071",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2020-2052"
]
},
"num": null,
"urls": [],
"raw_text": "Xinxing Li and Edward Lin. 2020. A 43 Language Multilingual Punctuation Prediction Neural Network Model. In Proc. Interspeech 2020, pages 1067-1071.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "OpenSub-titles2016: Extracting large parallel corpora from movie and TV subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "923--929",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSub- titles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 923-929, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Better punctuation prediction with dynamic conditional random fields",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "177--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Lu and Hwee Tou Ng. 2010. Better punctuation prediction with dynamic conditional random fields. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 177- 186, Cambridge, MA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1493"
]
},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural unsupervised domain adaptation in NLP-A survey",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ramponi",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6838--6855",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.603"
]
},
"num": null,
"urls": [],
"raw_text": "Alan Ramponi and Barbara Plank. 2020. Neural unsu- pervised domain adaptation in NLP-A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6838-6855, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Unsupervised cross-lingual representation learning",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of ACL 2019, Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "31--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Anders S\u00f8gaard, and Ivan Vuli\u0107. 2019. Unsupervised cross-lingual representation learning. In Proceedings of ACL 2019, Tutorial Abstracts, pages 31-38.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "LSTM for punctuation restoration in speech transcripts",
"authors": [
{
"first": "Ottokar",
"middle": [],
"last": "Tilk",
"suffix": ""
},
{
"first": "Tanel",
"middle": [],
"last": "Alum\u00e4e",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "683--687",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2015-240"
]
},
"num": null,
"urls": [],
"raw_text": "Ottokar Tilk and Tanel Alum\u00e4e. 2015. LSTM for punc- tuation restoration in speech transcripts. In Proc. Interspeech 2015, pages 683-687.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "CCNet: Extracting high quality monolingual datasets from web crawl data",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Marie-Anne",
"middle": [],
"last": "Lachaux",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzm\u00e1n, Ar- mand Joulin, and Edouard Grave. 2019. CCNet: Ex- tracting high quality monolingual datasets from web crawl data. CoRR.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT",
"authors": [
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "833--844",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1077"
]
},
"num": null,
"urls": [],
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguis- tics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Our punctuation restoration system, showing the process of predicting \"en qu\u00e9 le puedo ayudar\" as \"\u00bfEn qu\u00e9 le puedo ayudar?\" (How can I help you?).",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Comparison of number of terminating punctuations per utterance distribution in in-domain, LDC and OpenSubtitle datasets, before and after data augmentation.",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "(d)(e) shows, the Diagram of three proposed fine-tuning strategies. (a) ES->EN, (b) EN->ES, (c) Joint EN, ES",
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "Examples of Spanish and English utterances.",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "F1 score performance comparison using the LDC and OpenSubtitle datasets, before and after our domain adaptation approaches.",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"text": "F1 score performance comparison with and without cross-lingual transfer. ES: the combination of Spanish datasets including (1) Augmented (LDC + Selected OpenSubtitle) as described inTable 2;(2) Spanish in-domain transcripts. EN: English in-domain transcripts.",
"content": "<table><tr><td>Gold</td><td colspan=\"3\">Prediction CLOSE_QUESTION PERIOD</td></tr><tr><td colspan=\"2\">CLOSE_QUESTION PERIOD</td><td>223 37</td><td>106 2177</td></tr></table>",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"text": "Confusion matrix of CLOSE_QUESTION and PERIOD on test set, using best performing XLM-R in 4.3",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}