{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:47:01.817940Z"
},
"title": "Balancing via Generation for Multi-Class Text Classification Improvement",
"authors": [
{
"first": "Naama",
"middle": [],
"last": "Tepper",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Esther",
"middle": [],
"last": "Goldbraich",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Naama",
"middle": [],
"last": "Zwerdling",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "George",
"middle": [],
"last": "Kour",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Ateret",
"middle": [],
"last": "Anaby-Tavor",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Boaz",
"middle": [],
"last": "Carmeli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Data balancing is a known technique for improving the performance of classification tasks. In this work we define a novel balancing-viageneration framework termed BalaGen. Bala-Gen consists of a flexible balancing policy coupled with a text generation mechanism. Combined, these two techniques can be used to augment a dataset for more balanced distribution. We evaluate BalaGen on three publicly available semantic utterance classification (SUC) datasets. One of these is a new COVID-19 Q&A dataset published here for the first time. Our work demonstrates that optimal balancing policies can significantly improve classifier performance, while augmenting just part of the classes and under-sampling others. Furthermore, capitalizing on the advantages of balancing, we show its usefulness in all relevant BalaGen framework components. We validate the superiority of BalaGen on ten semantic utterance datasets taken from real-life goaloriented dialogue systems. Based on our results we encourage using data balancing prior to training for text classification tasks.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Data balancing is a known technique for improving the performance of classification tasks. In this work we define a novel balancing-viageneration framework termed BalaGen. Bala-Gen consists of a flexible balancing policy coupled with a text generation mechanism. Combined, these two techniques can be used to augment a dataset for more balanced distribution. We evaluate BalaGen on three publicly available semantic utterance classification (SUC) datasets. One of these is a new COVID-19 Q&A dataset published here for the first time. Our work demonstrates that optimal balancing policies can significantly improve classifier performance, while augmenting just part of the classes and under-sampling others. Furthermore, capitalizing on the advantages of balancing, we show its usefulness in all relevant BalaGen framework components. We validate the superiority of BalaGen on ten semantic utterance datasets taken from real-life goaloriented dialogue systems. Based on our results we encourage using data balancing prior to training for text classification tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Imbalanced datasets pose a known difficulty in achieving ultimate classification performance as classifiers tend to be biassed towards larger classes (Guo et al., 2008; Japkowicz and Stephen, 2002; Japkowicz, 2000) . Moreover, identifying samples that belong to under-represented classes is of high importance in many real-life domains such as fraud detection, disease diagnosis, and cyber security.",
"cite_spans": [
{
"start": 150,
"end": 168,
"text": "(Guo et al., 2008;",
"ref_id": "BIBREF14"
},
{
"start": 169,
"end": 197,
"text": "Japkowicz and Stephen, 2002;",
"ref_id": "BIBREF20"
},
{
"start": 198,
"end": 214,
"text": "Japkowicz, 2000)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although the imbalanced data classification problem is well-defined, and has been researched extensively over the last two decades (Estabrooks et al., 2004; Batista et al., 2004; Ramyachitra and Manikandan, 2014; Zhu et al., 2017 ; Buda et al., * Equal contribution 2018), there has been considerably less work devoted to balancing textual datasets.",
"cite_spans": [
{
"start": 131,
"end": 156,
"text": "(Estabrooks et al., 2004;",
"ref_id": "BIBREF11"
},
{
"start": 157,
"end": 178,
"text": "Batista et al., 2004;",
"ref_id": "BIBREF3"
},
{
"start": 179,
"end": 212,
"text": "Ramyachitra and Manikandan, 2014;",
"ref_id": "BIBREF30"
},
{
"start": 213,
"end": 229,
"text": "Zhu et al., 2017",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a novel balancing-via-generation framework, termed BalaGen, to improves textual classification performance. BalaGen uses a balancing policy to identify over-and under-represented classes. It then uses controlled text generation, coupled with a weak labeling mechanism to augment the under-represented classes. Additionally, it applies under-sampling to decrease the overrepresented classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our analysis is focused on semantic utterance classification (SUC) (Tur et al., 2012; Tur and Deng, 2011; Schuurmans and Frasincar, 2019) . SUC is a fundamental, multi-class, highly imbalanced textual classification problem. For example, it is widely used for intent (class) detection in goaloriented dialogue systems (Henderson et al., 2014; Bohus and Rudnicky, 2009) , and for frequently asked question (FAQ) retrieval (Sakata et al., 2019; Gupta and Carvalho, 2019; Wang et al., 2017) .",
"cite_spans": [
{
"start": 67,
"end": 85,
"text": "(Tur et al., 2012;",
"ref_id": "BIBREF42"
},
{
"start": 86,
"end": 105,
"text": "Tur and Deng, 2011;",
"ref_id": "BIBREF41"
},
{
"start": 106,
"end": 137,
"text": "Schuurmans and Frasincar, 2019)",
"ref_id": null
},
{
"start": 318,
"end": 342,
"text": "(Henderson et al., 2014;",
"ref_id": "BIBREF18"
},
{
"start": 343,
"end": 368,
"text": "Bohus and Rudnicky, 2009)",
"ref_id": "BIBREF4"
},
{
"start": 421,
"end": 442,
"text": "(Sakata et al., 2019;",
"ref_id": "BIBREF34"
},
{
"start": 443,
"end": 468,
"text": "Gupta and Carvalho, 2019;",
"ref_id": "BIBREF16"
},
{
"start": 469,
"end": 487,
"text": "Wang et al., 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Correctly identifying scarce utterances is of great importance in many real life scenarios. For example, consider a scenario in which a user converses with the dialogue system in an online shop (Yan et al., 2017) . For the store owner, the task of correctly identifying the buying-intent utterances is paramount. However, the number of utterances related to searching for products is expected to be significantly higher, thus biasing the classifier toward this intent.",
"cite_spans": [
{
"start": 194,
"end": 212,
"text": "(Yan et al., 2017)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We analyzed BalaGen's capabilities on two publicly available SUC datasets. In addition, we introduce a new dataset called COVID-19 Q&A (CQA), which contains answers to questions frequently asked by the public during the pandemic period. Analysis of this new dataset further demonstrates improved performance using our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is thus four-fold: i) We present BalaGen, a balancing-via-generation framework for optimizing classification performance on imbalanced multi-class textual datasets. (ii) We analyze different factors that affect BalaGen's performance, including quality of generated textual data, weak supervision mechanisms, and balancing of Bala-Gen's internal components. iii) We validate our approach on 3 publicly available datasets and a collection of 10 SUC datasets used to train real-life goal-oriented dialogue systems. iv) We contribute a new COVID-19 related SUC dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In imbalanced classification, also known as the \"Class Imbalance Problem\", classifiers tend to bias towards larger classes (Provost, 2000) . This challenge, has garnered extensive research over the past decades (Estabrooks et al., 2004; Chawla et al., 2004; Sahare and Gupta, 2012) . The range of approaches to solve this issue depends on the type of data and the target classifier (Zheng et al., 2004; Wang and Yao, 2009; . Ramyachitra and Manikandan (2014) divide classification improvements over imbalanced datasets into five levels: data, algorithmic, cost sensitive, feature selection and ensemble. We focus our review on the data level and specifically on textual dataset balancing.",
"cite_spans": [
{
"start": 123,
"end": 138,
"text": "(Provost, 2000)",
"ref_id": "BIBREF27"
},
{
"start": 211,
"end": 236,
"text": "(Estabrooks et al., 2004;",
"ref_id": "BIBREF11"
},
{
"start": 237,
"end": 257,
"text": "Chawla et al., 2004;",
"ref_id": "BIBREF8"
},
{
"start": 258,
"end": 281,
"text": "Sahare and Gupta, 2012)",
"ref_id": "BIBREF33"
},
{
"start": 382,
"end": 402,
"text": "(Zheng et al., 2004;",
"ref_id": "BIBREF52"
},
{
"start": 403,
"end": 422,
"text": "Wang and Yao, 2009;",
"ref_id": "BIBREF44"
},
{
"start": 425,
"end": 458,
"text": "Ramyachitra and Manikandan (2014)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Primary data-level methods vary the number of samples in the dataset via re-sampling. We follow the common terminology and refer to a method that adds samples to a dataset, as over-sampling, and to a method that removes samples as undersampling. sample-copy, i.e. duplicating existing samples, is the most straightforward over-sampling method and random-selection is the most straightforward under-sampling method. While these methods were shown to be effective to some extent for data balancing, they are insufficient when it comes to solving the problem (Branco et al., 2016) .",
"cite_spans": [
{
"start": 556,
"end": 577,
"text": "(Branco et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Traditional and well researched feature-based over-sampling techniques generate new samples via feature manipulation (Wong et al., 2016) . Most of these techniques are based on the Synthetic Minority Oversampling TEchnique (SMOTE) (Chawla et al., 2002) or the ADAptive SYNthetic (ADASYN) approach (He et al., 2008) . These approaches create synthetic samples by manipulating the feature values of existing samples. However, the latest deep learning (DL) models do not have an explainable features layer to manipulate. Although the embedding layer may be perceived as the DL analogy to the traditional feature layer, this layer is of high dimension and is not easy to interpret and manipulate while preserving the original class label. Thus, local changes to the embedding values of textual datasets does not yield the expected results.",
"cite_spans": [
{
"start": 117,
"end": 136,
"text": "(Wong et al., 2016)",
"ref_id": "BIBREF49"
},
{
"start": 231,
"end": 252,
"text": "(Chawla et al., 2002)",
"ref_id": "BIBREF7"
},
{
"start": 297,
"end": 314,
"text": "(He et al., 2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In contrast to feature-based over-sampling techniques, data augmentation generates additional samples through transformations applied directly to the data. For example, Easy Data Augmentation (EDA) (Wei and Zou, 2019) is a na\u00efve yet effective text augmentation technique based on synonym replacement using Wordnet (Fellbaum, 2012) , random insertion, random swap, and random deletion of words. Language model-based Markov Chain (MC) (Barbieri et al., 2012) is another example of a word level second-order model that was shown to improve textual data-balancing (Akkaradamrongrat et al., 2019) . Additional research works includes structure preserving word replacement using a Language Model (Kobayashi, 2018) , recurrent neural language generation for augmentation (Rizos et al., 2019) , and various parapharasing methods as done in (Gupta et al., 2017) .",
"cite_spans": [
{
"start": 314,
"end": 330,
"text": "(Fellbaum, 2012)",
"ref_id": "BIBREF12"
},
{
"start": 433,
"end": 456,
"text": "(Barbieri et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 560,
"end": 591,
"text": "(Akkaradamrongrat et al., 2019)",
"ref_id": null
},
{
"start": 690,
"end": 707,
"text": "(Kobayashi, 2018)",
"ref_id": "BIBREF21"
},
{
"start": 764,
"end": 784,
"text": "(Rizos et al., 2019)",
"ref_id": "BIBREF32"
},
{
"start": 832,
"end": 852,
"text": "(Gupta et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, transformer-based pre-trained architectures (Vaswani et al., 2017) have been developed and successfully applied to a wide set of Natural Language Generation (NLG), processing and understanding tasks. Examples of these include Generative Pre-trained (GPT) (Radford et al., 2019) , which is a right-to-left language model based on the transformer's decoder architecture (Vaswani et al., 2017) , BERT (Devlin et al., 2018) , BART (Lewis et al., 2019) and T5 (Raffel et al., 2019) . These attention-based architectures are capable of generating human-level high-quality text, making them a compelling choice for textual data augmentations. Specifically, CBERT improves EDA by using BERT synonym prediction. Additional advanced transformer-based methods control the generation process by providing an existing sample, designated class label, or both. These methods were shown to be beneficial for data augmentation (Anaby-Tavor et al., 2019; Kumar et al., 2020) . However, these methods suffer from several drawbacks: first, they were only shown to be successful on small sized datasets (five samples per class or 1% of the dataset). Second, the augmentation process was shown to be error prone as the generated samples do not always preserve the class label of the original data. Third, as we show in this work, na\u00efvely using these methods to generate a constant number of samples for each class in the dataset, as done in previous work, does not realize their full potential for improving textual classification tasks.",
"cite_spans": [
{
"start": 54,
"end": 76,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF43"
},
{
"start": 265,
"end": 287,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 378,
"end": 400,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF43"
},
{
"start": 408,
"end": 429,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 437,
"end": 457,
"text": "(Lewis et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 465,
"end": 486,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF29"
},
{
"start": 920,
"end": 946,
"text": "(Anaby-Tavor et al., 2019;",
"ref_id": null
},
{
"start": 947,
"end": 966,
"text": "Kumar et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other approaches for data balancing can include weak-labeling of available unlabeled data (Ratner et al., 2020) , or even active learning (Settles, 2009) . However, both of these approaches require additional domain data which is not always available.",
"cite_spans": [
{
"start": 90,
"end": 111,
"text": "(Ratner et al., 2020)",
"ref_id": "BIBREF31"
},
{
"start": 138,
"end": 153,
"text": "(Settles, 2009)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Notably, some approaches aim at assuring interpretability of generated samples (Santos et al., 2017) . However, BalaGen takes a different aproach -aiming to improve performance without consideration of textual validity/interpretability of generated sentences as done in (Rizos et al., 2019) . Thus, only class perseverance and ability to contribute to accuracy are considered.",
"cite_spans": [
{
"start": 79,
"end": 100,
"text": "(Santos et al., 2017)",
"ref_id": "BIBREF35"
},
{
"start": 270,
"end": 290,
"text": "(Rizos et al., 2019)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To the best of our knowledge, this is the first work to explore the use of transformer-based augmentation techniques directly towards data balancing to improve textual classification tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "At the cornerstone of our methodology lie the recent controlled text generation methods, capable of synthesizing high quality samples (Kumar et al., 2020; Anaby-Tavor et al., 2019) . We tested the hypothesis whereby enhancing these generation methods with a new balancing technique, which differentially add and remove samples from classes, can result in a significant improvement to classifier accuracy.",
"cite_spans": [
{
"start": 134,
"end": 154,
"text": "(Kumar et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 155,
"end": 180,
"text": "Anaby-Tavor et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "To overcome the well-known drawback of oversampling via text generation, i.e., class label preservation is not guaranteed (Kumar et al., 2020) , we employed a weak labeling mechanism which is used to select generated samples that have a high probability of preserving their class label. We further refer to weak labelers simply as labelers.",
"cite_spans": [
{
"start": 122,
"end": 142,
"text": "(Kumar et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "In the rest of this section, we describe the steps of our BalaGen approach. We refer to the step numbers according to the enumeration in the pseudocode given in Algorithm 1 and the schematic flow diagram shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 221,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Balancing policy: A balancing policy \u03c0(\u2022), generally, aims to reach a specific distribution of the samples among the classes, by adding and removing samples. In step (1) we use policies that determine a band [B low , B high ], which within the set of classes are considered Well-Represented (W R).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Consequently, the set of classes smaller than B low are referred to as Under-Represented (U R) and should be further over-sampled, e.g., via augmentation. Classes larger than B high are considered Over-Represented (OR) and will be under-sampled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "In the following, let c i be the index of i th class after sorting the classes by their size (i.e., the number of samples) in an ascending order. Given that n is the number of classes, |c n | is the size of the largest class. In Figure 2 we describe several types of balancing policies supported by BalaGen.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "While there may be many approaches to determine the W R band, here we employ the following percentile approach: Given the parameters \u03b2 low and \u03b2 high , we set B low such that \u03b2 low % of the classes belong to the U R set and set B high such that \u03b2 high % of the classes belong to the OR set. Note that \u03b2 low + \u03b2 high \u2264 100.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
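{
"text": "To make the percentile approach concrete, the following is a minimal sketch (the function name and interface are our own illustration, not the authors' code) of computing [B_low, B_high] from per-class sizes:\n\nfrom collections import Counter\n\ndef percentile_band(labels, beta_low, beta_high):\n    # labels: the class label of every training sample\n    assert beta_low + beta_high <= 100\n    sizes = sorted(Counter(labels).values())  # class sizes, ascending\n    n = len(sizes)\n    k_low = int(n * beta_low / 100)    # number of classes treated as UR\n    k_high = int(n * beta_high / 100)  # number of classes treated as OR\n    b_low = sizes[min(k_low, n - 1)]\n    b_high = sizes[n - 1 - k_high]\n    return b_low, b_high  # UR: size < b_low; OR: size > b_high; WR: in between",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},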
{
"text": "Input :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},
{
"text": "Training dataset D Weak labeling models L 1 , ..., L k (Pre-trained) language model G Balancing policy \u03c0(\u2022) Over-sampling method OS(\u2022, \u2022) Under-sampling method US(\u2022, \u2022) 1 [B low , B high ] \u2190 \u03c0(D) 2 D S \u2190 OS(US(D, B high ), B low )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},
{
"text": "3 Fine-tune G using D S to obtain G tuned and synthesize a set of labeled samples for the under-represented classes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},
{
"text": "D * using G tuned 4 h 1 \u2190 L 1 (D S ), ..., h k \u2190 L k (D S ) 5 Select best samples in D * using weak labelers h 1 , .., h k to obtain D syn 6 D Balanced \u2190 U(D syn \u222a D, B high ) 7 return D Balanced",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},
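{
"text": "For clarity, a compact Python rendering of Algorithm 1 follows. This is a sketch under assumed abstract interfaces; the policy, sampling, generator, labeler, and selection objects are placeholders, not the authors' implementation:\n\ndef balagen(D, labelers, G, policy, over_sample, under_sample, select):\n    B_low, B_high = policy(D)                          # step 1: balancing band\n    D_S = over_sample(under_sample(D, B_high), B_low)  # step 2: balanced train set\n    G_tuned = G.fine_tune(D_S)                         # step 3: tune the generator\n    D_star = G_tuned.generate(B_low)                   # step 3: samples for UR classes\n    weak = [L.fit(D_S) for L in labelers]              # step 4: train weak labelers\n    D_syn = select(D_star, weak, B_low)                # step 5: keep the best samples\n    return under_sample(D_syn + D, B_high)             # steps 6-7: merge, under-sample",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},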
{
"text": "Balancing the train set of the generator and weak-labelers: In step (2) we compose a balanced dataset D S used to train the generator and the labeler(s). The under-sampling method is executed on the OR classes targeting the B high threshold, while the oversampling method is executed on the U R classes targeting the B low threshold. This step aims to reduce class biases of the generator and labelers. Formally, OS and US denote over and under sampling functions, respectively. Each accept two parameters: a dataset D to perform on and the threshold B. Sample generation: In step (3) we first fine-tune (or train if its not a pre-trained model) the language model G on D S to obtain G tuned . Then, G tuned is used to generate D * . If a right-to-left pre-trained language model is used, such as GPT-2, the finetuning procedure follows the method proposed in (Anaby-Tavor et al., 2019); there, the class label is prepended to each sample during training. Then, conditioned on the class label, the fine-tuned model is used to generate samples for the U R classes, denoted as D * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},
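{
"text": "As an illustration of the conditioning scheme described above, a minimal sketch of the label-prepended training lines follows; the separator and end-of-text markers are our assumptions, not a prescribed format:\n\nSEP = ' [SEP] '\nEOS = ' <|endoftext|>'\n\ndef to_training_lines(pairs):\n    # pairs: iterable of (class_label, utterance) drawn from D_S\n    return [label + SEP + text + EOS for label, text in pairs]\n\ndef generation_prompt(label):\n    # at generation time, condition the fine-tuned model on the class label\n    return label + SEP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},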
{
"text": "Weak labeling: In step (4) we train the labeler(s) L 1 , ..., L k on D s and then label the generated samples in D * . The weak labeling step is required as an additional quality assurance mechanism, since neither the quality of a generated sample nor the accuracy of its label can be guaranteed during the generation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},
{
"text": "Sample selection: In step (5), a set of generated samples is selected, according to labels assigned by the labelers and added to each class up to the B low threshold. The resulting dataset is denoted D syn .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},
{
"text": "Augmenting We present a new dataset called COVID-19 Q&A, and referred to as CQA (https://developer.ibm.com/exchanges/data/all/cqa/). The CQA dataset contains questions which were frequently asked by the public during the COVID-19 pandemic period. The questions were categorised according to user intents. The dataset was created to ramp-up a dialogue system that provides answers to questions frequently asked by the public. The data was collected by creating an initial classifier for a question answering dialogue system, which was further extended by selecting samples from its logs of user interactions and then labeling them. in Table 2 . The CQA dataset is moderately imbalanced and characterized by a balance-ratio of 1:76 (ratio between the size of biggest class to the size of the smallest class). The dataset has an entropy-ratio of 0.91 (with an entropy of 3.7 out of a maximal entropy of 4.04). We publish the dataset here in the hopes of further promoting research on semantic utterance classification for goal-oriented dialogue systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 634,
"end": 641,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Algorithm 1: BalaGen",
"sec_num": null
},
{
"text": "In addition to evaluating BalaGen on the CQA dataset, we also applied it on ten Semantic Utterance Classifier (SUC) datasets used to train real-life goal-oriented dialogue systems. Figure 3 present class distribution of the 10 SUC datasets, demonstring their imbalance state and hence, the need for data balancing. Indeed, these datasets, are characterized by a high average balance-ratio of 1:222. The median number of classes in these datasets is 100 (std = 66), and median samples per class is 69 (std = 91).",
"cite_spans": [],
"ref_spans": [
{
"start": 181,
"end": 189,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis of SUC Corpora",
"sec_num": "4.2"
},
{
"text": "Datasets \u2022 Airline Travel Information Systems (ATIS) 2queries on flight-related information, widely used in language understanding research. ATIS is the most imbalanced dataset; it has an entropy of 1.11. This is due to most of its the data belonging to the 'flight' class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Generative models: To assess the influence of the quality of the generated samples we used three text generation methods: EDA (Wei and Zou, 2019), Markov Chain (MC) (Barbieri et al., 2012) , and Generative Pre-Train (GPT-2) (Radford et al., 2019) . GPT-2 was further used for most of the experiments as it is considered to be superior in many textual tasks. To these, we added sample-copy as a baseline over-sampling method.",
"cite_spans": [
{
"start": 165,
"end": 188,
"text": "(Barbieri et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 224,
"end": 246,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Weak labeling: We examined various weak labeling methods, and used them to select generated samples in step (5):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "\u2022 No weak labeling -assign the class used by the generator to generate the sample as the final class. \u2022 Double voting -train a labeler classifier on the original train dataset. Use it to weakly label the generated samples, and only keep those samples where the label of the original sample matches the weak label of the generated sample.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "\u2022 Labeler ensemble -train an ensemble of labelers. For each apply the double voting mechanism and then aggregate the generated samples from all labelers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
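{
"text": "A minimal sketch of the two selection mechanisms follows; the labeler interface (a predict method) is a hypothetical placeholder, not the authors' code:\n\ndef double_vote(generated, labeler):\n    # generated: list of (text, generation_class); keep a sample only if the\n    # weak label agrees with the class it was generated for\n    return [(t, c) for t, c in generated if labeler.predict(t) == c]\n\ndef ensemble_vote(generated, labelers):\n    # aggregate the samples kept by each labeler's double-voting filter\n    kept = set()\n    for labeler in labelers:\n        kept.update(double_vote(generated, labeler))\n    return list(kept)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},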
{
"text": "BalaGen's components training input: Because data-balancing is beneficial for classification performance, we examine the effect of also balancing the input for the framework components -the generator and the labelers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Evaluation metrics: To report our experimental results, we used the standard accuracy measure which calculates the correct prediction ratio (Eq. 1). Since we deal with imbalanced datasets, we also report the macro accuracy (Eq. 2), which measures the average correct prediction ratio across classes (Manning et al., 2008) . Formally,",
"cite_spans": [
{
"start": 299,
"end": 321,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "acc micro = n i=1 t i |D| (1) acc macro = 1 n n i=1 t i |c i | (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
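{
"text": "A small sketch of Eqs. (1)-(2), assuming y_true and y_pred are parallel lists of class labels (our own illustrative helper, not a prescribed implementation):\n\nfrom collections import Counter, defaultdict\n\ndef micro_macro_accuracy(y_true, y_pred):\n    correct = defaultdict(int)\n    total = Counter(y_true)\n    for t, p in zip(y_true, y_pred):\n        correct[t] += int(t == p)\n    micro = sum(correct.values()) / len(y_true)                     # Eq. (1)\n    macro = sum(correct[c] / total[c] for c in total) / len(total)  # Eq. (2)\n    return micro, macro",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},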
{
"text": "where t i is the number of correct predictions in class c i , |D| is the number of samples, and n is the number of classes. Additionally, we report the entropy measure, similarly to Shannon's diversity index (Shannon, 1951) to capture the degree of class imbalance in the dataset.",
"cite_spans": [
{
"start": 208,
"end": 223,
"text": "(Shannon, 1951)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H = \u2212 n i=1 |c i | |D| \u2022 log |c i | |D|",
"eq_num": "(3)"
}
],
"section": "Experimental Settings",
"sec_num": "5.1"
},
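{
"text": "Correspondingly, a sketch of Eq. (3) and of the entropy-ratio reported for the datasets (normalizing by the maximal entropy, log n; the helper names are ours):\n\nimport math\nfrom collections import Counter\n\ndef class_entropy(labels):\n    n = len(labels)\n    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())\n\ndef entropy_ratio(labels):\n    # maximal entropy is log(number of classes), attained by a uniform distribution\n    return class_entropy(labels) / math.log(len(set(labels)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},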
{
"text": "Where applicable, we statistically validated our results with the McNemar test (McNemar, 1947) .",
"cite_spans": [
{
"start": 79,
"end": 94,
"text": "(McNemar, 1947)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "BalaGen is classifier independent. In our implementation we use BERT, a state-of-the-art classifier for textual classification (Devlin et al., 2018) , both as a classifier and for weak supervision. We divided each dataset into 80%:10%:10% for train, validation and test, respectively. The validation set was used for early stopping and for tuning parameters such as \u03b2 low and \u03b2 high . Each experiment was repeated at least 3 times to ensure consistency.",
"cite_spans": [
{
"start": 127,
"end": 148,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5.1.1"
},
{
"text": "We restrict the number of generated samples by the generator to be 3 \u00d7 |c n |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5.1.1"
},
{
"text": "In our experiments, we balanced the training data for the generator and labelers using simple sample-copy over-sampling and random-selection under-sampling. Additional technical implementation details are given in the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5.1.1"
},
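{
"text": "For reference, minimal sketches of the two na\u00efve re-sampling methods used here (our own illustrative implementations):\n\nimport random\n\ndef sample_copy_oversample(samples, target):\n    # duplicate randomly chosen existing samples until the class reaches target size\n    out = list(samples)\n    while samples and len(out) < target:\n        out.append(random.choice(samples))\n    return out\n\ndef random_undersample(samples, target):\n    # keep a random subset of at most target samples\n    return random.sample(samples, min(target, len(samples)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5.1.1"
},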
{
"text": "In all experiments we compare classifier performance against the same held-out test set. Unless stated otherwise, we use GPT-2 as the generator and three BERT classifiers as labeler-ensemble. All model training was done on a balanced dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "In the first experiment we compared data augmentation (via generation) to na\u00efve data balancing. Specifically, we compared baseline results to: (1) balancing w/o augmentation; (2) augmentation w/o balancing; and (3) balancing-via-augmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmentation vs. Balancing",
"sec_num": "5.2.1"
},
{
"text": "For balancing experiments (no. 1 and 3), We used the simplest balancing scheme depicted by Na\u00efve-OS balancing policy C (B low = B high = |c n |, as defined in Section 3). Specifically, for balancing w/o augmentation (1) we used basic sample-copy over-sampling, and for balancing-viaaugmentation (3) we applied BalaGen (using GPT-2 as generator) to generate additional samples according to policy C. For augmentation w/o balancing (2) we applied BalaGen using Augment-only data policy B -adding a fixed number of generated samples to all classes. Table 3 presents the micro and macro accuracy measures for the three datasets. While balancing and augmentation increase the accuracy for all three datasets, combining them yields significantly higher results than the baseline for CQA and SEAQ. For ATIS the combination of augmentation and balancing using na\u00efve data balancing policy C was not significantly better than the baseline and was even lower than the simple sample-copy oversampling balancing. ATIS is a highly imbalanced dataset, which requires an enormous amount of generated data to fully balance it and adhere to balancing policy C. Hence, as shown in the next section, other data balancing policies achieve better accuracy results on this dataset. Balancing was performed using Na\u00efve-OS balancing policy C. Augmentation alone was performed using Augment-only policy B.",
"cite_spans": [],
"ref_spans": [
{
"start": 546,
"end": 553,
"text": "Table 3",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Augmentation vs. Balancing",
"sec_num": "5.2.1"
},
{
"text": "Generated samples often differ in their quality from the original set of samples. Moreover, different generation algorithms differ in the quality of their generated samples (Kumar et al., 2020) . This disparity presents a trade-off between the quantity of added samples and their quality. Partial-OS balancing policy D (as shown in Figure 2 .D) enables to address this trade-off by adding generated samples up to a certain B low balancing level. Figure 4 illustrates macro accuracy for different text generation methods while setting the balancing threshold B low , such that \u03b2 low = [0, 10, 30, 50, 70, 80, 90, 95 and 100]% (namely, the percentage of classes that are treated as under-represented). First we observe that for all generation methods, there is a drop in accuracy towards \u03b2 low = 100%. This shows our first key finding, that augmenting all classes up to |c n | is a sub-optimal policy, in most cases, even for more advanced generation methods. Notably, the analysis of CQA and ATIS datasets also support this claim (not shown).",
"cite_spans": [
{
"start": 173,
"end": 193,
"text": "(Kumar et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 332,
"end": 340,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 446,
"end": 454,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Exploring Partial Over-Sampling Using Different Generative Models",
"sec_num": "5.2.2"
},
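{
"text": "The sweep itself can be expressed as a simple grid search; a sketch under assumed helpers (balance_with and evaluate are placeholders for running BalaGen with a given \u03b2_low and measuring validation macro accuracy):\n\ndef sweep_beta_low(D, balance_with, evaluate,\n                   betas=(0, 10, 30, 50, 70, 80, 90, 95, 100)):\n    # score every candidate UR percentile and return the best one\n    scores = {beta: evaluate(balance_with(D, beta)) for beta in betas}\n    best = max(scores, key=scores.get)\n    return best, scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring Partial Over-Sampling Using Different Generative Models",
"sec_num": "5.2.2"
},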
{
"text": "Observing the general trend we noticed that GPT-2 dominates all other generation methods for most configurations, followed by EDA, and then samplecopy. Markov Chain (MC), which was the preferred algorithm in (Akkaradamrongrat et al., 2019) showed worse performance than sample-copy (the baseline over-sampling approach) for most B low thresholds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring Partial Over-Sampling Using Different Generative Models",
"sec_num": "5.2.2"
},
{
"text": "Another observation was that there is a correlation between climax's B low threshold and the quality of the generation method. GPT-2, the most advanced generation method, reaches its highest accuracy when generating with \u03b2 low = 80%, followed by EDA at 70% and sample-copy at 50%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploring Partial Over-Sampling Using Different Generative Models",
"sec_num": "5.2.2"
},
{
"text": "In the following experiment, we compared baseline results to BalaGen's performance employing Na\u00efve-OS, Partial-OS, and Partial-OS-US balancing policies as depicted in Figure 2 . Partial-OS balancing policy (\u03b2 low < 100) appears to be superior for all datasets. Specifically, for CQA \u03b2 low = 90, and for SEAQ and ATIS \u03b2 low = 80. For the CQA and ATIS datasets, undersampling the over-represented classes was shown to be beneficial with \u03b2 high = 5. Notably, both entropy values increase and number of added samples decrease in correlation with the accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 175,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation of Balancing Policies",
"sec_num": "5.2.3"
},
{
"text": "CQA and ATIS datasets are highly unbalanced (as shown in Table 2 ). Hence, removing samples from their highly-represented classes was shown to further improve the accuracy. Figure 5 shows the number of samples added to (or removed from) each of the CQA classes in this experiment. There are classes that were not augmented with enough samples even for Partial-OS policy D with B low < |c n |. This strengthens the need to under-sample the over-represented classes down to B high to achieve an even more balanced dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 173,
"end": 181,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Evaluation of Balancing Policies",
"sec_num": "5.2.3"
},
{
"text": "All in all we see a significant increase in performance for all datasets when comparing the best balancing policy to the baseline (p \u2212 value < 0.1): CAQ presents a relative increase of (21.3%, 19.8%) in micro and macro accuracy respectively (comparing to optimal values) when applying Partial-OS-US policy E. For the SEAQ dataset we saw an overall increase of (24.8%, +25.3%) in micro and macro accuracy respectively when applying Partial-OS policy D. Lastly, the ATIS dataset classification results also improved, showing an increase of (50%, 57.9%) in micro accuracy and macro accuracy while applying Partial-OS-US policy E. Interestingly, in ATIS dataset, number of samples in policy E is smaller than the baseline while improving performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Balancing Policies",
"sec_num": "5.2.3"
},
{
"text": "The above significant increase in performance indicates our second key finding, that balancing datasets using BalaGen yields significantly improved classification performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Balancing Policies",
"sec_num": "5.2.3"
},
{
"text": "While establishing that balanced dataset is beneficial for classification performance, we examined the effect of balancing the input to the generation and labelers models. After applying the best balancing policy, as described in the previous section, our results showed that balancing all network components improved results by an average increase of 12.4% in micro accuracy and an average increase of 24% in macro accuracy. (Detailed results are given in the Appendix). Thus, our third key finding is that holistically balancing BalaGen, including all its components, yields best performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Balanced Input for Model Training",
"sec_num": "5.2.4"
},
{
"text": "Finally, we evaluated different weak supervision mechanisms and found that the ensemble of labelers performs best as shown in Table 5 . This leads to our fourth key finding that a weak supervision mechanism aids class label preservation.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 5",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Weak Supervision Mechanism Analysis",
"sec_num": "5.2.5"
},
{
"text": "As a last experiment, and to further validate our findings, we applied BalaGen on 10 real-life SUC datasets. Table 6 shows number of classes and samples per dataset as well as relative improvement for these datasets. BalaGen markedly improved macro accuracy with relative increase of 11% (comparing to the optimal). Micro accuracy increased by 3.8%. Entropy increased by 5.6%. As expected, the preferred balancing policy for all datasets is \u03b2 low < 100. Additionally, half of the datasets reached best performance with \u03b2 high = 5 (for the rest we did not use under-sampling). It is worth noting that for two data sets (2 and 9) results show a trade-off between improving the macro accuracy at the expense of the micro one. In the end the decision about which metric to use in such cases depends on the gain from not missing out on the minority classes that may cost a small drop in the majority classes (which may still end up with relative CQA SEAQ ATIS None (78.8, 75.7) (58.3, 57.5) (98.5, 92.4) Dbl.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 6",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "BalaGen Improving Real-Life SUC Corpora",
"sec_num": "5.2.6"
},
{
"text": "(82.1, 77.5) (61, 59.9) (98.7, 96.6) high performance) that the system owner should weigh. Further, we evaluated the classifier performance on the generated sentences alone (following (Wang et al., 2019) ), without the train set, and found that micro accuracy falls by 17.5% and macro accuracy by 7.9%. This metric represents how well the generated dataset represents the train set. This interesting finding should be further researched together with the diversity of the entire corpus.",
"cite_spans": [
{
"start": 184,
"end": 203,
"text": "(Wang et al., 2019)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BalaGen Improving Real-Life SUC Corpora",
"sec_num": "5.2.6"
},
{
"text": "In this work we present BalaGen, a balancing-viageneration framework. We show that balancing textual datasets via generation is a promising technique. Furthermore our analysis reveals that the optimal balancing policy depends on the quality of the generated samples, the weak supervision mechanism applied, and the training of BalaGen's internal component. i.e., the generator and labelers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "In Balagen we assume that each sample contributes the same gain to its class accuracy. A possible enhancement of BalaGen could take into account not only the number of samples in each class, but also their quality. Alternatively, balancing policies could also consider class accuracy. Additional enhancements for BalaGen could include employing more advanced under-sampling technique such as data cleaning (Branco et al., 2016) , cluster-based under-sampling (Song et al., 2016) , or other distribution based techniques (Cui et al., 2019) .",
"cite_spans": [
{
"start": 406,
"end": 427,
"text": "(Branco et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 459,
"end": 478,
"text": "(Song et al., 2016)",
"ref_id": "BIBREF39"
},
{
"start": 520,
"end": 538,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "BalaGen can also be used to explore setting \u03b2 low > 100. Additional enhancements may also include investigating more sophisticated weak labeling ensemble mechanisms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "We focused our evaluation on the Semantic Utterance Classification (SUC) domain which is characterized by highly imbalanced data. However, it is desirable to validate the applicability of our general balancing approach on other textual domains. (176, 4338) (-3.5, 6) 13.9 -997 10 (224, 3776) (6.3, 9) 3.7 453 Avg. (110, 5772) (3.8, 11) 5.6 2404 ",
"cite_spans": [
{
"start": 314,
"end": 335,
"text": "(110, 5772) (3.8, 11)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "6"
},
{
"text": "www.kaggle.com/siddhadev/atis-dataset-from-ms-cntk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/transformers 4 https://github.com/allenai/allennlp 5 https://github.com/jsvine/markovify",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Mitch Mason, Senior Offering manager, IBM Watson Assistant, for his support and collaboration in creating the CQA data set. Additionally, we thank Inbal Ronen and Ofer Lavi for their usefull comments on the manuscript.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "In the following, we provide parameters related to training the models of GPT-2 in Table 9 and Bert in Table 8. Auxiliary experimental results in Table 7 . In addition, we provide a snippet of the CQA dataset we introduced in this work in Table 1 .We used the transformers 3 Python package (Wolf et al., 2019) for GPT-2 (345M parameters) implementation, and Allen-NLP 4 (Gardner et al., 2017) as a training framework that contains BERT implementation. We used model perplexity and accuracy on the validation set as a train stopping criteria for GPT-2 and BERT, respectively. Specifically, we used BERT base as classifier in all our experiments. A Markov chain was implemented using the Markovify 5 package.We employed a single NVIDIA Tesla V100-SXM3 32GB GPU in all our experiments. The typical time for GPT-2 overall training was about 20 sec per 1K samples. The generation time was 200 seconds per 1K samples, and the BERT overall training time was about 7 minutes per 1K samples (50 epochs with 20 patient epochs).",
"cite_spans": [
{
"start": 292,
"end": 311,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF48"
},
{
"start": 372,
"end": 394,
"text": "(Gardner et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 83,
"end": 155,
"text": "Table 9 and Bert in Table 8. Auxiliary experimental results in Table 7",
"ref_id": null
},
{
"start": 241,
"end": 248,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Text generation for imbalanced text classification",
"authors": [
{
"first": "Sukree",
"middle": [],
"last": "Kachamas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sinthupinyo",
"suffix": ""
}
],
"year": 2019,
"venue": "16th International Joint Conference on Computer Science and Software Engineering (JCSSE)",
"volume": "",
"issue": "",
"pages": "181--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kachamas, and Sukree Sinthupinyo. 2019. Text generation for imbalanced text classification. In 2019 16th International Joint Conference on Com- puter Science and Software Engineering (JCSSE), pages 181-186. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Naama Tepper, and Naama Zwerdling. 2019. Not enough data? deep learning to the rescue! arXiv preprint",
"authors": [
{
"first": "Ateret",
"middle": [],
"last": "Anaby-Tavor",
"suffix": ""
},
{
"first": "Boaz",
"middle": [],
"last": "Carmeli",
"suffix": ""
},
{
"first": "Esther",
"middle": [],
"last": "Goldbraich",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Kantor",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kour",
"suffix": ""
},
{
"first": "Segev",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "Naama",
"middle": [],
"last": "Tepper",
"suffix": ""
},
{
"first": "Naama",
"middle": [],
"last": "Zwerdling",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03118"
]
},
"num": null,
"urls": [],
"raw_text": "Ateret Anaby-Tavor, Boaz Carmeli, Esther Goldbraich, Amir Kantor, George Kour, Segev Shlomov, Naama Tepper, and Naama Zwerdling. 2019. Not enough data? deep learning to the rescue! arXiv preprint arXiv:1911.03118.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Markov constraints for generating lyrics with style",
"authors": [
{
"first": "Gabriele",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Pachet",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Mirko Degli",
"middle": [],
"last": "Esposti",
"suffix": ""
}
],
"year": 2012,
"venue": "Ecai",
"volume": "242",
"issue": "",
"pages": "115--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriele Barbieri, Fran\u00e7ois Pachet, Pierre Roy, and Mirko Degli Esposti. 2012. Markov constraints for generating lyrics with style. In Ecai, volume 242, pages 115-120.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A study of the behavior of several methods for balancing machine learning training data",
"authors": [
{
"first": "Gustavo",
"middle": [
"E",
"A",
"P",
"A"
],
"last": "Batista",
"suffix": ""
},
{
"first": "Ronaldo",
"middle": [
"C"
],
"last": "Prati",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"Carolina"
],
"last": "Monard",
"suffix": ""
}
],
"year": 2004,
"venue": "ACM SIGKDD explorations newsletter",
"volume": "6",
"issue": "1",
"pages": "20--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gustavo EAPA Batista, Ronaldo C Prati, and Maria Carolina Monard. 2004. A study of the be- havior of several methods for balancing machine learning training data. ACM SIGKDD explorations newsletter, 6(1):20-29.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The ravenclaw dialog management framework: Architecture and systems",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Bohus",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"I"
],
"last": "Rudnicky",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer Speech & Language",
"volume": "23",
"issue": "3",
"pages": "332--361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Bohus and Alexander I Rudnicky. 2009. The ravenclaw dialog management framework: Architec- ture and systems. Computer Speech & Language, 23(3):332-361.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A survey of predictive modeling under imbalanced distributions",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Branco",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Torgo",
"suffix": ""
},
{
"first": "Rita",
"middle": [],
"last": "Ribeiro",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Comput. Surv",
"volume": "49",
"issue": "2",
"pages": "1--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula Branco, Luis Torgo, and Rita Ribeiro. 2016. A survey of predictive modeling under imbalanced dis- tributions. ACM Comput. Surv, 49(2):1-31.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A systematic study of the class imbalance problem in convolutional neural networks",
"authors": [
{
"first": "Mateusz",
"middle": [],
"last": "Buda",
"suffix": ""
},
{
"first": "Atsuto",
"middle": [],
"last": "Maki",
"suffix": ""
},
{
"first": "Maciej A",
"middle": [],
"last": "Mazurowski",
"suffix": ""
}
],
"year": 2018,
"venue": "Neural Networks",
"volume": "106",
"issue": "",
"pages": "249--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. 2018. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249-259.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Smote: synthetic minority over-sampling technique",
"authors": [
{
"first": "Nitesh",
"middle": [
"V"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"W"
],
"last": "Bowyer",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"O"
],
"last": "Hall",
"suffix": ""
},
{
"first": "W",
"middle": [
"Philip"
],
"last": "Kegelmeyer",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of artificial intelligence research",
"volume": "16",
"issue": "",
"pages": "321--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. Smote: synthetic minority over-sampling technique. Journal of artifi- cial intelligence research, 16:321-357.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Special issue on learning from imbalanced data sets",
"authors": [
{
"first": "Nitesh",
"middle": [
"V"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Japkowicz",
"suffix": ""
},
{
"first": "Aleksander",
"middle": [],
"last": "Kotcz",
"suffix": ""
}
],
"year": 2004,
"venue": "ACM SIGKDD explorations newsletter",
"volume": "6",
"issue": "1",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitesh V Chawla, Nathalie Japkowicz, and Aleksander Kotcz. 2004. Special issue on learning from im- balanced data sets. ACM SIGKDD explorations newsletter, 6(1):1-6.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Class-balanced loss based on effective number of samples",
"authors": [
{
"first": "Yin",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Menglin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "9268--9277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. 2019. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9268-9277.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A multiple resampling method for learning from imbalanced data sets",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Estabrooks",
"suffix": ""
},
{
"first": "Taeho",
"middle": [],
"last": "Jo",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Japkowicz",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational intelligence",
"volume": "20",
"issue": "1",
"pages": "18--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Estabrooks, Taeho Jo, and Nathalie Japkowicz. 2004. A multiple resampling method for learning from imbalanced data sets. Computational intelli- gence, 20(1):18-36.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Wordnet. The encyclopedia of applied linguistics",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 2012. Wordnet. The encyclope- dia of applied linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Allennlp: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "On the class imbalance problem",
"authors": [
{
"first": "Xinjian",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yilong",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Cailing",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Gongping",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Guangtong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2008,
"venue": "Fourth international conference on natural computation",
"volume": "4",
"issue": "",
"pages": "192--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinjian Guo, Yilong Yin, Cailing Dong, Gongping Yang, and Guangtong Zhou. 2008. On the class imbalance problem. In 2008 Fourth international conference on natural computation, volume 4, pages 192-201. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A deep generative framework for paraphrase generation",
"authors": [
{
"first": "Ankush",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Prawaan",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Rai",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1709.05074"
]
},
"num": null,
"urls": [],
"raw_text": "Ankush Gupta, Arvind Agarwal, Prawaan Singh, and Piyush Rai. 2017. A deep generative frame- work for paraphrase generation. arXiv preprint arXiv:1709.05074.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Faq retrieval using attentive matching",
"authors": [
{
"first": "Sparsh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Vitor",
"middle": [
"R"
],
"last": "Carvalho",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "929--932",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sparsh Gupta and Vitor R Carvalho. 2019. Faq re- trieval using attentive matching. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 929-932.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adasyn: Adaptive synthetic sampling approach for imbalanced learning",
"authors": [
{
"first": "Haibo",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Edwardo",
"middle": [
"A"
],
"last": "Garcia",
"suffix": ""
},
{
"first": "Shutao",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE international joint conference on neural networks (IEEE world congress on computational intelligence)",
"volume": "",
"issue": "",
"pages": "1322--1328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haibo He, Yang Bai, Edwardo A Garcia, and Shutao Li. 2008. Adasyn: Adaptive synthetic sampling approach for imbalanced learning. In 2008 IEEE international joint conference on neural networks (IEEE world congress on computational intelli- gence), pages 1322-1328. IEEE.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The second dialog state tracking challenge",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 15th annual meeting of the special interest group on discourse and dialogue (SIGDIAL)",
"volume": "",
"issue": "",
"pages": "263--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014. The second dialog state tracking challenge. In Proceedings of the 15th annual meet- ing of the special interest group on discourse and dialogue (SIGDIAL), pages 263-272.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The class imbalance problem: Significance and strategies",
"authors": [
{
"first": "Nathalie",
"middle": [],
"last": "Japkowicz",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the Int'l Conf. on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathalie Japkowicz. 2000. The class imbalance prob- lem: Significance and strategies. In Proc. of the Int'l Conf. on Artificial Intelligence. Citeseer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The class imbalance problem: A systematic study",
"authors": [
{
"first": "Nathalie",
"middle": [],
"last": "Japkowicz",
"suffix": ""
},
{
"first": "Shaju",
"middle": [],
"last": "Stephen",
"suffix": ""
}
],
"year": 2002,
"venue": "telligent data analysis",
"volume": "6",
"issue": "",
"pages": "429--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathalie Japkowicz and Shaju Stephen. 2002. The class imbalance problem: A systematic study. In- telligent data analysis, 6(5):429-449.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Contextual augmentation: Data augmentation by words with paradigmatic relations",
"authors": [
{
"first": "Sosuke",
"middle": [],
"last": "Kobayashi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic re- lations.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Data augmentation using pre-trained transformer models",
"authors": [
{
"first": "Varun",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Eunah",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.02245"
]
},
"num": null,
"urls": [],
"raw_text": "Varun Kumar, Ashutosh Choudhary, and Eunah Cho. 2020. Data augmentation using pre-trained trans- former models. arXiv preprint arXiv:2003.02245.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.13461"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Imbalanced text classification: A term weighting approach",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Han",
"middle": [
"Tong"
],
"last": "Loh",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2009,
"venue": "Expert systems with Applications",
"volume": "36",
"issue": "1",
"pages": "690--701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Liu, Han Tong Loh, and Aixin Sun. 2009. Imbalanced text classification: A term weight- ing approach. Expert systems with Applications, 36(1):690-701.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Note on the sampling error of the difference between correlated proportions or percentages",
"authors": [
{
"first": "Quinn",
"middle": [],
"last": "Mcnemar",
"suffix": ""
}
],
"year": 1947,
"venue": "Psychometrika",
"volume": "12",
"issue": "2",
"pages": "153--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153-157.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Machine learning from imbalanced data sets 101",
"authors": [
{
"first": "Foster",
"middle": [],
"last": "Provost",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the AAAI'2000 workshop on imbalanced data sets",
"volume": "68",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Foster Provost. 2000. Machine learning from im- balanced data sets 101. In Proceedings of the AAAI'2000 workshop on imbalanced data sets, vol- ume 68, pages 1-3. AAAI Press.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Imbalanced dataset classification and solutions: a review",
"authors": [
{
"first": "D",
"middle": [],
"last": "Ramyachitra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manikandan",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Computing and Business Research (IJCBR)",
"volume": "5",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Ramyachitra and P Manikandan. 2014. Imbalanced dataset classification and solutions: a review. In- ternational Journal of Computing and Business Re- search (IJCBR), 5(4).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Snorkel: Rapid training data creation with weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"H"
],
"last": "Bach",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Fries",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2020,
"venue": "The VLDB Journal",
"volume": "29",
"issue": "2",
"pages": "709--730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Stephen H Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R\u00e9. 2020. Snorkel: Rapid training data creation with weak su- pervision. The VLDB Journal, 29(2):709-730.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Augment to prevent: short-text data augmentation in deep learning for hate-speech classification",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Rizos",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Hemker",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "991--1000",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgios Rizos, Konstantin Hemker, and Bj\u00f6rn Schuller. 2019. Augment to prevent: short-text data augmentation in deep learning for hate-speech clas- sification. In Proceedings of the 28th ACM Inter- national Conference on Information and Knowledge Management, pages 991-1000.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A review of multi-class classification for imbalanced data",
"authors": [
{
"first": "Mahendra",
"middle": [],
"last": "Sahare",
"suffix": ""
},
{
"first": "Hitesh",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2012,
"venue": "International Journal of Advanced Computer Research",
"volume": "2",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahendra Sahare and Hitesh Gupta. 2012. A review of multi-class classification for imbalanced data. Inter- national Journal of Advanced Computer Research, 2(3):160.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Faq retrieval using queryquestion similarity and bert-based query-answer relevance",
"authors": [
{
"first": "Wataru",
"middle": [],
"last": "Sakata",
"suffix": ""
},
{
"first": "Tomohide",
"middle": [],
"last": "Shibata",
"suffix": ""
},
{
"first": "Ribeka",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1113--1116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wataru Sakata, Tomohide Shibata, Ribeka Tanaka, and Sadao Kurohashi. 2019. Faq retrieval using query- question similarity and bert-based query-answer rel- evance. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 1113-1116.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Enriching complex networks with word embeddings for detecting mild cognitive impairment from speech transcripts",
"authors": [
{
"first": "Leandro",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Edilson Anselmo Corr\u00eaa",
"middle": [],
"last": "J\u00fanior",
"suffix": ""
},
{
"first": "Osvaldo",
"middle": [],
"last": "Oliveira",
"suffix": ""
},
{
"first": "Diego",
"middle": [],
"last": "Amancio",
"suffix": ""
},
{
"first": "Let\u00edcia",
"middle": [],
"last": "Mansur",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "Alu\u00edsio",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1284--1296",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1118"
]
},
"num": null,
"urls": [],
"raw_text": "Leandro Santos, Edilson Anselmo Corr\u00eaa J\u00fanior, Os- valdo Oliveira Jr, Diego Amancio, Let\u00edcia Mansur, and Sandra Alu\u00edsio. 2017. Enriching complex net- works with word embeddings for detecting mild cog- nitive impairment from speech transcripts. In Pro- ceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1284-1296, Vancouver, Canada. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Intent classification for dialogue utterances",
"authors": [],
"year": 2019,
"venue": "IEEE Intelligent Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jetze Schuurmans and Flavius Frasincar. 2019. Intent classification for dialogue utterances. IEEE Intelli- gent Systems.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Active learning literature survey",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Prediction and entropy of printed english. Bell system technical journal",
"authors": [
{
"first": "Claude",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1951,
"venue": "",
"volume": "30",
"issue": "",
"pages": "50--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claude E Shannon. 1951. Prediction and entropy of printed english. Bell system technical journal, 30(1):50-64.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A bi-directional sampling based on k-means method for imbalance text classification",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xianglin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Sijun",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE/ACIS 15th International Conference on Computer and Information Science (ICIS)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Song, Xianglin Huang, Sijun Qin, and Qing Song. 2016. A bi-directional sampling based on k-means method for imbalance text classification. In 2016 IEEE/ACIS 15th International Conference on Com- puter and Information Science (ICIS), pages 1-5. IEEE.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "On strategies for imbalanced text classification using svm: A comparative study. Decision Support Systems",
"authors": [
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "48",
"issue": "",
"pages": "191--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aixin Sun, Ee-Peng Lim, and Ying Liu. 2009. On strategies for imbalanced text classification using svm: A comparative study. Decision Support Sys- tems, 48(1):191-201.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Intent determination and spoken utterance classification. Spoken language understanding: systems for extracting semantic information from speech",
"authors": [
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "93--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gokhan Tur and Li Deng. 2011. Intent determina- tion and spoken utterance classification. Spoken language understanding: systems for extracting se- mantic information from speech. Wiley, Chichester, pages 93-118.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Towards deeper understanding: Deep convex networks for semantic utterance classification",
"authors": [
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 IEEE international conference on acoustics, speech and signal processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5045--5048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gokhan Tur, Li Deng, Dilek Hakkani-T\u00fcr, and Xi- aodong He. 2012. Towards deeper understanding: Deep convex networks for semantic utterance classi- fication. In 2012 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5045-5048. IEEE.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Diversity analysis on imbalanced data sets by using ensemble models",
"authors": [
{
"first": "Shuo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2009,
"venue": "2009 IEEE Symposium on Computational Intelligence and Data Mining",
"volume": "",
"issue": "",
"pages": "324--331",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuo Wang and Xin Yao. 2009. Diversity analysis on imbalanced data sets by using ensemble models. In 2009 IEEE Symposium on Computational Intelli- gence and Data Mining, pages 324-331. IEEE.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Bilateral multi-perspective matching for natural language sentences",
"authors": [
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.03814"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural lan- guage sentences. arXiv preprint arXiv:1702.03814.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Is artificial data useful for biomedical natural language processing algorithms?",
"authors": [
{
"first": "Zixu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Ive",
"suffix": ""
},
{
"first": "Sumithra",
"middle": [],
"last": "Velupillai",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "240--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zixu Wang, Julia Ive, Sumithra Velupillai, and Lucia Specia. 2019. Is artificial data useful for biomedi- cal natural language processing algorithms? In Pro- ceedings of the 18th BioNLP Workshop and Shared Task, pages 240-249.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Eda: Easy data augmentation techniques for boosting performance on text classification tasks",
"authors": [
{
"first": "Jason",
"middle": [
"W"
],
"last": "Wei",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.11196"
]
},
"num": null,
"urls": [],
"raw_text": "Jason W Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting perfor- mance on text classification tasks. arXiv preprint arXiv:1901.11196.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Understanding data augmentation for classification: when to warp",
"authors": [
{
"first": "Sebastien",
"middle": [
"C"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Stamatescu",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"D"
],
"last": "McDonnell",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 international conference on digital image computing: techniques and applications (DICTA)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastien C Wong, Adam Gatt, Victor Stamatescu, and Mark D McDonnell. 2016. Understanding data aug- mentation for classification: when to warp? In 2016 international conference on digital image comput- ing: techniques and applications (DICTA), pages 1- 6. IEEE.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Conditional bert contextual augmentation",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Shangwen",
"middle": [],
"last": "Lv",
"suffix": ""
},
{
"first": "Liangjun",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Jizhong",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Songlin",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Computational Science",
"volume": "",
"issue": "",
"pages": "84--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong Han, and Songlin Hu. 2019. Conditional bert contextual augmentation. In International Conference on Com- putational Science, pages 84-95. Springer.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Building task-oriented dialogue systems for online shopping",
"authors": [
{
"first": "Zhao",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jianshe",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhoujun",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao Yan, Nan Duan, Peng Chen, Ming Zhou, Jianshe Zhou, and Zhoujun Li. 2017. Building task-oriented dialogue systems for online shopping. In Thirty- First AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Feature selection for text categorization on imbalanced data",
"authors": [
{
"first": "Zhaohui",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Xiaoyun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rohini",
"middle": [],
"last": "Srihari",
"suffix": ""
}
],
"year": 2004,
"venue": "ACM Sigkdd Explorations Newsletter",
"volume": "6",
"issue": "1",
"pages": "80--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhaohui Zheng, Xiaoyun Wu, and Rohini Srihari. 2004. Feature selection for text categorization on imbal- anced data. ACM Sigkdd Explorations Newsletter, 6(1):80-89.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "An empirical comparison of techniques for the class imbalance problem in churn prediction",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Baesens",
"suffix": ""
},
{
"first": "Seppe",
"middle": [],
"last": "Klm Vanden Broucke",
"suffix": ""
}
],
"year": 2017,
"venue": "Information sciences",
"volume": "408",
"issue": "",
"pages": "84--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Zhu, Bart Baesens, and Seppe KLM vanden Broucke. 2017. An empirical comparison of tech- niques for the class imbalance problem in churn pre- diction. Information sciences, 408:84-99.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Balancing policies on an example dataset distribution: A. Baseline (no augmentation and no balancing) B. Augment-only (without balancing), C. Na\u00efve-OS (B low = B high = |c n |), D. Partial-OS (B low < B high = |c n |), E. Partial-OS-US (B low < B high < |c n |). Abbreviations: OS -over-sampling, US -undersampling, |c n | -number of samples in the largest class."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Imbalanced state of real-life Semantic Utterance Classifier (SUC) datasets. For each dataset, classes are aggregated into 20 bins, and median samples-per-class values are presented as a blue line. Median values for each bin over all datasets are presented as green bars. from Stack Exchange. Stack Exchange is a network of question-and-answer (QA) websites on topics in diverse fields. It is the most balanced dataset in our analysis with an entropy of 4.69."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Macro accuracy for different text generation methods over varied \u03b2 low values employing Partial-OS balancing policy D for SEAQ dataset."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Data augmentation with B, C, D and E balancing policies stating number of augmented and undersampled sentences for CQA dataset. The figure shows that in practice some classes are not fully augmented although their number of samples is below \u03b2 low . Additionally, advanced balancing techniques -i.e. applying policy E -result in a more balanced distribution of the augmented dataset."
},
"TABREF0": {
"html": null,
"text": "Flow diagram of BalaGen: Given dataset distribution D; (1) balancing policy is applied to determine [B low , B high ] band; (2) balanced D S is created for training BalaGen's components; (3) Language model is first trained, and then used to generate D * with synthetic samples for the U R classes; (4) Weak labeling models are trained and then used to label samples in D * ; (5) generated samples are selected according to their labels up to B low creating D syn ; (6) D is augmented with D syn and OR classes in D are under-sampled. O -over-sampling, U -under-sampling.",
"num": null,
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Input Data Balancing</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>(2)</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Balancing</td><td>Data Generation</td><td>Weak Labeling of</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Policy</td><td>for</td><td>Generated</td></tr><tr><td/><td/><td colspan=\"4\">Original Dataset</td><td/><td/><td/><td>(1)</td><td>classes</td><td>(3)</td><td>Samples</td><td>(4)</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Generated</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Samples</td></tr><tr><td/><td/><td colspan=\"5\">Balanced Dataset</td><td/><td/><td>(6)</td><td>Selection</td><td>(5)</td></tr><tr><td colspan=\"10\">B low Train Under-samples B high Figure 1: 40% 40% A B No Band Over-samples |Cn| = |C5|</td></tr><tr><td/><td/><td/><td/><td/><td>30%</td><td/><td/><td/><td>30%</td></tr><tr><td/><td/><td/><td/><td>15%</td><td/><td/><td/><td>15%</td></tr><tr><td/><td/><td>5%</td><td>10%</td><td/><td/><td/><td>5%</td><td>10%</td></tr><tr><td/><td/><td>C1</td><td>C2</td><td>C3</td><td>C4</td><td>C5</td><td/><td/></tr><tr><td>C</td><td/><td colspan=\"2\">Blow = Bhigh = |Cn|</td><td/><td>D</td><td colspan=\"2\">Bhigh = |Cn|</td><td>E</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Bhigh &lt; |Cn|</td></tr><tr><td/><td/><td/><td>40%</td><td/><td colspan=\"2\">Blow &lt; |Cn|</td><td/><td>40%</td><td>Blow &lt; |Cn|</td></tr><tr><td/><td/><td>30%</td><td/><td/><td/><td/><td>30%</td><td/><td>30%</td><td>40%</td></tr><tr><td>5%</td><td>10%</td><td>15%</td><td/><td/><td>5%</td><td>10%</td><td>15%</td><td colspan=\"2\">5%</td><td>10%</td><td>15%</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "shows examples of intents and utterances from the dataset. The dataset contains 884 user utterances, divided into 57 intents (classes) as shown",
"num": null,
"content": "<table><tr><td>Intent</td><td>Sample Utterances</td></tr><tr><td colspan=\"2\">Quarantine \u2022 Can my friends visit me?</td></tr><tr><td>visits</td><td>\u2022 What is a safe distance when</td></tr><tr><td/><td>someone brings me groceries?</td></tr><tr><td>COVID</td><td>\u2022 What does covid stand for?</td></tr><tr><td colspan=\"2\">Description \u2022 How does the virus spread</td></tr><tr><td>Case</td><td>\u2022 How many coronavirus cases</td></tr><tr><td>Count</td><td>are there in my area?</td></tr><tr><td/><td>\u2022 How many ppl are infected in</td></tr><tr><td/><td>the us?</td></tr><tr><td>Symptoms</td><td>\u2022 What are the early symptoms</td></tr><tr><td/><td>of covid-19?</td></tr><tr><td/><td>\u2022 How to distinguish it from a</td></tr><tr><td/><td>common cold</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"html": null,
"text": "",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>describes the datasets used in</td></tr><tr><td>our experiments:</td></tr><tr><td>\u2022 COVID-19 QA (CQA) -new dataset introduced in</td></tr><tr><td>Section 4.</td></tr><tr><td>\u2022 Stack Exchange Frequently Asked Questions</td></tr><tr><td>(SEAQ) 1 -FAQ retrieval test collection extracted</td></tr></table>",
"type_str": "table"
},
"TABREF6": {
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>: Datasets. Abbreviations: CQA -COVID-</td></tr><tr><td>19 Q&amp;A, SEAQ -StackExchange FAQ, ATIS -Flight</td></tr><tr><td>Reservations. # Classes -number of classes. H -en-</td></tr><tr><td>tropy.</td></tr></table>",
"type_str": "table"
},
"TABREF8": {
"html": null,
"text": "Augmentation vs. balancing effect.",
"num": null,
"content": "<table><tr><td>The ta-</td></tr></table>",
"type_str": "table"
},
"TABREF9": {
"html": null,
"text": "presents our findings. \u03b2 low and \u03b2 high values were chosen by hyper-parameters search on a validation set.",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">CQA</td><td/><td colspan=\"2\">SEAQ</td><td/><td/><td>ATIS</td><td/></tr><tr><td>Policy</td><td>acc</td><td>H</td><td>\u2206S</td><td>acc</td><td>H</td><td>\u2206S</td><td>acc</td><td>H</td><td>\u2206S</td></tr><tr><td>A. Baseline</td><td colspan=\"2\">(77.3, 71.9) 3.7</td><td>0</td><td colspan=\"2\">(48.2, 46.2) 4.7</td><td>0</td><td colspan=\"2\">(97.4, 91.9) 1.1</td><td>0</td></tr><tr><td>C. Na\u00efve-OS</td><td colspan=\"9\">(80.9, 74.7) 3.9 1150 (55.5, 54.6) 4.8 1440 (98.2, 92.2) 1.4 1662</td></tr><tr><td>D. Partial-OS</td><td>(80.9, 75.5)</td><td>4</td><td>670</td><td>(61, 59.9)</td><td colspan=\"2\">4.8 642</td><td colspan=\"3\">(98.6, 96.6) 1.8 1170</td></tr><tr><td colspan=\"2\">E. Partial-OS-US (82.1, 77.5)</td><td>4</td><td>619</td><td>(61, 59.9)</td><td colspan=\"2\">4.8 642</td><td colspan=\"3\">(98.7, 96.6) 2.7 -1704</td></tr></table>",
"type_str": "table"
},
"TABREF10": {
"html": null,
"text": "Balancing policy effect. Showing micro accuracy, macro accuracy, entropy and change in number of samples. Abbreviations: acc -both (acc micro , acc macro ) values. H -entropy. \u2206S = |D Balanced | \u2212 |D T rain |",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF11": {
"html": null,
"text": "Weak supervision mechanism effect showing (acc micro , acc macro ). Dbl. -double voting with single labeler. Ens. -Ensemble.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF13": {
"html": null,
"text": "BalaGen applied on 10 real-life SUC datasets. Showing (intents, samples), relative increase in (micro accuracy, macro accuracy), relative increase in entropy and change in number of samples. Abbreviations: %acc -(acc micro , acc macro ) relative increase. %H -relative increase in entropy, \u2206S = |D Balanced | \u2212 |D T rain |",
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Balance labelers</td></tr><tr><td>Dataset</td><td>Balance generator</td><td>No</td><td>Yes</td></tr><tr><td/><td>No</td><td colspan=\"2\">(80.3,77.2) (78.8,74.5)</td></tr><tr><td>CQA</td><td>Yes</td><td colspan=\"2\">(80.9,77.4) (82.1,77.5)</td></tr><tr><td/><td>No</td><td colspan=\"2\">(56.1,54.7) (56.6,54.7)</td></tr><tr><td>SEAQ</td><td>Yes</td><td colspan=\"2\">(54.2,53.4) (61.0,59.9)</td></tr><tr><td/><td>No</td><td colspan=\"2\">(98.4,91.5) (98.4,94.8)</td></tr><tr><td>ATIS</td><td>Yes</td><td colspan=\"2\">(98.5,92.6) (98.7,96.6)</td></tr></table>",
"type_str": "table"
},
"TABREF14": {
"html": null,
"text": "Balancing generator input vs. balancing labelers inputs. Each tuple contains micro and macro accuracy measures",
"num": null,
"content": "<table><tr><td>Model Parameter</td><td>Value</td></tr><tr><td>model name</td><td>gpt2-medium</td></tr><tr><td>batch size</td><td>10</td></tr><tr><td>val every</td><td>5</td></tr><tr><td>example length</td><td>50</td></tr><tr><td>generate sample length</td><td>100</td></tr><tr><td>learning rate</td><td>1e-4</td></tr><tr><td>val batch count</td><td>80</td></tr><tr><td>patience</td><td>5</td></tr><tr><td>tf only train transformer layers</td><td>true</td></tr><tr><td>max generation attempts</td><td>50</td></tr><tr><td>optimizer</td><td>adam</td></tr></table>",
"type_str": "table"
},
"TABREF15": {
"html": null,
"text": "GPT-2 training and sampling parameters",
"num": null,
"content": "<table><tr><td>Model Parameters</td><td>Value</td></tr><tr><td>model name</td><td>bert-base-uncased</td></tr><tr><td>do lowercase</td><td>true</td></tr><tr><td>word splitter</td><td>bert-basic</td></tr><tr><td>top layer only</td><td>true</td></tr><tr><td>dropout p</td><td>0</td></tr><tr><td>batch size</td><td>8</td></tr><tr><td>num epochs</td><td>50</td></tr><tr><td>patience</td><td>20</td></tr><tr><td>grad clipping</td><td>5</td></tr><tr><td>optimizer</td><td>bert adam</td></tr><tr><td>learning rate</td><td>5e-5</td></tr><tr><td>warmup</td><td>0.1</td></tr></table>",
"type_str": "table"
},
"TABREF16": {
"html": null,
"text": "Bert Training parameters (used in all experiments)",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}