|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T10:38:59.318003Z" |
|
}, |
|
"title": "CLUSTERDATASPLIT: Exploring Challenging Clustering-Based Data Splits for Model Performance Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wecker", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ludwig-Maximilians-University", |
|
"location": { |
|
"settlement": "Munich", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Annemarie", |
|
"middle": [], |
|
"last": "Friedrich", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Heike", |
|
"middle": [], |
|
"last": "Adel", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper adds to the ongoing discussion in the natural language processing community on how to choose a good development set. Motivated by the real-life necessity of applying machine learning models to different data distributions, we propose a clustering-based data splitting algorithm. It creates development (or test) sets which are lexically different from the training data while ensuring similar label distributions. Hence, we are able to create challenging cross-validation evaluation setups while abstracting away from performance differences resulting from label distribution shifts between training and test data. In addition, we present a Python-based tool for analyzing and visualizing data split characteristics and model performance. We illustrate the workings and results of our approach using a sentiment analysis and a patent classification task.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper adds to the ongoing discussion in the natural language processing community on how to choose a good development set. Motivated by the real-life necessity of applying machine learning models to different data distributions, we propose a clustering-based data splitting algorithm. It creates development (or test) sets which are lexically different from the training data while ensuring similar label distributions. Hence, we are able to create challenging cross-validation evaluation setups while abstracting away from performance differences resulting from label distribution shifts between training and test data. In addition, we present a Python-based tool for analyzing and visualizing data split characteristics and model performance. We illustrate the workings and results of our approach using a sentiment analysis and a patent classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In natural language processing (NLP), the standard approach for tuning and selecting machine learning models is by means of using a held-out development set. However, recent work has pointed out that evaluation scores on a development set are often not indicative of the model performance on an unseen test set (Reimers and Gurevych, 2018; Zhou et al., 2020) . In addition, it is an open research question how to choose a good development set. While Gorman and Bedrick (2019) suggest to use random splits instead of a given benchmark development set, S\u00f8gaard et al. (2020) argue that randomly selecting a development set is not the best option either. This currently ongoing discussion in the NLP community highlights the need for more extensive research on model development using a variety of data splits.", |
|
"cite_spans": [ |
|
{ |
|
"start": 311, |
|
"end": 339, |
|
"text": "(Reimers and Gurevych, 2018;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 340, |
|
"end": 358, |
|
"text": "Zhou et al., 2020)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 450, |
|
"end": 475, |
|
"text": "Gorman and Bedrick (2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 551, |
|
"end": 572, |
|
"text": "S\u00f8gaard et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we directly add to this discussion by proposing a strategy in which models are evalu-ated in a setup that is challenging given the available dataset. For machine learning models, it is of utmost importance that they are applicable to different data distributions, possibly even coming from different domains. We argue that models should also be tested under data splits reflecting such real-world settings. Therefore, we propose a clustering-based data splitting approach that creates data splits where the development or test data differ from the training set. Our clustering algorithm ensures a similar label distribution across the produced cross-validation folds in order to abstract away from challenges due to label distribution shifts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In addition, we present CLUSTERDATASPLIT, a suite of Jupyter notebooks implementing several possibilities for splitting data into training and development sets, or into folds for cross-validation. In addition, our tool provides functionalities for visualizing different data splits and thus may help clarifying their influence on model performance. Furthermore, it offers several ways to inspect the data, such as visualization of dataset key figures, scatter plots, label distributions and sentence length distributions. The tool is publicly available. 1 In sum, our contributions are as follows: (i) We propose a clustering-based data splitting algorithm that creates a challenging evaluation setup and has the potential to reveal difficulties when the model is applied on data that deviates from the training data (Section 3). (ii) We present CLUSTERDATASPLIT, a tool that allows to split data into training and development sets and provides different visualizations for analyzing the data splits as well as model performance (Section 4). Finally, we demonstrate a worked example of using our data inspection tool as well as results for our clustering-based data splitting methods for two sequence classification tasks, sentiment analysis and patent classification (Section 5).", |
|
"cite_spans": [ |
|
{ |
|
"start": 554, |
|
"end": 555, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section, we give an overview of related work on evaluation and data splitting techniques as well as analysis tools in the NLP community. Gorman and Bedrick (2019) show that model rankings on standard splits (Collins, 2002) can often not be reproduced using randomly generated splits. In a follow-up study, S\u00f8gaard et al. (2020) find that evaluation results on random splits are often too optimistic, even for in-domain test samples. In order to make the data splits more challenging, they introduce heuristic splits based on sentence length and adversarial splits based on Wasserstein distance. The clustering-based data split we propose in this paper follows the same idea of creating a challenging evaluation setup. In contrast to the adversarial splits proposed by S\u00f8gaard et al. (2020) , our splitting strategy controls for label distribution, allowing to abstract away from the effect of different label distributions on the evaluation score (Johnson and Khoshgoftaar, 2019; Buda et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 170, |
|
"text": "Gorman and Bedrick (2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 230, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 335, |
|
"text": "S\u00f8gaard et al. (2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 776, |
|
"end": 797, |
|
"text": "S\u00f8gaard et al. (2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 955, |
|
"end": 987, |
|
"text": "(Johnson and Khoshgoftaar, 2019;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 988, |
|
"end": 1006, |
|
"text": "Buda et al., 2018)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Another direction of work aims at tailoring more challenging test sets by either grouping or handcrafting datasets. These approaches typically require manual work for each dataset. Hendrycks et al. (2020) , for instance, introduce a robustness benchmark 2 by assigning similar datasets as out-ofdistribution (OOD) test sets. In their experiments, they show that the OOD test setting leads to severe performance drops for many models except transformers. Gardner et al. (2020) create contrast sets 3 for commonly used NLP benchmark datasets by adding hand-crafted data points for each test set example. Another direction of creating challenging evaluation sets comes from the idea of adversarial training (Szegedy et al., 2013; Goodfellow et al., 2015) . In the context of NLP, the creation of adversarial examples typically involves task and dataset specific methods and often relies on hand-crafted rules or other forms of human influence. Examples are reading comprehension or question answering datasets with altered questions or documents (Jia and Liang, 2017; Wallace et al., 2019) or machine translation datasets for which typos are introduced (Belinkov and Bisk, 2018) . In contrast to all those approaches, our data splitting approach is purely data-driven and creates a challenging evaluation setting within one dataset fully automatically.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 204, |
|
"text": "Hendrycks et al. (2020)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 454, |
|
"end": 475, |
|
"text": "Gardner et al. (2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 704, |
|
"end": 726, |
|
"text": "(Szegedy et al., 2013;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 727, |
|
"end": 751, |
|
"text": "Goodfellow et al., 2015)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1043, |
|
"end": 1064, |
|
"text": "(Jia and Liang, 2017;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1065, |
|
"end": 1086, |
|
"text": "Wallace et al., 2019)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1150, |
|
"end": 1175, |
|
"text": "(Belinkov and Bisk, 2018)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Challenging Evaluation Sets", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Existing NLP model analysis tools are often tailored towards specific tasks or models (e.g., Wang et al., 2019; Zhou et al., 2020) . In the remainder of this section, we give examples for model-agnostic tools as they are more related to our tool. Grali\u0144ski et al. (2019) introduce GEVAL, 4 a tool for identifying features in the test set (e.g., n-grams) which are especially challenging to models. The tool CHECKLIST 5 by Ribeiro et al. (2020) explores different model capabilities, such as robustness, vocabulary or temporal understanding. Furthermore, it supports the creation of test examples via templates. In contrast to those two tools, our tool offers visualizations of a variety of statistically interesting aspects of data splits in order to better understand model behaviours. Wu et al. (2019) provide an interactive tool for error analysis called ERRUDITE. 6 It supports, i.a., automated counterfactual rewriting for testing hypotheses about errors. In contrast to all mentioned tools, our tool implements different data splitting techniques, making it easy to compare model performance when using different data splits.", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 111, |
|
"text": "Wang et al., 2019;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 130, |
|
"text": "Zhou et al., 2020)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 247, |
|
"end": 270, |
|
"text": "Grali\u0144ski et al. (2019)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 787, |
|
"end": 803, |
|
"text": "Wu et al. (2019)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 868, |
|
"end": 869, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tools for Analyzing NLP models", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We here propose a novel algorithm dubbed Size and Distribution Sensitive K-means (SDS Kmeans) which has two important properties relevant to generating challenging clustering-based cross-validation folds. The SDS K-means algorithm produces clusters that (1) each have approximately the same size, i.e., a similar number of data points, and that (2) are controlled for label distribution. In the default case, all clusters have a similar label distribution. By clustering the data points, we ensure that training and development data are different (for now in a lexical sense), hence creating a challenging evaluation setup. The SDS K-means algorithm thereby overcomes the following two difficulties: (1) Varying cluster sizes: If clusters had different sizes, performance differences could simply be attributed to varying amounts of training data. (2) Varying label distributions: If clusters had differing label distributions, performance differences could be primarily due to label distribution mismatches between training and test data. Hence, when using SDS K-means generated data folds with similar label distributions per cluster, differences in model performance can be attributed to qualitative rather than quantitative differences between the folds. In the experiments of this paper, we keep the label distribution fixed. If the user wants to deviate from the default case, s/he can also use the SDS K-means algorithm to generate folds with varying label distributions, and thereby also investigate the effects of different label distributions on model performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering-based Data Splitting", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the following, we describe technical details and the derivation of our algorithm. All algorithms described in this section produce K clusters that are intended to be used in K-fold cross-validation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering-based Data Splitting", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "As a prerequisite for clustering, we transform the text data (sentences or clauses in our case) into vector representations. First, each token is turned into a vector representation using a pre-trained Word2Vec (Mikolov et al., 2013) model. Then, for each input example the word vectors are averaged. The vector representations are centered and scaled, and dimensionality reduction by principal component analysis is performed. The vectors obtained by these preparation steps then serve as input for the K-means based algorithms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 233, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "3.1" |
|
}, |
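
{

"text": "The following is a minimal sketch of this preprocessing pipeline, assuming a pre-trained Word2Vec model in gensim's KeyedVectors format and whitespace tokenization; the file name 'word2vec.bin' and the number of principal components are placeholders rather than values prescribed by our tool:\n\nimport numpy as np\nfrom gensim.models import KeyedVectors\nfrom sklearn.decomposition import PCA\nfrom sklearn.preprocessing import StandardScaler\n\ndef embed(texts, kv):\n    # average the Word2Vec vectors of all in-vocabulary tokens per input example\n    vecs = []\n    for text in texts:\n        toks = [t for t in text.lower().split() if t in kv]\n        vecs.append(np.mean([kv[t] for t in toks], axis=0) if toks else np.zeros(kv.vector_size))\n    return np.vstack(vecs)\n\ntexts = ['a great movie', 'not worth watching']\nkv = KeyedVectors.load_word2vec_format('word2vec.bin', binary=True)\nX = StandardScaler().fit_transform(embed(texts, kv))  # center and scale\nX = PCA(n_components=min(50, len(texts))).fit_transform(X)  # dimensionality reduction",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing",

"sec_num": null

},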
|
{ |
|
"text": "For the generation of clustering-based data splits, we decided to work with K-means based clustering algorithms because of their low time complexity and high computing efficiency (Xu and Tian, 2015) . The standard K-means algorithm (Lloyd, 1982) belongs to the group of partitioning clustering algorithms, i.e., the number of clusters to be formed needs to be specified beforehand. It is an expectation maximization algorithm that has the goal of minimizing the cluster-internal variances. As such, it iterates between an expectation step in which data points are assigned to clusters, and a maximization or update step in which cluster centers are re-calculated. The standard K-means algorithm produces clusters with strongly varying size and label distributions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 198, |
|
"text": "(Xu and Tian, 2015)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 232, |
|
"end": 245, |
|
"text": "(Lloyd, 1982)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "K-means and Size Sensitive K-means", |
|
"sec_num": "3.2" |
|
}, |
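
{

"text": "For illustration, the expectation and maximization steps of standard K-means can be sketched in a few lines of NumPy; this is a didactic re-implementation, not the code used in our tool, which relies on scikit-learn (see Section 4.2):\n\nimport numpy as np\n\ndef kmeans(X, k, iters=100, seed=0):\n    rng = np.random.default_rng(seed)\n    centers = X[rng.choice(len(X), size=k, replace=False)]\n    for _ in range(iters):\n        # expectation step: assign each data point to its closest cluster center\n        assign = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)\n        # maximization step: re-calculate the cluster centers\n        centers = np.vstack([X[assign == j].mean(0) if (assign == j).any() else centers[j] for j in range(k)])\n    return assign, centers",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "K-means and Size Sensitive K-means",

"sec_num": null

},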
|
{ |
|
"text": "The Same Size K-means algorithm is a variant of the K-means algorithm that ensures that all clusters are assigned approximately the same number of data points. We implement this algorithm following a tutorial 7 by Schubert and Zimek (2019) . During the initial assignment step, points are assigned to the different cluster centers following an order measure, which corresponds to the difference in distance from the point to the closest and the furthest cluster center. This means that points which have the highest absolute difference in distance from closest to furthest cluster center are assigned to their closest cluster center first. Once one of the clusters reaches its maximum size, the order measure is re-calculated and points are again sorted before continuing the assignment process. In the following, the algorithm iterates between a maximization step in which cluster centers are recalculated and an update step that differs from standard K-means as follows. During the update step, data points can swap assigned clusters in a 1-on-1 fashion if the swap is associated with a decrease of the overall cluster-internal variances. While this algorithm ensures that cross-validation folds will be of equal size, the label distribution within the clusters may vary and hence result in favoring models that are misled by an unrealistic label distribution in the training or development data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 239, |
|
"text": "Schubert and Zimek (2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "K-means and Size Sensitive K-means", |
|
"sec_num": "3.2" |
|
}, |
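
{

"text": "A simplified sketch of the capacity-constrained initial assignment with the order measure described above; for brevity, it omits the re-sorting after a cluster fills up as well as the swap-based update step:\n\nimport numpy as np\n\ndef same_size_init(X, centers):\n    d = np.linalg.norm(X[:, None, :] - centers[None], axis=-1)\n    # order measure: difference in distance to the closest vs. the furthest center;\n    # the most 'decided' points are assigned first\n    order = np.argsort(d.min(1) - d.max(1))\n    cap = int(np.ceil(len(X) / len(centers)))  # maximum cluster size\n    sizes = np.zeros(len(centers), dtype=int)\n    assign = np.full(len(X), -1)\n    for i in order:\n        for c in np.argsort(d[i]):  # try the closest center first\n            if sizes[c] < cap:  # skip clusters that already reached their maximum size\n                assign[i] = c\n                sizes[c] += 1\n                break\n    return assign",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "K-means and Size Sensitive K-means",

"sec_num": null

},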
|
{ |
|
"text": "As a remedy for the above mentioned problems, we propose an extension, the SDS K-means algorithm. Like the Same Size K-means algorithm, it consists of an initial assignment and swapping-based update steps. However, in this case, the maximum number of points per cluster are determined separately for each label, corresponding to the desired distribution of labels for each cluster as specified by the user. In the default case, the label distribution per cluster corresponds to the overall label distribution in the training data. The initial assignment and the update steps are conducted separately for each label. This ensures that the label distribution per cluster matches exactly the distribution specified by the user. The pseudo-code of the algorithm is outlined in Figure 1 . We initialize the algorithm multiple times and choose the run with lowest average cluster-internal variances as the final partition.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 773, |
|
"end": 781, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Size and Distribution Sensitive K-means", |
|
"sec_num": "3.3" |
|
}, |
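
{

"text": "Building on the same_size_init sketch above, the per-label assignment at the core of SDS K-means can be outlined as follows; this is again a simplification of the pseudo-code in Figure 1, shown for the default case in which every cluster receives an equal share of each label:\n\nimport numpy as np\n\ndef sds_assign(X, y, centers):\n    # conduct the capacity-constrained assignment separately for each label so that\n    # the label distribution per cluster matches the overall label distribution\n    assign = np.full(len(X), -1)\n    for label in np.unique(y):\n        idx = np.where(y == label)[0]\n        assign[idx] = same_size_init(X[idx], centers)\n    return assign",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Size and Distribution Sensitive K-means Algorithm (SDS K-means)",

"sec_num": null

},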
|
{ |
|
"text": "The tool CLUSTERDATASPLIT consists of three Jupyter Notebooks. Hence, using the tool requires basic Python skills. Communication between the tool and user code for machine learning models is based on .tsv files containing the text data instances and labels. Figure 2 illustrates the workflow and the separation of tasks between the tool and client code. Currently, our tool and algorithm only support sequence classification tasks with a single label per dataset instance. We leave extensions for sequence tagging tasks to future work.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 266, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "CLUSTERDATASPLIT Tool", |
|
"sec_num": "4" |
|
}, |
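
{

"text": "As an illustration of this file-based interface, an input .tsv might look as follows; the column names are illustrative, not a prescribed schema:\n\nid\ttext\tlabel\n1\ta great movie\tpositive\n2\tnot worth watching\tnegative",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "CLUSTERDATASPLIT Tool",

"sec_num": null

},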
|
{ |
|
"text": "The first notebook provides an introduction to key NLP dataset characteristics, such as label distribu-tion, sentence token length and token frequency. It serves for a first exploration of the data and its key figures before using the data for model training and evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DATA ANALYSIS", |
|
"sec_num": "4.1" |
|
}, |
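
{

"text": "A few pandas one-liners capture the kind of key figures this notebook reports; the file and column names ('data.tsv', 'text', 'label') are illustrative:\n\nimport pandas as pd\n\ndf = pd.read_csv('data.tsv', sep='\\t')\nprint(df['label'].value_counts(normalize=True))  # label distribution\nprint(df['text'].str.split().str.len().describe())  # sentence length in tokens",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "DATA ANALYSIS",

"sec_num": null

},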
|
{ |
|
"text": "In order to generate clustering-based data splits, this notebook groups the data into a pre-defined number of groups, the so-called data folds. These data folds then serve as input in a cross-validation setting, where they are combined to build the data split in training and development data. To group the data, different K-means based algorithms, which are outlined in detail in Section 3, are available. Moreover, the tool also supports the generation of randomized partitions, which can be combined to form randomized (baseline) data splits. For generating the data splits, the user has to input the complete training data in a .tsv format and select an algorithm to generate the data folds. In most cases, s/he will want to compare the SDS K-means and randomized splitting. The tool then generates an output file with the data point IDs and the fold ID information per data point. The user then has to input this information into his/her model training setup, using a cross-validation framework in which the model is trained K times, training on K-1 folds and evaluating on the remaining fold in each iteration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CREATING DATA SPLITS", |
|
"sec_num": "4.2" |
|
}, |
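
{

"text": "A minimal cross-validation loop around the generated fold IDs might look as follows; train_model and evaluate stand in for the user's own training and evaluation code, and the file and column names are illustrative:\n\nimport pandas as pd\n\ndf = pd.read_csv('folds.tsv', sep='\\t')  # tool output: one fold ID per data point\nK = df['fold'].nunique()\nscores = []\nfor k in range(K):\n    train = df[df['fold'] != k]  # train on K-1 folds\n    dev = df[df['fold'] == k]  # evaluate on the remaining fold\n    model = train_model(train['text'], train['label'])\n    scores.append(evaluate(model, dev['text'], dev['label']))\nprint(sum(scores) / K)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "CREATING DATA SPLITS",

"sec_num": null

},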
|
{ |
|
"text": "To date, our vector representations of the input texts are mostly based on lexical information (see Section 3.1). 8 For clustering, we use the K-means implementation in the Python scikit-learn package (Pedregosa et al., 2011) , and apply the initialization method of Arthur and Vassilvitskii (2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 225, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 298, |
|
"text": "Arthur and Vassilvitskii (2007)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CREATING DATA SPLITS", |
|
"sec_num": "4.2" |
|
}, |
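
{

"text": "A sketch of the underlying call, where n_clusters corresponds to the number of folds K and X holds the vectors from Section 3.1:\n\nfrom sklearn.cluster import KMeans\n\n# 'k-means++' is the seeding method of Arthur and Vassilvitskii (2007)\nkm = KMeans(n_clusters=5, init='k-means++', n_init=10, random_state=0).fit(X)\nfold_ids = km.labels_",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "CREATING DATA SPLITS",

"sec_num": null

},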
|
{ |
|
"text": "After the training and evaluation steps based on different data splits are completed, the user can input the predictions obtained on the different evaluation sets into the third notebook using a .tsv format. The notebook calculates performance statistics and analyzes the dependence of results on data split characteristics. For example, the notebook visualizes data split characteristics such as relative size of the clusters, label distribution for the clusters and the mean sentence length. It also facilitates the comparison of different data splits and the performances obtained on these data splits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PERFORMANCE ANALYSIS", |
|
"sec_num": "4.3" |
|
}, |
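
{

"text": "For instance, per-fold accuracy and its spread can be derived from such a predictions file as follows; the column names are illustrative:\n\nimport pandas as pd\n\npreds = pd.read_csv('predictions.tsv', sep='\\t')  # columns: fold, gold, predicted\nper_fold = preds.groupby('fold').apply(lambda g: (g['gold'] == g['predicted']).mean())\nprint(per_fold.mean(), per_fold.std())  # mean accuracy and standard deviation across folds",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "PERFORMANCE ANALYSIS",

"sec_num": null

},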
|
{ |
|
"text": "In this section, we give worked examples of using our proposed data splitting method for two sequence classification tasks, the Stanford Sentiment Treebank (SST) binary sentiment analysis task and a patent multi-class classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Worked Examples", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As our classifiers, we use simple neural networks for sequence classification based on BERT for SST (Devlin et al., 2019) and SciBERT (Beltagy et al., 2019 ) for the patent classification task. The latter model was trained on a corpus of scientific publications and is hence closer to the kind of language present in patents. For both models, we feed the CLS token into a linear layer that outputs logits corresponding to the number of classes and apply a softmax activation. For model training, we use a cross-entropy loss. We implement our models using the HuggingFace Transformers library (Wolf et al., 2019) . The maximum sequence length of word piece tokens input to the BERT model is 128 and 256 for the two tasks, respectively. We use a batch size of 8, and AdamW (Loshchilov and Hutter, 2019) with learning rates of 4e \u22126 and 4e \u22125 , respectively. Otherwise, we apply default parameters. We train the models for up to 100 epochs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 121, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 134, |
|
"end": 155, |
|
"text": "(Beltagy et al., 2019", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 611, |
|
"text": "(Wolf et al., 2019)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 800, |
|
"text": "(Loshchilov and Hutter, 2019)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Models", |
|
"sec_num": "5.1" |
|
}, |
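
{

"text": "A condensed sketch of a single training step for the SST model; the checkpoint name 'bert-base-uncased' is an assumption (the text only specifies BERT), while maximum sequence length, optimizer and learning rate follow the values given above:\n\nimport torch\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\n\ntok = AutoTokenizer.from_pretrained('bert-base-uncased')\nmodel = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)\nopt = torch.optim.AdamW(model.parameters(), lr=4e-6)\n\nbatch = tok(['a great movie', 'not worth watching'], truncation=True, max_length=128, padding=True, return_tensors='pt')\nlabels = torch.tensor([1, 0])\nout = model(**batch, labels=labels)  # linear layer over the CLS representation; cross-entropy loss\nout.loss.backward()\nopt.step()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Models",

"sec_num": null

},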
|
{ |
|
"text": "We here give an example of using our proposed analysis for a binary sequence classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stanford Sentiment Treebank", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Dataset. The Stanford Sentiment Treebank (SST) (Socher et al., 2013) is based on movie review excerpts from the website rottentomatoes. com. For the task of binary sentiment classification, we use the SST-2 dataset, which is a variant of the original dataset containing only sentences and phrases with the label positive or negative. The dataset is slightly imbalanced with 44.28% negative and 55.72% positive labels. For our experiments, we use 61,398 sentences and phrases from the training and development part of the SST-2 dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 47, |
|
"end": 68, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stanford Sentiment Treebank", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Data splits. We perform the SST classification experiments in two settings, using our SDS Kmeans based clustered and a randomized crossvalidation setting. Figure 3 shows the visualization of the folds/clusters in two dimensions as generated by the CLUSTERDATASPLIT tool.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 163, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stanford Sentiment Treebank", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Performance results. Table 1 provides the results of training and evaluating models on clustering-based and randomized data splits in a cross-validation (CV) setting. The model summarized under the heading \"CV-1\" was trained on data folds 2-5 and evaluated on data fold 1. Note that the individual CV folds are not comparable between SDS K-means data splits (DS) and randomized DS, as each experimental setting uses different data splits. On average, the models trained and evaluated on the clustering-based data splits have a lower model performance than the models trained on the randomized data splits. Moreover, the standard deviation in model performance scores is higher for the clustering-based data splits than for the randomized data splits. Inspecting the differences between the clustered folds using CLUSTERDATASPLIT revealed that the sentences in the evaluation fold performing worst are on average shorter than the ones in the other folds, often consisting of short phrases that are difficult to classify also for human annotators. This underlines that the formation of training and development data based on the SDS K-means algorithm constitutes a more challenging evaluation environment than the random division of data into training and development data splits.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stanford Sentiment Treebank", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In this section, we report results for a multi-class classification task, i.e., assigning the correct Cooperative Patent Classification (CPC) code to a patent. Dataset. We retrieve a dataset of patents from USPTO 10 and represent each patent by its title and abstract. The latter are rather short, most sequences are shorter than 300 tokens. CPC codes indicate topics or application areas of a patent, and CPC classification is actually a hierarchical multi-label multi-class classification task. For simplicity, as our goal here is to demonstrate how our evaluation methods work for a simple multi-class clas-9 https://www.uspto.gov/web/patents/ classification/cpc/html/cpc.html 10 https://www.patentsview.org/download Figure 4 : Distribution of labels in patents dataset. sification task, we filter the dataset, keeping only instances that carry just a single label at the section level (7 labels, A-H, see Table 2 ). This leaves us with 6,458 instances with a skewed label distribution as shown in Figure 4 . Of course, due the availability of machine-readable patents in large quantities, it would be possible to sample a larger training set in order to improve classification accuracy. However, our goal here is not to create an ideal CPC classifier but to highlight the importance of constructing challenging evaluation setups especially in low-resource settings. For reproducibility, we open-source the patents dataset together with our tool.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 720, |
|
"end": 728, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 909, |
|
"end": 916, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1001, |
|
"end": 1009, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Patent Classification", |
|
"sec_num": "5.3" |
|
}, |
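
{

"text": "The single-label filtering step can be expressed as follows; the file name and the comma-separated 'sections' column are assumptions about the preprocessed data, not the released format:\n\nimport pandas as pd\n\ndf = pd.read_csv('patents.tsv', sep='\\t')\ndf['sections'] = df['sections'].str.split(',')  # e.g. 'A,B' -> ['A', 'B']\nsingle = df[df['sections'].str.len() == 1].copy()  # keep single-label instances only\nsingle['label'] = single['sections'].str[0]  # CPC section letter as the class label",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Patent Classification",

"sec_num": null

},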
|
{ |
|
"text": "Performance results. Table 3 shows the results obtained for the various cross-validation folds when assigning documents to folds randomly or when using our clustering-based data splits. Macroavg. F1 is computed as the average over per-class F1s. For the patent classification task, differences in performance obtained in the SDS K-means evaluation setup differ even more strongly from the randomized cross-validation setup than in the case of the sentiment analysis task. Mean accuracy across the tasks is estimated as 55.8 in the randomized setting, but only as 53.3 in the SDS K-means setting. Again, as in the sentiment analysis task, the standard deviation is much higher when using clustered data folds. Two folds are notable, one exhibiting a much lower F1 and one having a much higher F1 score than the average. There are no sentence length differences in this case. Figure 5 shows that fold 3 concentrates in one region of the plot, while fold 4 has a much higher within-cluster variance. However, this does not yet explain why fold 3 instances are harder to classify, as for instance cluster 1 is also very concentrated in one region and achieves around average results. This finding hints at the fact that while our approach is able to generate challenging and diverse evaluation setups, further research is necessary to develop a systematic understanding of the observed performance differences in the produced clustering-based crossvalidation setup. A very likely reason for the result in this case, as we are comparing macro-average F1, is low performance on the rare classes in the folds which possibly do not contain many \"good\" examples of these classes in the training folds.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 21, |
|
"end": 28, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 874, |
|
"end": 882, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Patent Classification", |
|
"sec_num": "5.3" |
|
}, |
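
{

"text": "Macro-avg. F1 as reported here corresponds to scikit-learn's unweighted macro averaging, under which rare classes weigh as much as frequent ones:\n\nfrom sklearn.metrics import f1_score\n\ngold = ['A', 'B', 'A', 'H']\npredicted = ['A', 'B', 'B', 'H']\nmacro_f1 = f1_score(gold, predicted, average='macro')  # mean of the per-class F1 scores",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Patent Classification",

"sec_num": null

},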
|
{ |
|
"text": "In this paper, we have introduced the concept of clustering-based data splits for model evaluation of sequence classification tasks. We have outlined the steps necessary to generate clustering-based data splits and described different K-means based algorithms for the creation of clustering-based data splits. Our newly proposed SDS K-means algorithm is able to generate clusters with equal (or controllable) size and label distribution. These properties make the algorithm perfectly suited for generating clustering-based data splits for challenging cross-validation experiments. Our worked examples show that model evaluation on clusteringbased data splits generated by the SDS K-means algorithm is more challenging than model evaluation on randomly selected data splits.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Outlook", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The experiments conducted in this paper present a first step in exploring clustering-based data splits. Directions for future research involve the steps necessary to generate the clustering-based data splits, further exploration of the data splits and additional use cases. With regard to the generation of clustering-based data splits, different text vector representations or clustering algorithms, like for example density-based approaches, could be explored. Experiments with different text vector representations and clustering algorithms could shed some light on the impact of different cluster structures on the evaluation setup. Moreover, it would be interesting to study why clustering-based data splits seem to have a stronger effect on some datasets than on others. S\u00f8gaard et al. (2020) introduce an experiment setting to compare the predictive performance of model evaluation on different data splits with regard to test sets coming from the same domain as the training data. Applying our clustering-based data splits in this experiment setting thus could deliver important information about the predictive quality of model performance scores obtained on clustering-based data splits.", |
|
"cite_spans": [ |
|
{ |
|
"start": 777, |
|
"end": 798, |
|
"text": "S\u00f8gaard et al. (2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Outlook", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Currently, our method works only for classification tasks and clustering is performed mainly based on lexical information. Hence, another interesting direction for future work is extending our ideas to data splitting for sequence tagging tasks, and to integrate other types of information such as syntactic features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Outlook", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/boschresearch/ clusterdatasplit_eval4nlp-2020", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/camelop/ NLP-Robustness 3 https://allennlp.org/contrast-sets", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://gonito.net/gitlist/geval.git/ 5 https://github.com/marcotcr/checklist 6 https://github.com/uwdata/errudite", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://elki-project.github.io/ tutorial/same-size_k_means", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In future versions of the tool, other representations reflecting, e.g., syntactic or contextualized word embedding information, may be included. However, we here opt for a simple lexically-based representation for clustering that does not intend to already capture too many features that may later on be used by the models themselves. If the user of the tool wants to substitute this input embedding method, s/he can easily do so by overwriting the respective Python functions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Christian Heumann for his insightful comments regarding this work. We also thank the members of the NLP and Semantic Reasoning Group at the Bosch Center for Artificial Intelligence for their support and fruitful discussions on the ideas presented in this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "K-Means++: The advantages of careful seeding", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Arthur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergei", |
|
"middle": [], |
|
"last": "Vassilvitskii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1027--1035", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://dl.acm.org/doi/10.5555/1283383.1283494" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Arthur and Sergei Vassilvitskii. 2007. K- Means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027- 1035, New Orleans, Louisiana. Society for Industrial and Applied Mathematics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Synthetic and natural noise both break neural machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Belinkov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine transla- tion. In International Conference on Learning Rep- resentations, Vancouver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "SciB-ERT: A pretrained language model for scientific text", |
|
"authors": [ |
|
{ |
|
"first": "Iz", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arman", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3615--3620", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1371" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3615- 3620, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "A systematic study of the class imbalance problem in convolutional neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Mateusz", |
|
"middle": [], |
|
"last": "Buda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atsuto", |
|
"middle": [], |
|
"last": "Maki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maciej A", |
|
"middle": [], |
|
"last": "Mazurowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Neural Networks", |
|
"volume": "106", |
|
"issue": "", |
|
"pages": "249--259", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1016/j.neunet.2018.07.011" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mateusz Buda, Atsuto Maki, and Maciej A Mazurowski. 2018. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249-259.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1118693.1118694" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden Markov models: Theory and ex- periments with perceptron algorithms. In Proceed- ings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 1-8. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "4171--4186", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1423" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Evaluating NLP models via contrast sets", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victoria", |
|
"middle": [], |
|
"last": "Basmova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ben", |
|
"middle": [], |
|
"last": "Bogin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sihao", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pradeep", |
|
"middle": [], |
|
"last": "Dasigi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dheeru", |
|
"middle": [], |
|
"last": "Dua", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanai", |
|
"middle": [], |
|
"last": "Elazar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ananth", |
|
"middle": [], |
|
"last": "Gottumukkala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nitish", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriel", |
|
"middle": [], |
|
"last": "Ilharco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Khashabi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Computing Research Repository", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.02709" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nel- son F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating NLP models via contrast sets. Computing Research Repository, arXiv:2004.02709. Version 1.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Explaining and harnessing adversarial examples", |
|
"authors": [ |
|
{

"first": "Ian",

"middle": [

"J"

],

"last": "Goodfellow",

"suffix": ""

},

{

"first": "Jonathon",

"middle": [],

"last": "Shlens",

"suffix": ""

},

{

"first": "Christian",

"middle": [],

"last": "Szegedy",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversar- ial examples. In International Conference on Learn- ing Representations, San Diego, California.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "We need to talk about standard splits", |
|
"authors": [ |
|
{ |
|
"first": "Kyle", |
|
"middle": [], |
|
"last": "Gorman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bedrick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2786--2791", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1267" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 2786-2791, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "GEval: Tool for debugging NLP datasets and models", |
|
"authors": [ |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Grali\u0144ski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Wr\u00f3blewska", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "Stanis\u0142awek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kamil", |
|
"middle": [], |
|
"last": "Grabowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomasz", |
|
"middle": [], |
|
"last": "G\u00f3recki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "254--262", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W19-4826" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Filip Grali\u0144ski, Anna Wr\u00f3blewska, Tomasz Sta- nis\u0142awek, Kamil Grabowski, and Tomasz G\u00f3recki. 2019. GEval: Tool for debugging NLP datasets and models. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 254-262, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Pretrained transformers improve out-of-distribution robustness", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Hendrycks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Dziedzic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rishabh", |
|
"middle": [], |
|
"last": "Krishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawn", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2744--2751", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.244" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 2744-2751, Online. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Adversarial examples for evaluating reading comprehension systems", |
|
"authors": [ |
|
{ |
|
"first": "Robin", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2021--2031", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D17-1215" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021-2031, Copenhagen, Denmark. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Survey on deep learning with class imbalance", |
|
"authors": [ |
|
{

"first": "Justin",

"middle": [

"M"

],

"last": "Johnson",

"suffix": ""

},

{

"first": "Taghi",

"middle": [

"M"

],

"last": "Khoshgoftaar",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Journal of Big Data", |
|
"volume": "6", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1186/s40537-019-0192-5" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justin M Johnson and Taghi M Khoshgoftaar. 2019. Survey on deep learning with class imbalance. Jour- nal of Big Data, 6(1):27.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Least squares quantization in PCM", |
|
"authors": [ |
|
{ |
|
"first": "Stuart", |
|
"middle": [], |
|
"last": "Lloyd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1982, |
|
"venue": "IEEE transactions on information theory", |
|
"volume": "28", |
|
"issue": "2", |
|
"pages": "129--137", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stuart Lloyd. 1982. Least squares quantization in PCM. IEEE transactions on information theory, 28(2):129-137.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Decoupled weight decay regularization", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Loshchilov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "7th International Conference on Learning Representations, ICLR 2019", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decou- pled weight decay regularization. In 7th Inter- national Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119, Lake Tahoe, Nevada.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Scikit-learn: Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vanderplas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cournapeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brucher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Perrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Why comparing single performance scores does not allow to draw conclusions about machine learning approaches", |
|
"authors": [ |
|
{ |
|
"first": "Nils", |
|
"middle": [], |
|
"last": "Reimers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Computing Research Repository", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1803.09578" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nils Reimers and Iryna Gurevych. 2018. Why compar- ing single performance scores does not allow to draw conclusions about machine learning approaches. Computing Research Repository, arXiv:1803.09578. Version 1.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [ |
|
"Tulio" |
|
], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tongshuang", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4902--4912", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/2020.acl-main.442" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "ELKI: A large open-source library for data analysis-ELKI release 0.7.5 \"Heidelberg", |
|
"authors": [ |
|
{ |
|
"first": "Erich", |
|
"middle": [], |
|
"last": "Schubert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arthur", |
|
"middle": [], |
|
"last": "Zimek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Computing Research Repository", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1902.03616" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Erich Schubert and Arthur Zimek. 2019. ELKI: A large open-source library for data analysis-ELKI release 0.7.5 \"Heidelberg\". Computing Research Reposi- tory, arXiv:1902.03616. Version 1.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Joost Bastings, and Katja Filippova. 2020. We need to talk about random splits", |
|
"authors": [ |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ebert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Computing Research Repository", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.00636" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anders S\u00f8gaard, Sebastian Ebert, Joost Bastings, and Katja Filippova. 2020. We need to talk about random splits. Computing Research Repository, arXiv:2005.00636. Version 1.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Intriguing properties of neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Szegedy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wojciech", |
|
"middle": [], |
|
"last": "Zaremba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joan", |
|
"middle": [], |
|
"last": "Bruna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dumitru", |
|
"middle": [], |
|
"last": "Erhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Goodfellow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Fergus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. In International Conference on Learning Representations, Banff, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Trick me if you can: Human-in-the-loop generation of adversarial examples for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pedro", |
|
"middle": [], |
|
"last": "Rodriguez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shi", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ikuya", |
|
"middle": [], |
|
"last": "Yamada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "387--401", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00279" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Ya- mada, and Jordan Boyd-Graber. 2019. Trick me if you can: Human-in-the-loop generation of adver- sarial examples for question answering. Transac- tions of the Association for Computational Linguis- tics, 7:387-401.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "VizSeq: a visual analysis toolkit for text generation tasks", |
|
"authors": [ |
|
{ |
|
"first": "Changhan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anirudh", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danlu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiatao", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "253--258", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-3043" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Changhan Wang, Anirudh Jain, Danlu Chen, and Ji- atao Gu. 2019. VizSeq: a visual analysis toolkit for text generation tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 253-258, Hong Kong, China. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Hugging-Face's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R'emi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Computing Research Repository", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1910.03771" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Hugging- Face's transformers: State-of-the-art natural lan- guage processing. Computing Research Repository, arXiv:1910.03771. Version 5.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Errudite: Scalable, reproducible, and testable error analysis", |
|
"authors": [ |
|
{ |
|
"first": "Tongshuang", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [ |
|
"Tulio" |
|
], |
|
"last": "Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Heer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "747--763", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1073" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2019. Errudite: Scalable, repro- ducible, and testable error analysis. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 747-763, Flo- rence, Italy. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A Comprehensive Survey of Clustering Algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Dongkuan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yingjie", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Annals of Data Science", |
|
"volume": "2", |
|
"issue": "2", |
|
"pages": "165--193", |
|
"other_ids": { |
|
"DOI": [ |
|
"https://link.springer.com/article/10.1007/s40745-015-0040-1" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dongkuan Xu and Yingjie Tian. 2015. A Comprehen- sive Survey of Clustering Algorithms. Annals of Data Science, 2(2):165-193.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The curse of performance instability in analysis datasets: Consequences, source, and suggestions", |
|
"authors": [ |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yixin", |
|
"middle": [], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Computing Research Repository", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.13606" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiang Zhou, Yixin Nie, Hao Tan, and Mohit Bansal. 2020. The curse of performance instability in analy- sis datasets: Consequences, source, and suggestions. Computing Research Repository, arXiv:2004.13606. Version 1.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Pseudo code for SDS K-means algorithm.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "Workflow of the CLUSTERDATASPLIT tool. Grey boxes indicate client code, white boxes indicate functionalities included in Jupyter notebooks.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Visualization of data splits for patents dataset: SDS K-means clusters (left) vs. randomized (right).", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "F1 scores for binary sentiment classification on SST data. (Scores for individual folds are not comparable.)", |
|
"content": "<table><tr><td colspan=\"3\">Figure 3: Visualization of data splits for SST dataset: SDS K-means clusters (left) vs. randomized (right).</td></tr><tr><td/><td colspan=\"2\">CV-1 CV-2 CV-3 CV-4 CV-5 Mean Std</td></tr><tr><td colspan=\"2\">SDS K-means DS 94.5 89.8 95.2 93.9 92.8</td><td>93.2 1.9</td></tr><tr><td>Randomized DS</td><td>94.8 95.1 95.1 94.7 94.7</td><td>94.9 0.2</td></tr><tr><td>Table 1:</td><td/><td/></tr></table>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "CPC patent classification scheme. 9", |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"text": "Macro-average F1 scores for multi-class classification on patents dataset. (Scores for individual folds are not comparable.)", |
|
"content": "<table/>", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |