{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:41.359187Z"
},
"title": "Developing a Benchmark for Reducing Data Bias in Authorship Attribution",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Murauer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Innsbruck",
"location": {}
},
"email": "[email protected]"
},
{
"first": "G\u00fcnther",
"middle": [],
"last": "Specht",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e4t Innsbruck",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Authorship attribution is the task of assigning an unknown document to an author from a set of candidates. In the past, studies in this field use various evaluation datasets to demonstrate the effectiveness of preprocessing steps, features, and models. However, only a small fraction of works use more than one dataset to prove claims. In this paper, we present a collection of highly diverse authorship attribution datasets, which better generalizes evaluation results from authorship attribution research. Furthermore, we implement a wide variety of previously used machine learning models and show that many approaches show vastly different performances when applied to different datasets. We include pre-trained language models, for the first time testing them in this field in a systematic way. Finally, we propose a set of aggregated scores to evaluate different aspects of the dataset collection.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Authorship attribution is the task of assigning an unknown document to an author from a set of candidates. In the past, studies in this field use various evaluation datasets to demonstrate the effectiveness of preprocessing steps, features, and models. However, only a small fraction of works use more than one dataset to prove claims. In this paper, we present a collection of highly diverse authorship attribution datasets, which better generalizes evaluation results from authorship attribution research. Furthermore, we implement a wide variety of previously used machine learning models and show that many approaches show vastly different performances when applied to different datasets. We include pre-trained language models, for the first time testing them in this field in a systematic way. Finally, we propose a set of aggregated scores to evaluate different aspects of the dataset collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In authorship attribution, various machine learning techniques are used to predict who has written a specific document, given a set of candidate authors. This means that a dataset used for such experiments must be well-controlled for many aspects like topic, length, etc. to ensure that the model detects the writing style of an author rather than something else like the topic of the content (Grieve, 2007; Stamatatos, 2009) . For example, a model may find it easy to detect the author if each author writes about a single specific topic, and therefore the model would detect topic rather than style. Therefore, a well-controlled dataset should cover only one topic and one genre so that the only difference between the authors can be attributed to their writing style. Consequently, this makes the results of experiments using these well-controlled datasets prone to data bias, and they become difficult to generalize.",
"cite_spans": [
{
"start": 393,
"end": 407,
"text": "(Grieve, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 408,
"end": 425,
"text": "Stamatatos, 2009)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One approach to mitigate that bias is to distinguish the style of authors from the content, which allows to use cross-topic datasets. In this subset of tasks, the documents used for training the models are deliberately different from the texts used for validation thereafter. For example, given a set of journalists that write articles in multiple sections, when news articles about politics are used for training and articles about sports written by the same authors are used for testing, the overlap of topical content can be reduced and the model can only detect stylistic features. Similarly, cross-genre datasets take this one step further and require different genres of documents for training and testing (e.g., text messages and scientific essays). These datasets reduce the amount of stylistic information that can be used for each author to a subset that is expressive in both genres. In cross-language datasets, the training and testing data are written in different languages, further reducing this overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Even when using cross-domain (topic, genre, etc.) datasets, the difficulty of generalizing assumptions regarding the authors' writing style remains, as any conclusions can only be stated for the concrete authors in that dataset. This may be sufficient for some applications, in which the style of specific authors or well-controlled groups of authors is analyzed. However, statements that claim to hold up more generally require evaluation with multiple and diverse datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a collection of datasets for authorship attribution which cover a wide variety of aspects, fulfilling these needs. We include datasets with few and many candidate authors, with different numbers of documents per author, with differently sized documents, and cross-topic, crossgenre and cross-language datasets. We provide detailed suggestions on train/test splits and perform evaluation experiments with a wide variety of models to demonstrate how much the choice of the dataset can impact classification results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We provide exemplary attribution results for all datasets for a wide variety of machine learning models. We specifically include several pre-trained language models, as they have shown great success in different NLP fields over the last years, but a systematic analysis of their performance in the authorship attribution field has not yet been performed to the best of our knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Lastly, to evaluate how well a model performs for each aspect of the collection, we provide a set of aggregated scores that combines results from different datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions in this paper are therefore threefold: (1) we present a collection of selected, highly diverse datasets for authorship attribution that are able to better generalize evaluation results, (2) we benchmark several pre-trained language models on these datasets, providing a previously unavailable baseline in the field of authorship attribution, and (3) we provide a set of scores that evaluate a model based on the different aspects of the datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To ensure reproducibility and foster future research, we publish all code online 1 . Thereby, we focus on providing tooling that minimizes the efforts required to expand both the dataset collection as well as the evaluation scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While many previous studies use either only one dataset or don't specifically increase the diversity of the datasets used, they often fail to address this implicit data bias. Even foundational work in this field trying to categorize features in this field in a fundamental way can be prone to this issue. For example, Grieve (2007) measure the effectiveness of 39 different feature types for attribution. They address the importance of the dataset being representative for a language and explicitly explain the characteristics of the texts and authors in great detail, but consequently, by using a single dataset, their findings of feature performances are restricted to those very characteristics. Nevertheless, findings of such fundamental work are often referenced for research that uses completely different datasets.",
"cite_spans": [
{
"start": 318,
"end": 331,
"text": "Grieve (2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "One idea to mitigate any bias on the content of a dataset is to focus on the separation between style and content. This can be achieved by explicitly modelling the topic (Sari et al., 2018) or by using cross-topic or cross-domain datasets, where the training data and the test data have a different genre or contain texts about different topics (Stamatatos, 2013; Sapkota et al., 2015; Kestemont et al., 2018) . For the latter, the key idea is that by minimizing the topic or genre-specific content contained in the overlap of training and testing data, any performances measured must conclude from the stylistic information from the authors. Nevertheless, for both approaches, the bias towards those authors remains in the evaluation.",
"cite_spans": [
{
"start": 170,
"end": 189,
"text": "(Sari et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 345,
"end": 363,
"text": "(Stamatatos, 2013;",
"ref_id": "BIBREF24"
},
{
"start": 364,
"end": 385,
"text": "Sapkota et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 386,
"end": 409,
"text": "Kestemont et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Even from within a dataset, the choice of training and testing data can have a large impact on the outcome and additionally varies across languages (Eder and Rybicki, 2012) . Additionally, Eder (2013) demonstrated that the amount of text required to reliably attribute an author also depends on the language, and suspects that this result may be depending on the genre of text as well. Similarly, Luyckx and Daelemans (2011) show that while some feature types are more robust to the size of the dataset, the performance of others varies greatly depending on the number of documents per author and the number of authors.",
"cite_spans": [
{
"start": 148,
"end": 172,
"text": "(Eder and Rybicki, 2012)",
"ref_id": "BIBREF4"
},
{
"start": 189,
"end": 200,
"text": "Eder (2013)",
"ref_id": "BIBREF3"
},
{
"start": 397,
"end": 424,
"text": "Luyckx and Daelemans (2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this paper, we want to showcase a collection of diverse authorship attribution datasets and perform attribution experiments with several widely used exemplary machine learning models. The higher goal of our work is that it should be easy to make evaluation results of authorship attribution research easily comparable and also generalizable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Using pre-trained language models for authorship attribution has not been researched in great detail. Some approaches use them as part of a larger ensemble model (Fabien et al., 2020) or as a feature extraction step in front of the actual classifier (Barlas and Stamatatos, 2020) . However, when it comes to the performance of the unaltered models that are readily available, no overview for comparisons using widely used authorship attribution datasets are available to the best of our knowledge.",
"cite_spans": [
{
"start": 162,
"end": 183,
"text": "(Fabien et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 250,
"end": 279,
"text": "(Barlas and Stamatatos, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For this benchmark, we have selected a wide variety of authorship attribution corpora. Thereby, we focussed on multiple aspects of datasets that may influence the classification process and try to provide a diverse but controlled set of these aspects:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "\u2022 Genres: social media comments, business news, novels, reviews, etc. These lead to the selection of seven datasets, which will be described briefly in the following section. An overview of some basic statistics is presented in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The CCAT50 dataset (Liu, 2011) is a subset of the Reuters Corpus Volume 1 (Lewis et al., 2004) and contains 5,000 financial news articles from 50 authors, each having 50 training documents and 50 testing documents.",
"cite_spans": [
{
"start": 19,
"end": 30,
"text": "(Liu, 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "3.1"
},
{
"text": "The CL-Novels dataset (Bogdanova and Lazaridou, 2014) contains English novels by 19th-century authors (Jane Austen, Charlotte Bront\u00eb, Lewis Carroll, Rudyard Kipling, Robert Louis Stevenson, and Oscar Wilde) and some Spanish (human) translations of their works. Although the novels are split into 500 sentence chunks (as the original authors did), these chunks are still the largest documents in this benchmark (cf. Table 1 ).",
"cite_spans": [
{
"start": 22,
"end": 53,
"text": "(Bogdanova and Lazaridou, 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 415,
"end": 422,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "3.1"
},
{
"text": "The CMCC dataset (Goldstein-Stewart et al., 2008) contains texts from 21 students about 6 different topics (church, gay marriage, privacy rights, legalization of marijuana, war in Iraq, gender discrimination) in 6 different genres (email, essay, interview transcript, blog article, chat, discussion transcript). This means that depending on how the data is split into train and test parts, it can function as either cross-topic or cross-genre dataset.",
"cite_spans": [
{
"start": 17,
"end": 49,
"text": "(Goldstein-Stewart et al., 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "3.1"
},
{
"text": "The Guardian dataset (Stamatatos, 2013) consists of book reviews and opinion articles written by professional journalists of The Guardian newspaper. The documents are categorized into the two genres of book reviews and opinion articles, and the latter is further divided into four topics (politics, society, world, UK). Hence, similar to the CMCC dataset, the choice of the train/test split defines whether this dataset is cross-topic or cross-genre.",
"cite_spans": [
{
"start": 21,
"end": 39,
"text": "(Stamatatos, 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "3.1"
},
{
"text": "The IMDb62 dataset (Seroussi et al., 2010) contains movie reviews written by 62 users of the internet movie database platform 2 . It features by far the most documents per author (1,000).",
"cite_spans": [
{
"start": 19,
"end": 42,
"text": "(Seroussi et al., 2010)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "3.1"
},
{
"text": "The PAN18-FF dataset (Kestemont et al., 2018) consists of fan fiction prose texts written by admirers of authors, novels, TV shows, movies, etc. Thereby, the authors invent and create new stories surrounding the original universes, which are called fandoms. The dataset contains authors that have written fiction in multiple fandoms, making it a cross-domain. Furthermore, the dataset is divided into 10 explicit sub-problems, 2 for each of 5 languages (English, Spanish, French, Italian, and Polish). For each problem, training and testing documents are predefined. Note that these problems are single-language problems and the authors don't overlap across different problems, which means that this is a multilingual, but not a cross-lingual dataset.",
"cite_spans": [
{
"start": 21,
"end": 45,
"text": "(Kestemont et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "3.1"
},
{
"text": "The Reddit dataset (Murauer and Specht, 2019) consists of comments by multilingual users of the Reddit social media platform. It contains five different language pairs for which users have written comments in both languages (English as well as one of German, Spanish, Portuguese, Dutch, and French). Compared to the other datasets, it is the Dataset Splits Description CCAT50 2 predefined (50%/50%) CL-Novels * 15 leave-one-novel out CMCC\u00d7 G 6 leave-one-genre-out CMCC\u00d7 T 6 leave-one-topic-out Guardian\u00d7 G 2 leave-one-genre-out Guardian\u00d7 T 4 leave-one-topic-out IMDb62 5 stratified 5-fold PAN18-FF 10 predefined Reddit 10 leave-one-language-out most unbalanced dataset, as for some authors far more documents are available than for others (cf. column 'Imb' in Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 760,
"end": 767,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selected Datasets",
"sec_num": "3.1"
},
{
"text": "Deciding which parts of a dataset are used for training and testing plays an important role in interpreting the evaluation results, and being able to replicate results. In this section, we explain how these splits are selected for each dataset. Table 2 contains the overview of the train/test splits used in this paper. The CCAT50 dataset has predefined subsets for training and testing of equal size.",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 252,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation Splits",
"sec_num": "3.2"
},
{
"text": "The CL-Novels dataset is evaluated using leaveone-novel-out, as suggested by the original authors: Let D be the set of all novels, l n the language of novel n, and t n the original English title of both Spanish and English versions of n. Then, for \u2200n \u2208 D, the model is trained with all novels m = {m \u2208 D|l m = l n \u2227 t m = t n }. The model is then evaluated on n. For example, for the split that has the English version of Alice in Wonderland as test data, m consists of training documents that (1) are not English, and (2) are not (a translated version of) Alice in Wonderland. Consequently, all n that only appear in one language have the same training documents m. For these splits, the same model has to be trained only once and can be used for evaluation for all of the splits, increasing efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Splits",
"sec_num": "3.2"
},
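{
"text": "To make the leave-one-novel-out splits above concrete, the following minimal Python sketch (an illustration added for this description; the keys 'lang' and 'title' are assumed placeholders, not the benchmark's actual data format) enumerates the train/test splits:\n\ndef leave_one_novel_out(novels):\n    # novels: list of dicts with assumed keys 'lang' (language) and 'title' (original English title).\n    for test_novel in novels:\n        # Train on all novels that share neither language nor title with the test novel.\n        train = [d for d in novels\n                 if d['lang'] != test_novel['lang'] and d['title'] != test_novel['title']]\n        yield train, [test_novel]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Splits",
"sec_num": "3.2"
},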
{
"text": "The CMCC and Guardian datasets contain multiple topics and genres. We adopt both a leave-onegenre-out as well as a leave-one-topic-out strategy, which is in line with related studies using these resources. Hence, in the experiments, these datasets are listed twice: once as a cross-genre dataset, and once as a cross-topic dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Splits",
"sec_num": "3.2"
},
{
"text": "For the homogeneous IMDb62 dataset, we use a stratified 5-fold cross-validation scheme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Splits",
"sec_num": "3.2"
},
{
"text": "The PAN18-FF dataset is divided into 10 subproblems, each with a predefined training and testing part.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Splits",
"sec_num": "3.2"
},
{
"text": "Finally, for the Reddit dataset, we use leaveone-language-out splits, where all documents of language l 1 are used for testing, and all documents of the respective other language l 2 are used for training. This is repeated for l 1 and l 2 swapped, and for each language pair (sub-problem) in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Splits",
"sec_num": "3.2"
},
{
"text": "For the IMDb62, PAN18-FF, CMCC, and Guardian datasets, permission to use them is required from the original authors. The CCAT50 3 and Reddit 4 datasets are freely available online. We reconstructed the CL-Novels dataset from the information of the original paper (Bogdanova and Lazaridou, 2014) by downloading the appropriate novels from the Project Gutenberg 5 . We removed introduction texts by the hosting platform (which always include the name of the author) and any appendices and notes from translators. The novels are in the public domain and we make the resulting cleaned dataset available for download online 6 .",
"cite_spans": [
{
"start": 263,
"end": 294,
"text": "(Bogdanova and Lazaridou, 2014)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Availability",
"sec_num": "3.3"
},
{
"text": "The purpose of the evaluation experiments in this paper is to show that the performance of different models varies greatly across different datasets. We therefore perform classification experiments with several classification models to provide an impression of how the choice of a dataset influences the evaluation, but don't claim to provide the best possible configurations of those models for the analyzed datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup and Models",
"sec_num": "4"
},
{
"text": "We select several features in combination with a linear support vector machine, as well as several solutions based on pre-trained language models. Important parameters for the models are listed in Table 3 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiment Setup and Models",
"sec_num": "4"
},
{
"text": "As a simple baseline, we use tf/idf-normalized frequencies of character 3-grams. They have been shown to be effective in authorship attribution and are capable of capturing both content-related information as well as author-specific stylistic nuances (Sapkota et al., 2015; Stamatatos, 2013 Stamatatos, , 2017 . We further adopt two syntax-based features. Firstly, we use part-of-speech (POS) tag n-grams. These abstract from the content of the text and focus on the grammatical structure, which has been shown to identify authors in similar settings (Kaster et al., 2005; Bogdanova and Lazaridou, 2014) . Secondly, we utilize the DT-grams feature by Murauer and Specht (2021) , which uses POS tags, but additionally incorporates dependency grammar information. Universal POS tags (Nivre et al., 2016) are a mapping of language-specific tags into a universal, language-independent space, and we utilize them for the experiments on the cross-language datasets. For both syntax-based features, we use language-specific POS tags for the mono-language datasets, and universal POS tags for the crosslanguage datasets. Document embeddings have been shown to be effective for authorship attribution (G\u00f3mez-Adorno et al., 2018), and we experiment with both character 3-grams and words as tokens.",
"cite_spans": [
{
"start": 251,
"end": 273,
"text": "(Sapkota et al., 2015;",
"ref_id": "BIBREF20"
},
{
"start": 274,
"end": 290,
"text": "Stamatatos, 2013",
"ref_id": "BIBREF24"
},
{
"start": 291,
"end": 309,
"text": "Stamatatos, , 2017",
"ref_id": "BIBREF25"
},
{
"start": 551,
"end": 572,
"text": "(Kaster et al., 2005;",
"ref_id": "BIBREF9"
},
{
"start": 573,
"end": 603,
"text": "Bogdanova and Lazaridou, 2014)",
"ref_id": "BIBREF1"
},
{
"start": 651,
"end": 676,
"text": "Murauer and Specht (2021)",
"ref_id": "BIBREF16"
},
{
"start": 781,
"end": 801,
"text": "(Nivre et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features with Linear SVM",
"sec_num": "4.1"
},
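{
"text": "As an illustration of the character 3-gram baseline with a linear SVM, a minimal scikit-learn sketch could look as follows (the toy data and default parameters are placeholders and do not reflect the exact configuration listed in Table 3):\n\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.svm import LinearSVC\n\n# Toy placeholder data: one author label per training document.\ntrain_texts = ['a first example document', 'a second example document']\ntrain_authors = ['author_a', 'author_b']\ntest_texts = ['a document of unknown authorship']\n\n# tf/idf-weighted character 3-gram frequencies feeding a linear SVM.\nmodel = make_pipeline(\n    TfidfVectorizer(analyzer='char', ngram_range=(3, 3)),\n    LinearSVC(),\n)\nmodel.fit(train_texts, train_authors)\npredicted_authors = model.predict(test_texts)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features with Linear SVM",
"sec_num": "4.1"
},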
{
"text": "We test three transformer-based models: BERT (Devlin et al., 2018) , DistilBERT (Sanh et al., 2019) , and RoBERTa (Liu et al., 2019) . Previous works use this family of models in combination with an ensemble (Fabien et al., 2020) or as a feature extraction stage for further processing (Barlas and Stamatatos, 2020) , but no comprehensive analysis has been performed which uses these models without further modifications to the best of our knowledge. We use the parameters suggested by the respective original authors and use a sequence length of 256 tokens. As many documents are longer than that, we use a sliding window approach that extracts samples from the documents that fit into the maximum sequence length of the models.",
"cite_spans": [
{
"start": 45,
"end": 66,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 80,
"end": 99,
"text": "(Sanh et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 114,
"end": 132,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 208,
"end": 229,
"text": "(Fabien et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 286,
"end": 315,
"text": "(Barlas and Stamatatos, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Trained Language Models",
"sec_num": "4.2"
},
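{
"text": "The following minimal sketch illustrates one way such a sliding window could be realized with the Hugging Face tokenizers (the model name 'bert-base-cased' and the stride of 64 tokens are assumptions for illustration, not the settings used in our experiments):\n\nfrom transformers import AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\n\n# Cut one long document into overlapping windows of at most 256 tokens.\nencoded = tokenizer(\n    'a very long document that exceeds the maximum sequence length ...',\n    max_length=256,\n    truncation=True,\n    stride=64,  # overlap between consecutive windows (assumed value)\n    return_overflowing_tokens=True,\n)\nwindows = encoded['input_ids']  # one token id list per extracted window",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Trained Language Models",
"sec_num": "4.2"
},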
{
"text": "For all models presented in Section 4, we perform authorship attribution experiments for all datasets covered in Section 3 (using the train/test splits as discussed in Section 3.2). In Table 4 , the results of these classifications is shown, where each score (measured in macro-averaged F1) represents the mean score of the respective model and dataset for all train/test splits for that dataset. For example, the PAN18-FF dataset has 10 explicit subproblems, so each score in the PAN18 column of Table 4 represents the average score of the model for those 10 problems. As described in Section 3.2, the CMCC and Guardian datasets have two different ways of splitting the data (cross-topic and cross-genre). From these exemplary experiments, different conclusions can be drawn depending on which subsets of the results are analyzed. In the remainder of this section, we focus on the different aspects that the selected datasets feature.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 497,
"end": 504,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},
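{
"text": "As a minimal sketch of how such a per-dataset score can be computed (the gold and predicted labels below are toy placeholders), each dataset entry in Table 4 is the mean of the macro-averaged F1 scores of its splits:\n\nfrom statistics import mean\nfrom sklearn.metrics import f1_score\n\n# One (gold, predicted) label pair per train/test split of a dataset (toy values).\nsplit_predictions = [\n    (['a', 'b', 'a'], ['a', 'b', 'b']),\n    (['a', 'b', 'b'], ['a', 'b', 'b']),\n]\nsplit_scores = [f1_score(gold, pred, average='macro') for gold, pred in split_predictions]\ndataset_score = mean(split_scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5"
},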
{
"text": "The IMDb62 and CCAT50 datasets are large enough to extract differently sized subsets to analyze the performance of the models. While the Reddit dataset has comparably many documents, we refrained from including it in this experiment due to the imbalance of the dataset. Therefore, we restricted the number of training documents to 5, 10, 15, 20, 30, 40, and 50 documents for each Figure 2 : Sensitivity of tested models to cross-genre (\u00d7 G ) and cross-topic (\u00d7 T ) splits. The y-axis shows the standard deviation of the F1 score for all splits, high values indicate that the model performed well on some topics/genres and bad on others.",
"cite_spans": [],
"ref_spans": [
{
"start": 380,
"end": 388,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sensitivity to Dataset Size",
"sec_num": "5.1"
},
{
"text": "author, while not changing the number of test documents or the number of authors. The sampling of the documents was random, and all experiments were repeated 5 times to mitigate bias. In Figure 1 , the F1 score of these sizes is displayed. It can be seen that while the performance for all models rises gradually in the CCAT50 dataset, the transformerbased models have much more trouble with the IMDb62 dataset when provided with fewer training samples, but quickly catch up to the character 3-grams with more training data. While the reason for this discrepancy remains unanswered by this experiment, it shows that the two datasets exhibit different behaviors when used in combination with transformer-based classifiers. It is likely that other small-scaled datasets also display such incoherences, and it is therefore important to apply any model to multiple datasets to increase the meaningfulness of the evaluation.",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 195,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sensitivity to Dataset Size",
"sec_num": "5.1"
},
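{
"text": "A minimal sketch of such a per-author subsampling step (an assumed re-implementation for illustration, not the exact code used for the experiments) is shown below:\n\nimport random\n\ndef subsample_per_author(train_docs, train_authors, n, seed=0):\n    # Keep at most n randomly chosen training documents per author.\n    rng = random.Random(seed)\n    by_author = {}\n    for doc, author in zip(train_docs, train_authors):\n        by_author.setdefault(author, []).append(doc)\n    docs, labels = [], []\n    for author, author_docs in by_author.items():\n        for doc in rng.sample(author_docs, min(n, len(author_docs))):\n            docs.append(doc)\n            labels.append(author)\n    return docs, labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity to Dataset Size",
"sec_num": "5.1"
},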
{
"text": "In particular, studies along the lines of Luyckx and Daelemans (2011) analyzing comparable problems with varying dataset sizes should also be performed on as many datasets as possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity to Dataset Size",
"sec_num": "5.1"
},
{
"text": "From Table 4 , several conclusions can be drawn from the results of the cross-topic/genre datasets. Firstly, we can confirm that cross-genre classification is in general harder than cross-topic classification. Where explicit previous assumptions in this regard use single models and datasets (Stamatatos, 2013; Barlas and Stamatatos, 2020) , we affirm this finding with multiple models and datasets. As a single exception, the character 3-grams show a higher performance on the cross-genre version of the CMCC dataset compared to the cross-topic variant.",
"cite_spans": [
{
"start": 311,
"end": 339,
"text": "Barlas and Stamatatos, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Sensitivity to Genre and Topic",
"sec_num": "5.2"
},
{
"text": "The table also clearly reflects the difficulty that cross-genre situations impose on the pre-trained language models, which otherwise excel in the cross-topic splits. Figure 2 shows the standard deviation of the F1 score across the different topics and genres in the CMCC and Guardian datasets. Hence, high values mean that the models perform differently for the topics or genres in the dataset. The figure displays that most models are more sensitive to the genre of the text than they are to the topic, consistently over both CMCC and Guardian datasets. The Doc2Vec model with character 3-grams has a low overall prediction score (cf . Table 4) , and shows this effect to a smaller degree.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 175,
"text": "Figure 2",
"ref_id": null
},
{
"start": 636,
"end": 646,
"text": ". Table 4)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Sensitivity to Genre and Topic",
"sec_num": "5.2"
},
{
"text": "Note that this does not hold for the average performance over all splits (cf . Table 4) : in general, the tested models are performing better on the cross-topic datasets, and do so more consistently for all topics compared to the cross-genre datasets. This result can't be seen from Table 4 , and it means that for some cross-genre splits, some models may perform better than the average winner.",
"cite_spans": [],
"ref_spans": [
{
"start": 77,
"end": 87,
"text": ". Table 4)",
"ref_id": "TABREF5"
},
{
"start": 283,
"end": 290,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Sensitivity to Genre and Topic",
"sec_num": "5.2"
},
{
"text": "A surprising overall result for the cross-language datasets in Table 4 is the relatively high efficiency of the pre-trained language models for the Reddit dataset, as they have not been pre-trained using multilingual texts. This performance is not displayed in the other cross-language dataset containing 19th-century novels, which suggest that his behavior could stem from the genre of texts (social media comments), which are more likely to contain words common in multiple languages than documents from the 19th century. However, we suggest that even more datasets are required to answer this specific question.",
"cite_spans": [],
"ref_spans": [
{
"start": 63,
"end": 70,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Sensitivity to Language",
"sec_num": "5.3"
},
{
"text": "Cross-language classification problems are defined by two different choices regarding the candidate languages: Firstly, which languages are considered in the classification problem at all, and secondly, which of those languages are used for training and which are used for testing. de \u2192 en en \u2192 de es \u2192 en en \u2192 es fr \u2192 en en \u2192 fr nl \u2192 en en \u2192 nl pt \u2192 en en \u2192 pt Figure 3 : Sensitivity of selected models to the direction of the train/test split for each language pair. de \u2192 en denotes the score of the model that was trained with German and tested with English documents.",
"cite_spans": [],
"ref_spans": [
{
"start": 362,
"end": 370,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sensitivity to Language",
"sec_num": "5.3"
},
{
"text": "shows the macro F1 score of two cross-language models (univ. POS tag 3-grams and DT-grams) and the pre-trained language models for the Reddit dataset. The different colors represent the different language pairs of the Reddit dataset, and the two columns of each color represent the classification score (in macro F1) of both train/test directions used for the experiment (thereby, de \u2192 en denotes that the model was trained using German documents and tested on English texts). The performance of the models generally differs across different pairs, which suggests that any cross-language classification approach should use as many language pairs as possible to generalize well. However, cross-language datasets are difficult to compile, as authors writing in more than one language are sparse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity to Language",
"sec_num": "5.3"
},
{
"text": "In general, but especially for the language models, the figure also displays that the models perform better when they are fine-tuned using the non-English documents. This suggests that the choice of which language is used for training is an important choice that must be considered and reported by cross-language attribution studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sensitivity to Language",
"sec_num": "5.3"
},
{
"text": "The various datasets allow aggregation of the results according to the different aspects described in the previous sections, for which we formulate scores that are listed in Table 5 . Each score is calculated by averaging the results from all splits in the respective datasets, weighted by the inverse number of splits in each dataset. Table 6 shows these scores for all models tested in our experiments, while Table 7 shows the standard deviation of each model across the different splits of the respective score. The aim of this separation is to quickly provide an Score Description Datasets Used mono one lang./topic/genre IMDb62, CCAT50 sm 10 training texts/auth. IMDb62, CCAT50 ml mixed languages",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 336,
"end": 343,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Scores",
"sec_num": "6"
},
{
"text": "PAN18 \u00d7 T cross-topic CMCC\u00d7 T , Guardian\u00d7 T \u00d7 G cross-genre CMCC\u00d7 G , Guardian\u00d7 G \u00d7 L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scores",
"sec_num": "6"
},
{
"text": "cross-language Reddit, CL-Novels avg mean of all overview of the strengths and weaknesses that a model shows for specifics aspects of the datasets. For example, for the models presented in this paper, it is now more clearly detectable that the character 3-gram features are a very strong baseline, but fail at the cross-language tasks. The pre-trained language models show promising results for authorship attribution in summary, especially in the unexpected case of cross-language classification. Higher standard deviations indicate that these models are more prone to overfitting to specific fits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scores",
"sec_num": "6"
},
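{
"text": "A minimal sketch of how such an aggregated score can be computed from per-split results (the split values below are toy placeholders) averages the per-dataset means, so that every dataset contributes equally regardless of its number of splits:\n\nfrom statistics import mean\n\ndef aggregated_score(results_per_dataset):\n    # results_per_dataset: dict mapping dataset name -> list of per-split F1 scores.\n    # Averaging per dataset first weights each split by the inverse split count of its dataset.\n    return mean(mean(split_scores) for split_scores in results_per_dataset.values())\n\n# Toy example for a cross-topic score aggregated over two datasets.\ncross_topic = aggregated_score({\n    'CMCC xT': [0.52, 0.61, 0.58, 0.55, 0.60, 0.57],\n    'Guardian xT': [0.70, 0.66, 0.73, 0.69],\n})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scores",
"sec_num": "6"
},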
{
"text": "We want to emphasize once more that the aim of this paper is not to provide the best possible results for the tested models, but show how a more expressive evaluation result can be achieved by incorporating multiple datasets into the evaluation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scores",
"sec_num": "6"
},
{
"text": "The collection of datasets presented in this paper is by no means exhaustive in terms of covered dataset aspects, but it should provide a solid foundation for this purpose. For example, authorship attribution on a larger scale (Narayanan et al., gnall et al., 2019) requires datasets far beyond the sizes of the presented material, and in general, also requires different methods for evaluation and solving strategies. From a multilingual standpoint, the dataset collection thus far only contains several European languages, and those are among the smallest datasets in the benchmark. In the long term, our future plans involve including more datasets from as many languages as possible, and ideally also increase the number of cross-language datasets.",
"cite_spans": [
{
"start": 227,
"end": 245,
"text": "(Narayanan et al.,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "7"
},
{
"text": "As not all models and methods are intended to work with all types of text, we envision welldefined subsets of the benchmark covering the possible application areas for many models. For example, even when a model is only targeted to classify social media text, we aim to provide multiple datasets fulfilling this requirement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "7"
},
{
"text": "To ensure the continued attribution to this collection, we publish a set of tools 7 which minimize the effort required to add additional datasets to this collection. These tools make it easy to (1) bring the dataset to a common format, (2) define train/test splits that the dataset should be used with, and (3) specify which of these splits contribute to the 7 https://git.uibk.ac.at/csak8736/ authbench scores presented in the previous section. Thereby, contributions can be made to existing scores by providing more datasets to reassure them, or add additional scores to the collection. We hope to timely contribute more multilingual datasets to the ml score and expand on different dataset sizes beyond the few thresholds presented by the sm score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations and Future Work",
"sec_num": "7"
},
{
"text": "In this paper, we present a collection of datasets aimed to increase the expressiveness and generalizability of authorship attribution experiments. The datasets are carefully chosen to include many different aspects of the text, such as document size, number of documents per author, language, genre, or topic. We choose several well-established text classification models and perform attribution experiments on all datasets, for the first time showing results systematically for pre-trained language models in this field. Thereby, we demonstrate the importance of including multiple datasets in any evaluation by showing differences in the classification score for similar datasets and train/test splits. We conclude the paper by suggesting an aggregated score for each of the presented aspects to easily distinguish the strengths and weaknesses of different models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "https://git.uibk.ac.at/csak8736/ authbench",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.imdb.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://archive.ics.uci.edu/ml/ datasets/Reuter_50_50 4 https://github.com/bmurauer/reddit_ corpora 5 https://www.gutenberg.org/ 6 https://git.uibk.ac.at/csak8736/ authbench",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Crossdomain authorship attribution using pre-trained language models",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Barlas",
"suffix": ""
},
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2020,
"venue": "IFIP Advances in Information and Communication Technology",
"volume": "",
"issue": "",
"pages": "255--266",
"other_ids": {
"DOI": [
"10.1007/978-3-030-49161-1_22"
]
},
"num": null,
"urls": [],
"raw_text": "Georgios Barlas and Efstathios Stamatatos. 2020. Cross- domain authorship attribution using pre-trained lan- guage models. In IFIP Advances in Information and Communication Technology, pages 255-266.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Crosslanguage authorship attribution",
"authors": [
{
"first": "Dasha",
"middle": [],
"last": "Bogdanova",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
}
],
"year": 2014,
"venue": "Ninth International Conference on Language Resources and Evaluation (LREC'2014)",
"volume": "",
"issue": "",
"pages": "2015--2020",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dasha Bogdanova and Angeliki Lazaridou. 2014. Cross- language authorship attribution. In Ninth Interna- tional Conference on Language Resources and Eval- uation (LREC'2014), pages 2015-2020.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Does size matter? Authorship attribution, small samples, big problem. Digital Scholarship in the Humanities",
"authors": [
{
"first": "Maciej",
"middle": [],
"last": "Eder",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "30",
"issue": "",
"pages": "167--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maciej Eder. 2013. Does size matter? Authorship attri- bution, small samples, big problem. Digital Scholar- ship in the Humanities, 30(2):167-182.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Do birds of a feather really flock together, or how to choose training samples for authorship attribution",
"authors": [
{
"first": "Maciej",
"middle": [],
"last": "Eder",
"suffix": ""
}
],
"year": 2012,
"venue": "Literary and Linguistic Computing",
"volume": "28",
"issue": "2",
"pages": "229--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maciej Eder and Jan Rybicki. 2012. Do birds of a feather really flock together, or how to choose train- ing samples for authorship attribution. Literary and Linguistic Computing, 28(2):229-236.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BertAA: BERT fine-tuning for Authorship Attribution",
"authors": [
{
"first": "Ma\u00ebl",
"middle": [],
"last": "Fabien",
"suffix": ""
},
{
"first": "Esa\u00fa",
"middle": [],
"last": "Villatoro-Tello",
"suffix": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th International Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ma\u00ebl Fabien, Esa\u00fa Villatoro-Tello, Petr Motlicek, and Shantipriya Parida. 2020. BertAA: BERT fine-tuning for Authorship Attribution. In Proceedings of the 17th International Conference on Natural Language Processing.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Creating and using a correlated corpus to glean communicative commonalities",
"authors": [
{
"first": "Jade",
"middle": [],
"last": "Goldstein-Stewart",
"suffix": ""
},
{
"first": "Kerri",
"middle": [],
"last": "Goodwin",
"suffix": ""
},
{
"first": "Roberta",
"middle": [],
"last": "Sabin",
"suffix": ""
},
{
"first": "Ransom",
"middle": [],
"last": "Winder",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08). European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jade Goldstein-Stewart, Kerri Goodwin, Roberta Sabin, and Ransom Winder. 2008. Creating and using a correlated corpus to glean communicative common- alities. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08). European Language Resources Associa- tion (ELRA).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Document embeddings learned on various types of n-grams for cross-topic authorship attribution",
"authors": [
{
"first": "Helena",
"middle": [],
"last": "G\u00f3mez-Adorno",
"suffix": ""
},
{
"first": "Juan-Pablo",
"middle": [],
"last": "Posadas-Dur\u00e1n",
"suffix": ""
},
{
"first": "Grigori",
"middle": [],
"last": "Sidorov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Pinto",
"suffix": ""
}
],
"year": 2018,
"venue": "Computing",
"volume": "100",
"issue": "7",
"pages": "741--756",
"other_ids": {
"DOI": [
"10.1007/s00607-018-0587-8"
]
},
"num": null,
"urls": [],
"raw_text": "Helena G\u00f3mez-Adorno, Juan-Pablo Posadas-Dur\u00e1n, Grigori Sidorov, and David Pinto. 2018. Document embeddings learned on various types of n-grams for cross-topic authorship attribution. Computing, 100(7):741-756.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Quantitative authorship attribution: An evaluation of techniques",
"authors": [
{
"first": "Jack",
"middle": [],
"last": "Grieve",
"suffix": ""
}
],
"year": 2007,
"venue": "Literary and Linguistic Computing",
"volume": "22",
"issue": "",
"pages": "251--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack Grieve. 2007. Quantitative authorship attribution: An evaluation of techniques. Literary and Linguistic Computing, 22:251-270.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Combining Text and Linguistic Document Representations for Authorship Attribution",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Kaster",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Siersdorfer",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2005,
"venue": "Working Notes of the 28th Conference on Research and Development in Information Retrieval (SIGIR'2005): Stylistic Analysis of Text for Information Access",
"volume": "",
"issue": "",
"pages": "27--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Kaster, Stefan Siersdorfer, and Gerhard Weikum. 2005. Combining Text and Linguistic Doc- ument Representations for Authorship Attribution. In Working Notes of the 28th Conference on Re- search and Development in Information Retrieval (SIGIR'2005): Stylistic Analysis of Text for Informa- tion Access, pages 27-35.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overview of the Author Identification Task at PAN-2018: Crossdomain Authorship Attribution and Style Change Detection",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Kestemont",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Tschugnall",
"suffix": ""
},
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "G\u00fcnther",
"middle": [],
"last": "Specht",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
}
],
"year": 2018,
"venue": "Working Notes Papers of the CLEF 2018 Evaluation Labs",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.3737849"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Kestemont, Michael Tschugnall, Efstathios Sta- matatos, Walter Daelemans, G\u00fcnther Specht, Benno Stein, and Martin Potthast. 2018. Overview of the Author Identification Task at PAN-2018: Cross- domain Authorship Attribution and Style Change Detection. In Working Notes Papers of the CLEF 2018 Evaluation Labs.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rcv1: A new benchmark collection for text categorization research",
"authors": [
{
"first": "D",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Russell-Rose",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of machine learning research",
"volume": "5",
"issue": "",
"pages": "361--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David D Lewis, Yiming Yang, Tony Russell-Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. Journal of machine learning research, 5(Apr):361-397.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretrain- ing Approach.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Reuter 50-50 Dataset",
"authors": [
{
"first": "Zhi",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi Liu. 2011. Reuter 50-50 Dataset. National Engi- neering Research Center for E-Learning Technology China.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The effect of author set size and data size in authorship attribution",
"authors": [
{
"first": "Kim",
"middle": [],
"last": "Luyckx",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2011,
"venue": "Literary and Linguistic Computing",
"volume": "26",
"issue": "1",
"pages": "35--55",
"other_ids": {
"DOI": [
"10.1093/llc/fqq013"
]
},
"num": null,
"urls": [],
"raw_text": "Kim Luyckx and Walter Daelemans. 2011. The effect of author set size and data size in authorship attribution. Literary and Linguistic Computing, 26(1):35-55.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generating cross-domain text classification corpora from social media comments",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Murauer",
"suffix": ""
},
{
"first": "G\u00fcnther",
"middle": [],
"last": "Specht",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 20th International Conference of the Cross-Language Evaluation Forum for European Languages (CLEF'2019)",
"volume": "",
"issue": "",
"pages": "114--125",
"other_ids": {
"DOI": [
"10.1007/978-3-030-28577-7_7"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Murauer and G\u00fcnther Specht. 2019. Generat- ing cross-domain text classification corpora from so- cial media comments. In Proceedings of the 20th In- ternational Conference of the Cross-Language Evalu- ation Forum for European Languages (CLEF'2019), pages 114-125.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "DTgrams: Structured Dependency Grammar Stylometry for Cross-Language Authorship Attribution",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Murauer",
"suffix": ""
},
{
"first": "G\u00fcnther",
"middle": [],
"last": "Specht",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Murauer and G\u00fcnther Specht. 2021. DT- grams: Structured Dependency Grammar Stylometry for Cross-Language Authorship Attribution.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On the feasibility of internet-scale author identification",
"authors": [
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
},
{
"first": "Hristo",
"middle": [],
"last": "Paskov",
"suffix": ""
},
{
"first": "Neil",
"middle": [
"Zhenqiang"
],
"last": "Gong",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bethencourt",
"suffix": ""
},
{
"first": "Emil",
"middle": [],
"last": "Stefanov",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 IEEE Symposium on Security and Privacy",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/sp.2012.46"
]
},
"num": null,
"urls": [],
"raw_text": "Arvind Narayanan, Hristo Paskov, Neil Zhenqiang Gong, John Bethencourt, Emil Stefanov, Eui Chul Richard Shin, and Dawn Song. 2012. On the feasibility of internet-scale author identification. In 2012 IEEE Symposium on Security and Privacy. IEEE.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Universal dependencies v1: A multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Silveira",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th Int. Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1659--1666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Hajic, Christopher D Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal dependen- cies v1: A multilingual treebank collection. In Pro- ceedings of the 10th Int. Conference on Language Resources and Evaluation (LREC'16), pages 1659- 1666.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Not All Character N-grams Are Created Equal: A Study In Authorship Attribution",
"authors": [
{
"first": "Upendra",
"middle": [],
"last": "Sapkota",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bethard",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Montes",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human language Technologies",
"volume": "",
"issue": "",
"pages": "93--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Upendra Sapkota, Steven Bethard, Manuel Montes, and Thamar Solorio. 2015. Not All Character N-grams Are Created Equal: A Study In Authorship Attri- bution. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human language Tech- nologies, pages 93-102.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Topic or style? exploring the most useful features for authorship attribution",
"authors": [
{
"first": "Yunita",
"middle": [],
"last": "Sari",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Stevenson",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "343--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunita Sari, Mark Stevenson, and Andreas Vlachos. 2018. Topic or style? exploring the most useful features for authorship attribution. In Proceedings of the 27th International Conference on Computa- tional Linguistics, pages 343-353. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Collaborative Inference of Sentiments from Texts",
"authors": [
{
"first": "Yanir",
"middle": [],
"last": "Seroussi",
"suffix": ""
},
{
"first": "Ingrid",
"middle": [],
"last": "Zukerman",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Bohnert",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 18th International Conference on User Modeling, Adaptation and Personalization (UMOD'2010)",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {
"DOI": [
"10.1007/978-3-642-13470-8_19"
]
},
"num": null,
"urls": [],
"raw_text": "Yanir Seroussi, Ingrid Zukerman, and Fabian Bohnert. 2010. Collaborative Inference of Sentiments from Texts. In Proceedings of the 18th International Con- ference on User Modeling, Adaptation and Personal- ization (UMOD'2010), pages 195-206.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A Survey of Modern Authorship Attribution Methods",
"authors": [
{
"first": "E",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "60",
"issue": "3",
"pages": "538--556",
"other_ids": {
"DOI": [
"10.1002/asi.v60:3"
]
},
"num": null,
"urls": [],
"raw_text": "E. Stamatatos. 2009. A Survey of Modern Author- ship Attribution Methods. Journal of the Ameri- can Society for Information Science and Technology, 60(3):538-556.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "On the Robustness of Authorship Attribution Based on Character N-Gram Features",
"authors": [
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Law & Policy",
"volume": "",
"issue": "",
"pages": "421--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efstathios Stamatatos. 2013. On the Robustness of Authorship Attribution Based on Character N-Gram Features. Journal of Law & Policy, pages 421-439.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Authorship Attribution Using Text Distortion",
"authors": [
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL'2017)",
"volume": "",
"issue": "",
"pages": "1138--1149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efstathios Stamatatos. 2017. Authorship Attribution Using Text Distortion. In Proceedings of the 15th Conference of the European Chapter of the Associ- ation for Computational Linguistics (EACL'2017), pages 1138-1149. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Reduce & attribute: Two-step authorship attribution for large-scale problems",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Tschuggnall",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Murauer",
"suffix": ""
},
{
"first": "G\u00fcnther",
"middle": [],
"last": "Specht",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "951--960",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1089"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Tschuggnall, Benjamin Murauer, and G\u00fcnther Specht. 2019. Reduce & attribute: Two-step author- ship attribution for large-scale problems. In Proceed- ings of the 23rd Conference on Computational Nat- ural Language Learning (CoNLL), pages 951-960. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Figure 3",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Dataset</td><td>Text type</td><td>\u00d7 T \u00d7 G \u00d7 L</td><td>A</td><td>D</td><td>Words</td><td colspan=\"2\">|d| D/A Imb</td></tr><tr><td>CCAT50</td><td>financial/industrial news</td><td/><td>50</td><td colspan=\"2\">2,500 1,254.4K</td><td>502</td><td>50 0.0</td></tr><tr><td colspan=\"2\">CL-Novels prose</td><td/><td>6</td><td colspan=\"3\">144 1,199.1K 8,354</td><td>24 11.7</td></tr><tr><td>CMCC</td><td>multiple</td><td/><td>21</td><td>630</td><td>378.4K</td><td>601</td><td>30 0.0</td></tr><tr><td>Guardian</td><td>book reviews, opinions</td><td/><td>13</td><td>264</td><td colspan=\"2\">276.0K 1,043</td><td>20 4.3</td></tr><tr><td>IMDb62</td><td>movie reviews</td><td/><td>62</td><td colspan=\"2\">49,572 16,904.4K</td><td colspan=\"2\">341 800 1.0</td></tr><tr><td colspan=\"2\">PAN18-FF prose</td><td/><td>20 (12)</td><td>88</td><td>69.7K</td><td>796</td><td>7 0.0</td></tr><tr><td>Reddit</td><td>social media comments</td><td/><td colspan=\"3\">45 (28) 2,366 1,259.7K</td><td>532</td><td>94 83.5</td></tr><tr><td>Table 1:</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"5\">\u2022 Topics: different news article topics, different</td></tr><tr><td/><td/><td/><td colspan=\"3\">fan fiction domains, etc.</td><td/></tr></table>",
"text": "Datasets used in this paper. \u00d7 T,G,L denote whether the datasets are cross-topic, cross-genre or crosslanguage, respectively. A denotes the total number of authors. The Reddit and PAN18-FF datasets have sub-problems with disjunct authors, and the number in parenthesis denotes the mean number of authors per sub-problem. D denotes the total number of documents. |d| is the mean length of a document measured in number of words. D/A denotes the average number of documents available for training per author. Imb is the imbalance of the dataset, measured by the standard deviation of the number of documents per author.",
"num": null,
"html": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "Train/test splits for each dataset. * Evaluations with identical training documents were combined.",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">char. 3-grams</td><td colspan=\"2\">u.POS 3-grams</td><td/><td>DT-grams</td><td/><td colspan=\"2\">Doc2Vec w. chars</td></tr><tr><td/><td/><td/><td colspan=\"2\">Doc2Vec w. words</td><td/><td>BERT</td><td/><td>DistilBERT</td><td/><td colspan=\"2\">RoBERTa</td></tr><tr><td/><td>0.8</td><td/><td/><td/><td/><td colspan=\"2\">0.8</td><td/><td/><td/><td/></tr><tr><td>macro F1</td><td>0.4 0.6</td><td/><td/><td/><td/><td colspan=\"2\">0.4 0.6</td><td/><td/><td/><td/></tr><tr><td/><td>0.2</td><td/><td/><td/><td/><td colspan=\"2\">0.2</td><td/><td/><td/><td/></tr><tr><td/><td>0</td><td>5 10</td><td>20</td><td>30</td><td>40</td><td>50</td><td>0</td><td>5 10</td><td>20</td><td>30</td><td>40</td><td>50</td></tr><tr><td/><td/><td/><td colspan=\"3\">Training documents per author</td><td/><td/><td colspan=\"4\">Training documents per author</td></tr><tr><td/><td/><td/><td colspan=\"2\">(a) IMDb62</td><td/><td/><td/><td/><td colspan=\"2\">(b) CCAT50</td><td/></tr><tr><td colspan=\"2\">Figure 1:</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"text": "Mean macro-averaged F1 scores of all splits for each dataset and tested model. \u00d7 G denotes cross-genre splitting, while \u00d7 T denotes cross-topic splitting of the CMCC and Guardian datasets. F1 score of subsets of the IMDb62 (left) and CCAT50 (right) datasets with controlled number of documents per author. Note that although the number of authors and document sizes are comparable (cf. Table 1), the performances of the transformer-based models differ significantly, especially when few training samples are available for each author.",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"text": "Scores used to reflect the models performance on the different aspects of the datasets.",
"num": null,
"html": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td/><td>28</td></tr><tr><td>BERT</td><td>0.82 0.45 0.33 0.66 0.35 0.17 0.46</td></tr><tr><td>DistilBERT</td><td>0.82 0.43 0.39 0.61 0.33 0.12 0.45</td></tr><tr><td>RoBERTa</td><td>0.82 0.42 0.42 0.70 0.33 0.25 0.49</td></tr></table>",
"text": "2012; Tschug-Model mono sm ml \u00d7 T \u00d7 G \u00d7 L avg char. 3-grams 0.84 0.62 0.49 0.70 0.65 0.07 0.56 u.POS 3-grams 0.77 0.51 0.38 0.65 0.48 0.14 0.49 DT-grams 0.70 0.38 0.28 0.50 0.30 0.17 0.39 Doc2Vec char 0.25 0.19 0.38 0.26 0.23 0.04 0.22 Doc2Vec word 0.48 0.25 0.22 0.36 0.31 0.06 0.",
"num": null,
"html": null
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td>Model</td><td>mono sm ml \u00d7 T \u00d7 G \u00d7 L avg</td></tr><tr><td colspan=\"2\">char. 3-grams 0.19 0.05 0.11 0.12 0.23 0.06 0.13</td></tr><tr><td colspan=\"2\">u.POS 3-grams 0.22 0.09 0.29 0.14 0.21 0.09 0.17</td></tr><tr><td>DT-grams</td><td>0.24 0.01 0.16 0.15 0.13 0.08 0.13</td></tr><tr><td colspan=\"2\">Doc2Vec char 0.08 0.05 0.23 0.14 0.06 0.04 0.10</td></tr><tr><td colspan=\"2\">Doc2Vec word 0.11 0.04 0.10 0.14 0.11 0.05 0.09</td></tr><tr><td>BERT</td><td>0.22 0.14 0.19 0.17 0.24 0.13 0.18</td></tr><tr><td>DistilBERT</td><td>0.22 0.13 0.19 0.18 0.20 0.09 0.17</td></tr><tr><td>RoBERTa</td><td>0.23 0.21 0.13 0.14 0.21 0.11 0.17</td></tr></table>",
"text": "Aggregated F1 scores reached by the models tested in our experiments.",
"num": null,
"html": null
},
"TABREF9": {
"type_str": "table",
"content": "<table/>",
"text": "Aggregated standard deviations of F1 scores reached by the models tested in our experiments.",
"num": null,
"html": null
}
}
}
}