{
"paper_id": "R15-1009",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:57:11.520404Z"
},
"title": "Maximal Repeats Enhance Substring-based Authorship Attribution",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Brixtel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Lausanne",
"location": {
"addrLine": "Quartier Dorigny",
"postCode": "1015",
"settlement": "Lausanne",
"country": "Switzerland"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This article tackles the Authorship Attribution task according to the language independence issue. We propose an alternative of variable length character n-grams features in supervised methods: maximal repeats in strings. When character ngrams are by essence redundant, maximal repeats are a condensed way to represent any substring of a corpus. Our experiments show that the redundant aspect of n-grams contributes to the efficiency of character-based techniques. Therefore, we introduce a new way to weight features in vector based classifier by introducing n-th order maximal repeats (maximal repeats detected in a set of maximal repeats). The experimental results show higher performance with maximal repeats, with less data than n-grams based approach (approximately divided by a factor of 10).",
"pdf_parse": {
"paper_id": "R15-1009",
"_pdf_hash": "",
"abstract": [
{
"text": "This article tackles the Authorship Attribution task according to the language independence issue. We propose an alternative of variable length character n-grams features in supervised methods: maximal repeats in strings. When character ngrams are by essence redundant, maximal repeats are a condensed way to represent any substring of a corpus. Our experiments show that the redundant aspect of n-grams contributes to the efficiency of character-based techniques. Therefore, we introduce a new way to weight features in vector based classifier by introducing n-th order maximal repeats (maximal repeats detected in a set of maximal repeats). The experimental results show higher performance with maximal repeats, with less data than n-grams based approach (approximately divided by a factor of 10).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Internet makes it easy to let anyone share his opinion, to communicate news or to disseminate his literary production. A main feature of textual traces on the web is that they are mostly anonymous. Textual data mining is used to characterise authors, by categories (e.g. gender, age, political opinion) or as individuals. The latter case is called the Authorship Attribution (AA) issue. It consists of predicting the author of a text given a predefined set of candidates, thus falling in the supervised machine learning subdomain. This problem is often expressed as the ultimate objective, finding the author. Technically the task is to predict a new pair, considering given pairs linking text and author. It is also known as writeprint, in reference of fingerprint in written productions. For a survey, see (Koppel et al., 2009; Stamatatos, 2009; El Bouanani and Kassou, 2014) .",
"cite_spans": [
{
"start": 808,
"end": 829,
"text": "(Koppel et al., 2009;",
"ref_id": "BIBREF10"
},
{
"start": 830,
"end": 847,
"text": "Stamatatos, 2009;",
"ref_id": "BIBREF14"
},
{
"start": 848,
"end": 877,
"text": "El Bouanani and Kassou, 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For AA, stylometry is most often used. The assumption is that a writer leaves unintended clues that lead to his identification. Bouanani et al. (2014) define a set of numerical features that remains relatively constant for a given author and sufficiently contrasts his writing style against any author's style. In the previous studies, numerical data such as word-length, and literal data such as words or character strings were used to capture personal style features (Koppel et al., 2011) . Unlike words or lemmas that belong to a priori resources, character strings are in compliance with a language independent objective. Supervised machine learning techniques are used to learn author's profile, from a training set where text and author pairs are known. Eventually, results are used to attribute new texts to the right author. This is a multi-variate classification problem. Support Vector Machine (SVM) is one of the favorite approaches to handle such complex tasks (Sun et al., 2012) . This is the chosen solution here.",
"cite_spans": [
{
"start": 469,
"end": 490,
"text": "(Koppel et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 973,
"end": 991,
"text": "(Sun et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AA therefore consists of predicting the author of a textual message given a predefined set of candidates. The difficulty of the task depends on its scope and the choice of the training set. It increases when the objects of study come from the web, with different textual genres, styles or languages. Research on AA can focus on several issues. Item scalability addresses matching text with a huge number of authors. Language independence requires techniques that are efficient irrespective of language resources such as lexica.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, the language independence issue is addressed, with character-based methods. However, computation of all the character subtrings in a text is costly. The major contribution of this paper is a new way to handle character substrings, to reduce the training data and therefore the training time and cost, without loosing accuracy in AA. The well-known variable length character n-grams approach is compared to a variable length max-imal repeats approach. As a controversial statement, experiments conducted in this article highlight that the redundancy of features based on ngrams is beneficial in a classification task as AA. This introduces a new way to weight features that takes into account this redundancy with n-th order maximal repeats (maximal repeats in a set of maximal repeats). Experiments are conducted on three corpora: one in English, one in French and the concatenation of those two corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this article is organized as follows. Section 2 describes related work and commonly used features. Section 3 introduces the experimental settings, the characteristics of the corpora and the experimental pipeline. Section 4 describes features, detailing the maximal repeats algorithm. Section 5 details experimental results. Section 6 concludes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "AA is a single-label multi-class categorisation task. Three characteristics have to be defined (Sun et al., 2012) : single feature, set of features representing a text and the way to handle those sets to match a text with an author.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "(Sun et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "AA features exploited in the literature can be separated in different groups as advocated by Abbasi et al. (2008) : numerical values associated with words (total number of words, number of character per word, number of character bi/tri-grams), hence called lexical; mixed values associated with syntax at sentence level (frequency of function words, n-grams of Part-Of-Speech tags); numerical values associated with bigger units (number of paragraphs, average length of paragraphs), called structural; values associated with content (bag-ofwords, word bi-grams/tri-grams); and a last group called idiosyncratic related with individual use (misspellings, use of Leet speak).",
"cite_spans": [
{
"start": 93,
"end": 113,
"text": "Abbasi et al. (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features Definition",
"sec_num": "2.1"
},
{
"text": "Among those features, some are specific to some types of language and writing systems. For instance, tokenizing a text in words is common in word separating cases, but is a non-trivial task in Chinese or Japanese. Part-Of-Speech (POS) tagging requires specific tools that might lack in some languages. Approaches based on character n-grams appear to be the simplest and the most accurate methods when the aim is to handle any language (Grieve, 2007; Stamatatos, 2006) .",
"cite_spans": [
{
"start": 435,
"end": 449,
"text": "(Grieve, 2007;",
"ref_id": "BIBREF6"
},
{
"start": 450,
"end": 467,
"text": "Stamatatos, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features Definition",
"sec_num": "2.1"
},
{
"text": "But, as advocated by Bender et al. (2009) , a language independent method should not be a language naive method. If the extraction of n-grams is done whatever the language, the n parameter has to be chosen according to the properties of the processed language. The same results cannot be expected for the same parameter on different languages according to their morphological typology (e.g. inflected or agglutinative languages). Sun et al. (2012) argue that using a fixed value of n can only capture lexical informations (for small values of n), contextual or thematic informations (for larger values), but do not explain why or whether this is valid for Chinese or all languages. The authors argue that this issue is avoided by exploiting variable length n-grams (substrings of length in [1, n]). Variable length substrings are exploited in this study to see how this parameter impacts the results in French and English.",
"cite_spans": [
{
"start": 21,
"end": 41,
"text": "Bender et al. (2009)",
"ref_id": "BIBREF1"
},
{
"start": 430,
"end": 447,
"text": "Sun et al. (2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features Definition",
"sec_num": "2.1"
},
{
"text": "A single feature can be allocated to several text and author pairs. Each text and author does not systematically share the same set of features. Different sets of features can be defined to represent texts (and by extension, to represent authors). From existing methods, two main categories of set of features can be defined for AA:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Text/Author Representation",
"sec_num": "2.2"
},
{
"text": "\u2022 off-line set of features: features a priori considered relevant with prior knowledge, as those deeply described by Chaski et al. (2001) . They are defined without the knowledge of the corpus to be processed. \u2022 on-line set of features: features defined according to the current analysis (according to the training and test corpora for supervised methods, as the character language models described by Peng et al. (2003) ). They can only be defined when the corpora to be processed (test and training) are fully collected.",
"cite_spans": [
{
"start": 117,
"end": 137,
"text": "Chaski et al. (2001)",
"ref_id": "BIBREF3"
},
{
"start": 402,
"end": 420,
"text": "Peng et al. (2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Text/Author Representation",
"sec_num": "2.2"
},
{
"text": "On-line sets of features naturally match with the language-independence aim. The characteristics of the corpora are exploited without any external resource. The method described hereafter follows this principle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Text/Author Representation",
"sec_num": "2.2"
},
{
"text": "Different techniques for handling features extracted from texts have been proposed. SVM and Neural Network are established ways to conduct AA in the supervised machine-learning paradigm (Kacmarcik and Gamon, 2006; Tweedie et al., 1996) . When the set of authorship candidates is large or incomplete, thus not including the correct author, some approaches compare sets of features with specific similarity functions (Koppel et al., 2011) . Individual level sets of features are used with machine-learning techniques to build a classifier per author. Each classifier acts as an expert dedicated to process a subarea of the features space (i.e. each classifier is specialised on detecting some specific authors). The experiments described in this article use an SVM classifier, keeping the same parameters for each experiment, to analyse the impact of the features.",
"cite_spans": [
{
"start": 186,
"end": 213,
"text": "(Kacmarcik and Gamon, 2006;",
"ref_id": "BIBREF8"
},
{
"start": 214,
"end": 235,
"text": "Tweedie et al., 1996)",
"ref_id": "BIBREF17"
},
{
"start": 415,
"end": 436,
"text": "(Koppel et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Text Categorisation",
"sec_num": "2.3"
},
{
"text": "A classical AA pipeline is drawn in Figure 1 . This pipeline contains two main elements: a Features selector (features are extracted from the training and the test corpus) and a Classifier (using the features extracted in the training corpora, each message of the test corpus is classified). Experiments are conducted to highlight characteristics of substring-based AA methods. SVM is used as the classifier of the pipeline for all experiments, following Sun et al. (2012) and Brennan et al. (2012) . The features selection step is meant to extract the right features from corpora irrespective of language. The experimental pipeline is kept as simple as possible to avoid interferences in the analysis of the features selection.",
"cite_spans": [
{
"start": 455,
"end": 472,
"text": "Sun et al. (2012)",
"ref_id": "BIBREF16"
},
{
"start": 477,
"end": 498,
"text": "Brennan et al. (2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experimental Pipeline and Corpora",
"sec_num": "3"
},
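To make the pipeline of Figure 1 concrete, here is a minimal sketch of the two-stage process (feature selector, then classifier), assuming scikit-learn is available; the character n-gram vectorizer stands in for the paper's maximal-repeat selector, and the function name train_and_predict is purely illustrative.

```python
# Minimal sketch of the Figure 1 pipeline: a feature selector followed by an SVM
# classifier. Character n-grams stand in here for the maximal-repeat features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def train_and_predict(train_texts, train_authors, test_texts, n_min=4, n_max=6):
    # Feature selector: variable-length character n-grams with length in [n_min, n_max]
    vectorizer = CountVectorizer(analyzer="char", ngram_range=(n_min, n_max))
    x_train = vectorizer.fit_transform(train_texts)
    x_test = vectorizer.transform(test_texts)
    # Classifier: linear SVM, as in the experiments of Section 5
    classifier = LinearSVC(C=1.0)
    classifier.fit(x_train, train_authors)
    return classifier.predict(x_test)
```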
{
"text": "D is a dataset for stylometric analysis containing I texts and K authors. t i is the i-th text and a k the k-th author. F is the set of all the features in the dataset D, F i the set of features of t i . Each text t i is represented as a vector of features. Considering o (i,j) the occurrence frequency of the j th feature f j of the i th text t i containing n features, the text is represented as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "t i = {o (i,0) , . . . , o (i,n\u22121) }. A weight function w can be applied on each feature of a text, w(t i ) = {w(f 0 ).o (i,0) , . . . , w(f n\u22121 ).o (i,n\u22121) }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
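A minimal sketch of this representation follows; the feature strings and the length-based weight function used below are illustrative assumptions, not the features used in the experiments.

```python
# Sketch of the vector representation t_i = {o_(i,0), ..., o_(i,n-1)} and of a
# weighted vector w(t_i); the feature set below is a toy assumption.
def text_vector(text, features):
    # o_(i,j): occurrence frequency of feature f_j in text t_i
    return [text.count(f) for f in features]

def weighted_vector(vector, features, w):
    # w(t_i) = {w(f_0) * o_(i,0), ..., w(f_(n-1)) * o_(i,n-1)}
    return [w(f) * o for f, o in zip(features, vector)]

features = ["the", "ing", "tion"]
text = "the thing mentioned in the motion"
print(weighted_vector(text_vector(text, features), features, w=len))
```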
{
"text": "A classifier C is therefore trained on a subsample of texts writ-ten by preselected authors (training corpora). The set of features used is the intersection of each set of features from the test and training corpora. During experiments, similar results have been obtained with features occurring only in the training corpus, but with a much larger search space to explore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definitions",
"sec_num": "3.1"
},
{
"text": "Two corpora are exploited for experiments: a French one, the LIB corpus and an English one, the EBG corpus. Those two languages are chosen because they have many characters and linguistic characteristics in common. A third corpus, MIXT, is constituted from the merge of EBG and LIB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "A subcorpus of 40 authors, EBG, is extracted from the EXTENDED BRENNAN GREENSTADT adversarial corpus (Brennan et al., 2012) . The EBG corpus is constituted of texts exclusively in English (Table 1) 2945.1 \u00b1 178.5 Table 1 : Overall characteristics of EBG.",
"cite_spans": [
{
"start": 101,
"end": 123,
"text": "(Brennan et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 188,
"end": 197,
"text": "(Table 1)",
"ref_id": null
},
{
"start": 213,
"end": 220,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "The second corpus is extracted from the website of the French newspaper LIB\u00c9RATION. The LIB corpus contains texts from 40 different authors who have written in more than one journalistic categorie, such as sports or health. This is intended to minor subgenre impact, i.e. characteristics that might blur the personal style. The corpus main characteristics are drawn in Table 2 LIB contains the same number of authors as EBG, but the number of texts bounded to each author is higher (31.2 \u00b1 4.2 texts per author in LIB, 15.8 \u00b1 2.6 in EBG). All texts in LIB and EBG are longer than the 250 words limit (\u2248 1500 characters), the minimum length considered effective for authorship analysis seen as a text classification task (Forsyth and Holmes, 1996) .",
"cite_spans": [
{
"start": 720,
"end": 746,
"text": "(Forsyth and Holmes, 1996)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 369,
"end": 376,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "The MIXT corpus, 80 authors with texts in both English and French, is obtained from the merge of EBG and LIB. It is built to erase language distinctions. During experiments, tests are also driven on different subcorpora of EBG, LIB and MIXT. We denote EBG-10 (respectively LIB-10 and MIXT-10) a sample of 10 authors from the EBG corpus (respectively LIB and MIXT). Note that the MIXT-20, . . . , 80 are the merge of LIB-10 + EBG-10, . . . , LIB-40 + EBG-40. Experiments using these corpora are described hereafter to highlight the characteristics of the features and their differences, used in the experimental pipeline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "3.2"
},
{
"text": "Maximal repeats, motifs in (Ukkonen, 2009) , are based on the work of Ukkonen 2009and K\u00e4rkk\u00e4inen (2006) . The algorithm is described in Section 4.1 to explain the improvements discussed in Section 4.2. Motifs are a way to represent each substring of a corpus in a condensed manner. For the detection of hapax legomena inside a set of strings from their motifs, see the work of Ilie and Smyth (2011) .",
"cite_spans": [
{
"start": 27,
"end": 42,
"text": "(Ukkonen, 2009)",
"ref_id": "BIBREF18"
},
{
"start": 86,
"end": 103,
"text": "K\u00e4rkk\u00e4inen (2006)",
"ref_id": "BIBREF9"
},
{
"start": 377,
"end": 398,
"text": "Ilie and Smyth (2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "Maximal repeats are substring patterns of text with the following characteristics: they are repeated (motifs occur twice or more) and maximal (motifs cannot be expanded to the left -left maximalitynor to the right -right maximality-without lowering the frequency).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal Repeats in Strings",
"sec_num": "4.1"
},
{
"text": "For instance, the motifs found in the string S = HATTIVATTIAA are T, A and ATTI. TT is not a motif because it always occurs inside an occurrence of ATTI. In other words, its right-context is always I and its left-context A. All the motifs in a list of strings can be enumerated using an Augmented Suffix Array (K\u00e4rkk\u00e4inen et al., 2006) .",
"cite_spans": [
{
"start": 310,
"end": 335,
"text": "(K\u00e4rkk\u00e4inen et al., 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal Repeats in Strings",
"sec_num": "4.1"
},
{
"text": "Given two strings S 0 = HATTIV and S 1 = ATTIAA, Table 3 shows the Augmented Suffix Array of S = S 0 .$ 1 .S 1 .$ 0 , where $ 0 and $ 1 are lexicographically lower than any character in the alphabet \u03a3 and $ 0 < $ 1 . The Augmented Suffix Array consists in the Suffix Array (SA), suffixes of S sorted lexicographically, with the Longest Common Prefix (LCP ) between each two suffixes that are contiguous in SA. With, n the size of S, S[i] the i th character of S, S[n, m] a sample of S from the n th character to the m th , SA i the starting offset of the suffix of S at the i th position in the lexicographical order and lcp(str 1 , str 2 ) the longest common prefix between two strings str 1 and str 2 :",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 56,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Maximal Repeats in Strings",
"sec_num": "4.1"
},
{
"text": "LCPi = lcp(S[SAi, n \u2212 1], S[SAi+1, n \u2212 1]) LCPn\u22121 = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal Repeats in Strings",
"sec_num": "4.1"
},
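The following sketch reproduces these definitions naively, with quadratic sorting rather than the linear-time construction of Kärkkäinen et al. (2006); the sentinel characters \x00 and \x01 stand in for $_0 and $_1, and the code is only meant to reproduce Table 3 on the running example.

```python
# Naive sketch of the Augmented Suffix Array (SA + LCP) from the definitions above;
# not the linear-time construction used in the paper.
def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_array(s, sa):
    def lcp(a, b):
        k = 0
        while a + k < len(s) and b + k < len(s) and s[a + k] == s[b + k]:
            k += 1
        return k
    # LCP_i = lcp(S[SA_i..], S[SA_(i+1)..]) and LCP_(n-1) = 0
    return [lcp(sa[i], sa[i + 1]) for i in range(len(sa) - 1)] + [0]

S = "HATTIV\x01ATTIAA\x00"          # \x00 and \x01 play the roles of $_0 and $_1
sa = suffix_array(S)
for i, lcp_i in enumerate(lcp_array(S, sa)):
    print(i, lcp_i, sa[i], S[sa[i]:])
```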
{
"text": "The LCP allows the detection of all the repeats inside a set of text. The maximal criterion is still not valid because the LCP only inquires on the left maximality between repeated prefixes in SA. The substring ATTI occurs for example in S at the offsets (1, 7), according to LCP 4 in Table 3 . The process enumerates all the motifs by reading through LCP . The detection of those motifs is triggered according to the difference between a LCP and the next one in the way SA is ordered.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 292,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Maximal Repeats in Strings",
"sec_num": "4.1"
},
{
"text": "i LCPi SAi S[SAi]...S[n] 0 0 13 $0 1 0 6 $1ATTIAA$0 2 1 12 A$0 3 1 11 AA$0 4 4 7 ATTIAA$0 5 0 1 ATTIV$1ATTIAA$0 6 0 0 HATTIV$1ATTIAA$0 7 1 10 IAA$0 8 0 4 IV$1ATTIAA$0 9 2 9 TIAA$0 10 1 3 TIV$1ATTIAA$0 11 3 8 TTIAA$0 12 0 2 TTIV$1ATTIAA$0 13 0 5 V$1ATTIAA$0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal Repeats in Strings",
"sec_num": "4.1"
},
{
"text": "For example, TTI is equivalent to ATTI because the last characters of these two motifs occur at the offsets (4, 10). They are said to be in a relation of occurrence-equivalence (Ukkonen, 2009) . In that case, ATTI is kept as a motif because it is the longest of its equivalents. The others motifs A and T are maximal because their contexts differ in different occurrences. All motifs across different strings are detected at the end of the enumeration by mapping the offsets in S with those in S 0 and S 1 . This way, any motif detected in S can be located in any of the strings S i . SA and LCP are constructed in time-complexity O(n) (K\u00e4rkk\u00e4inen et al., 2006) , while the enumeration process is done in O(k), with k defined as the number of motifs and k < n (Ukkonen, 2009) . This corroborate the statement done by Umemura and Church (2009) : there are too many substrings to work with in corpus O(n 2 ), but they can be grouped into a manageable number of interesting classes O(n).",
"cite_spans": [
{
"start": 177,
"end": 192,
"text": "(Ukkonen, 2009)",
"ref_id": "BIBREF18"
},
{
"start": 636,
"end": 661,
"text": "(K\u00e4rkk\u00e4inen et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 760,
"end": 775,
"text": "(Ukkonen, 2009)",
"ref_id": "BIBREF18"
},
{
"start": 817,
"end": 842,
"text": "Umemura and Church (2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximal Repeats in Strings",
"sec_num": "4.1"
},
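As a cross-check of the definitions above, here is a brute-force sketch of motif detection; the actual enumeration reads through the LCP array in O(k), whereas this version is quadratic and only meant for small examples such as HATTIVATTIAA.

```python
# Brute-force sketch of maximal repeat (motif) detection: a substring is a motif
# if it is repeated and both its left- and right-contexts vary across occurrences.
from collections import defaultdict

def maximal_repeats(s):
    occurrences = defaultdict(list)              # substring -> starting offsets
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            occurrences[s[i:j]].append(i)
    motifs = []
    for sub, starts in occurrences.items():
        if len(starts) < 2:
            continue                             # not repeated
        lefts = {s[i - 1] if i > 0 else None for i in starts}
        rights = {s[i + len(sub)] if i + len(sub) < len(s) else None for i in starts}
        if len(lefts) > 1 and len(rights) > 1:   # left- and right-maximal
            motifs.append(sub)
    return motifs

print(maximal_repeats("HATTIVATTIAA"))           # ['A', 'ATTI', 'T']
```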
{
"text": "Let R be the set of motifs detected in the n strings S = {S 0 , . . . , S n\u22121 }, with |S| = n i=1 size(S i ). The set of motifs R is computed on the concatenation of all strings S i : c(S) = S 0 $ n\u22121 . . . S n\u22121 $ 0 . Second order motifs R 2 in S are computed from the concatenation of the set of m strings of R (c(R) = R 0 $ m\u22121 . . . R m\u22121 $ 0 with m < |S|, and each R i a motif in S). The set of n-th order motifs is noted R n . For instance, let c(S) be HATTIV$ 1 ATTIAA$ 0 . The set of motifs R from c(S) is a compound of the following motifs: R = {ATTI, A, T}. The set of repeats R 2 consists of the motifs T (twice in ATTI and once in T) and A (once in ATTI and once in A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "n-th Order Motifs",
"sec_num": "4.2"
},
{
"text": "FACT -The set of motifs R n is a subset of R n\u22121 . REDUCTIO AD ABSURDUM -Let assume that R n \u2282 R n\u22121 . In other words, \u2203m a motif with m \u2208 R n and m \u2208 R n\u22121 . m is maximal, so it occurs with different left-contexts (denoted a and b) and different right-contexts (c and d) with a = b, c = d and a, b, c and d being any character of c(R n\u22121 ) -including the special character \u00a3 if m starts c(R n\u22121 ). R n is computed from c(R n\u22121 ) = ...amc...bmd... with R n\u22121 = {amc, bmd, ...} and m \u2208 R n\u22121 . So, amc and bmd are two motifs detected in R n\u22122 . Because m is repeated and have two differents contexts, it is a motif and should have been detected in R n\u22122 thus in R n\u22121 as well, so m \u2208 R n\u22121 -a contradiction Figure 2 draws the number of different motifs according to their order. Because R n \u2282 R n\u22121 , the number of different motifs decreases steadily whatever the corpus. The number of motifs in R n drops to 0 for n = 26 (LIB-40, EBG-40 and MIXT-80) and n = 25 (MIXT-40). The computation of 2 nd order motifs is based on the same algorithm than the one used to extract motifs. The enumeration of all the 2 nd order motifs is done in O(n) as well. Those motifs are used to detect the repetitions encapsulated in a set of maximal repeats.",
"cite_spans": [],
"ref_spans": [
{
"start": 706,
"end": 714,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "n-th Order Motifs",
"sec_num": "4.2"
},
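A sketch of computing R^n by iterating the extraction on the concatenation c(.) of the previous motif set follows; it reuses the maximal_repeats function from the previous sketch, and the private-use Unicode characters used as sentinels are an assumption of this sketch.

```python
# Sketch of n-th order motifs: motifs of order k are extracted from the
# concatenation of the motifs of order k-1, separated by unique sentinels.
# Reuses maximal_repeats() from the previous sketch.
def concat_with_sentinels(strings):
    # c(S) = S_0 $_(n-1) ... S_(n-1) $_0, with pairwise distinct separators
    return "".join(s + chr(0xE000 + i) for i, s in enumerate(strings))

def nth_order_motifs(strings, order):
    motifs = list(strings)
    for _ in range(order):
        # sentinels are unique, so no repeated substring can contain one
        motifs = maximal_repeats(concat_with_sentinels(motifs))
    return motifs

print(nth_order_motifs(["HATTIV", "ATTIAA"], 1))   # R   contains ATTI, A and T
print(nth_order_motifs(["HATTIV", "ATTIAA"], 2))   # R^2 contains only A and T
```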
{
"text": "Character n-grams and Motifs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Differences between",
"sec_num": "4.3"
},
{
"text": "Experiments have emphasize that redundancy in n-grams have a positive impact in AA (Subsection 5.1). To explain the effect of this redundancy, this section deals with the main differences between character n-grams and motifs, and how to exploit them when dealing with vector-based representation of texts. As defined before, motifs are a condensed way to represent all substrings of a corpus. In other words, for a fixed value of n, the set of motifs of size n is a subset of all the character n-grams of a corpus (as well with variable length substrings: motifs with length in [min, max] or character [min, max]-grams). The substrings that are not motifs are those that are only left-maximal, right-maximal (i.e. repeated but not maximal) or hapax legomena. In a supervised classification process, hapax have no impact because they only appear once in the training corpus or once in the test corpus.",
"cite_spans": [
{
"start": 578,
"end": 588,
"text": "[min, max]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Differences between",
"sec_num": "4.3"
},
{
"text": "If n-grams can catch different types of features according to n (lexical, contextual or thematic (Sun et al., 2012) ), they also catch features that can be represented by substrings of size superior to n. For instance, let abcdef be a motif, occurring k times and none of its characters occurring elsewhere in the corpus. Because abcdef is maximal, each substring of abcdef has the same occurrence frequency k. Figure 3 shows how the use of 3-grams in a string containing the abcdef motif affects the vector representation of this substring. Indeed, n-grams \"represent\" motifs of size superior to n by adding features in the vector representation of the texts according to the frequency of those motifs. Exploiting only motifs of size 3 will not allow to catch any substring of this motif with the same occurrence frequency than abcdef (according to the definition of a motif). Considering only some specific lengths affect the representation based on occurrence frequency, and vise versa according to the interdependency between frequency and length (Zipf, 1949) .",
"cite_spans": [
{
"start": 97,
"end": 115,
"text": "(Sun et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 1051,
"end": 1063,
"text": "(Zipf, 1949)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 411,
"end": 419,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Exploiting the Differences between",
"sec_num": "4.3"
},
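The point can be checked on a toy example, assuming a hypothetical motif abcdef repeated three times in otherwise unrelated text; each of its 3-grams then surfaces with the same frequency, which is exactly the redundancy the motif representation avoids.

```python
# Toy illustration of Figure 3: a maximal repeat "abcdef" occurring k = 3 times
# makes each of its character 3-grams occur 3 times as well in the 3-gram vector.
from collections import Counter

corpus = "xx abcdef yy abcdef zz abcdef ww"
trigram_counts = Counter(corpus[i:i + 3] for i in range(len(corpus) - 2))
print({g: trigram_counts[g] for g in ("abc", "bcd", "cde", "def")})
# {'abc': 3, 'bcd': 3, 'cde': 3, 'def': 3}
```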
{
"text": "2 nd order motifs are used to exploit this characteristic with this assumption: a substring is more relevant than an other of same size if it encapsulates less repeated substrings. The weight function w 2 nd (f eat) is defined as the difference between the number of substrings of a feature and the number of motifs occurring in this feature w 2 nd (f eat) = pot(f eat) \u2212 sub(f eat). pot(f eat) is the potential number of substrings occurring inside a feature. sub(f eat) is the number of motifs occurring inside a feature and elsewhere in the corpus. w 2 nd (f eat) is linked to the length of the feature and two features with the same length can be weight differently. If there is only one different character between two motifs (e.g. thing and things), the weight function minimises this add: the products of the weight function and the frequency are close together. Conversely, a feature that is more than a small variation of any other motif has more importance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Differences between",
"sec_num": "4.3"
},
{
"text": "With S = {S 0 , . . . , S n\u22121 }, R the set of motifs from S and R 2 the set of motifs from R, each motif in R can be weighted according to the set of repeats R 2 . R i is a motif used as a feature and S is the set each text of all authors. The number of different substrings in any string of size n, pot(f eat), is calculated with the formula n(n+1) 2 (eq. to the triangular number, the whole string is considered as a potential substring). The number of occurrences of each sub-repeat in R 2 occurring in a feature R, sub(f eat), is done by enumerating all the occurrences of all the motifs in a set of strings as described in Section 4.1. If each potential substring in a feature is a motif as well, then w 2 nd (f eat) = 1. During our experiments, this weight function is compared with w length (f eat) = n(n+1) 2 (with n the length of the feature). Note that w length cannot be easily applied to n-grams because the overlaps between contiguous n-grams make each potential substring of each n-gram appears elsewhere in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting the Differences between",
"sec_num": "4.3"
},
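A sketch of the two weight functions follows; the set R2 below is the second-order motif set from the running HATTIV/ATTIAA example, and the exact way the paper counts sub(feat) (occurrences inside the feature and elsewhere in the corpus) may differ from this simplified count.

```python
# Sketch of the two weight functions compared in the experiments:
# w_length(feat) = n(n+1)/2 and w_2nd(feat) = pot(feat) - sub(feat),
# where sub(feat) counts occurrences of 2nd-order motifs inside the feature.
def w_length(feat):
    n = len(feat)
    return n * (n + 1) // 2                      # pot(feat): triangular number

def w_2nd(feat, second_order_motifs):
    sub = sum(1
              for m in second_order_motifs
              for i in range(len(feat) - len(m) + 1)
              if feat[i:i + len(m)] == m)
    return w_length(feat) - sub

R2 = {"A", "T"}                                  # R^2 from the HATTIV / ATTIAA example
print(w_length("ATTI"), w_2nd("ATTI", R2))       # 10 and 10 - 3 = 7
```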
{
"text": "The experiments in this section examine the prediction accuracy of the proposed approach. Two sets of features with variable length are examined: n-grams and motifs. Three different ways to consider motifs are analysed: motifs with no weight, weighted by their length (using w length ) and weighted by 2 nd order repeats (using w 2 nd ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "A stratified 10-fold cross validation is used to validate the performances. Corpora are randomly partitioned into 10 equal size folds containing the same proportion of authors. To measure the performance of the systems, the prediction score is computed as follows: the number of correctly classified texts divided by the number of texts classified overall. SVM is used with linear kernels (adapted when the set of features is larger than the set of elements to be classified) and with the regularisation parameter C = 1. The aim of those experiments is to highlight the differences between motifs and n-grams. The same settings are therefore set whatever the feature, assuming that their impacts are similar on both n-grams and motifs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
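A sketch of this evaluation protocol, assuming scikit-learn and reusing the illustrative train_and_predict function from the Section 3 sketch, could look as follows; the random_state value is an arbitrary assumption.

```python
# Sketch of the evaluation: stratified 10-fold cross-validation, with prediction
# score = correctly classified texts / texts classified overall.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def prediction_score(texts, authors, n_splits=10):
    texts, authors = np.array(texts), np.array(authors)
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    correct = 0
    for train_idx, test_idx in folds.split(texts, authors):
        predicted = train_and_predict(texts[train_idx], authors[train_idx],
                                      texts[test_idx])
        correct += int((predicted == authors[test_idx]).sum())
    return correct / len(texts)
```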
{
"text": "The prediction score of AA is computed in three corpora: EBG-40 ( Figure 4 ), LIB-40 ( Figure 5 ) and MIXT-80 ( Figure 6 ). Each figure is constituted of 4 matrices using different sets of features: maximal repeats (motif ), n-grams, maximal repeats weighted by length (motif length ) and maximal repeats weighted by 2 nd order repeats (motif 2 nd ). The prediction written in the coordinates (i, j) of each matrix is sourced from the use of features with length in the range [i, j] . Whatever the corpus, the features can be ordered following their ability to correctly predict the author of a text: motif \u2264 motif length < n-grams < motif 2 nd . The fact that motif s < n-grams shows the positive effect of feature redundancy. The diagonals of the matrix using motif and motif length have the same values because a single factor affects every feature on the vector representation of the texts. The overall high prediction score on the EBG corpus is explained by the bind between author and the thematic content of his written productions (for a given author, almost each of his texts is related to a single topic as sport or arts). For comparison, the systems tested by Brennan et al. (2012) obtain a prediction accuracy of approximately 80% in a sample of texts written by 40 authors in EBG as well (\u2248 \u221215%).",
"cite_spans": [
{
"start": 476,
"end": 482,
"text": "[i, j]",
"ref_id": null
},
{
"start": 1171,
"end": 1192,
"text": "Brennan et al. (2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 4",
"ref_id": "FIGREF6"
},
{
"start": 87,
"end": 95,
"text": "Figure 5",
"ref_id": null
},
{
"start": 112,
"end": 120,
"text": "Figure 6",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Impact of the Length of Variable Substrings and Maximal Repeats",
"sec_num": "5.1"
},
{
"text": "The task is more difficult on LIB because, contrary to EBG, each selected author has written texts in different thematic areas. Similar observations have been given by Stamatatos (2012) as well. The prediction on the three corpora has also been computed using motif 2 nd whatever their length, obtaining the following scores: 66.40% on EBG-40, 48.20% on LIB-40 and 54.21% MIXT-80. This emphasizes the necessity of selecting a subspace of motifs in AA. From these experiments, the best parameters for the length of the features are selected by computing the average of each prediction score on each matrix for each couple of parameters [min, max] length (Table 4) motif 2 nd features obtain the smallest range of values among the set of parameters computed. Note that the best length parameter extracted for all the corpora is not necessarily the best parameters for each corpus (i.e. motif 2 nd have better results with parameters [6, 6] in LIB than with [4, 5] ). Aside from offering a condensed representation of substrings, motifs need less elements to perform better than other methods. The experiments show better results with variable length features than with fixed length ones. Using a large range of size in substring selection is not systematically the best option according to the results. For instance, a 4.01% discrepancy is observable between the range [1, 6] and the optimal range [4, 5] on the results on LIB using motif 2 nd features ( Figure 5 ).",
"cite_spans": [
{
"start": 168,
"end": 185,
"text": "Stamatatos (2012)",
"ref_id": "BIBREF15"
},
{
"start": 931,
"end": 934,
"text": "[6,",
"ref_id": null
},
{
"start": 935,
"end": 937,
"text": "6]",
"ref_id": null
},
{
"start": 955,
"end": 958,
"text": "[4,",
"ref_id": null
},
{
"start": 959,
"end": 961,
"text": "5]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 653,
"end": 662,
"text": "(Table 4)",
"ref_id": "TABREF6"
},
{
"start": 1453,
"end": 1461,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of the Length of Variable Substrings and Maximal Repeats",
"sec_num": "5.1"
},
{
"text": "Given the best parameters for each type of features (Table 4) , the following experiments draw the evolution of the prediction based upon the number of authors (Figure 7) . Whatever the corpus and the type of features, the prediction score decreases steadily as the number of author increases. The corpus with the worst results is still LIB where the prediction score decreases from 92.04% to 77.38% (89.60% to 76.82% with n-grams). The prediction using motif 2 nd is higher than with the others methods. Moreover, weighting features by a factor of their length (motif length ) does not enhance significantly motif -based representations of text. The numbers of features used for the prediction is given on Figure 8 . This number of features is the average of the length of the vector representing texts in each fold of the cross-validation.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 61,
"text": "(Table 4)",
"ref_id": "TABREF6"
},
{
"start": 160,
"end": 170,
"text": "(Figure 7)",
"ref_id": null
},
{
"start": 707,
"end": 715,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Influence of the Number of Authors on the Prediction and the Number of Features",
"sec_num": "5.2"
},
{
"text": "Considering the motifs of length [4, 5] reduce considerably the number of features with regards to the number of substrings with size [4, 6] or the number of motifs of any size. The number of motifs grows linearly with the number of authors (i.e. with the size of the corpus). The number of substrings with length [4, 6] is higher than the number of motifs at the beginning of the curve, but is lower after a certain amount of data due to its sublinear distribution. The number of motifs of size [4, 5] seems to scale with the increase of data processed.",
"cite_spans": [
{
"start": 33,
"end": 36,
"text": "[4,",
"ref_id": null
},
{
"start": 37,
"end": 39,
"text": "5]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Influence of the Number of Authors on the Prediction and the Number of Features",
"sec_num": "5.2"
},
{
"text": "The corpus MIXT is composed of the LIB corpus in French and the EBG corpus in English, both languages share pattern substrings because of their common origin. The use of two similar languages is well adapted to analyse the effects of the features in multilingual corpora. Table 5 shows the prediction accuracy on the two monolingual corpora, LIB and EBG, after applying the above methods on the multilingual corpus MIXT. The aim is to analyse how the features behave when different languages are processed at the same time. Table 5 : Predictions on LIB and EBG from the MIXT corpus using substrings with length in [4, 6] and motifs weighted by 2 nd order motifs with length in [4, 5] .",
"cite_spans": [
{
"start": 614,
"end": 617,
"text": "[4,",
"ref_id": null
},
{
"start": 618,
"end": 620,
"text": "6]",
"ref_id": null
},
{
"start": 677,
"end": 680,
"text": "[4,",
"ref_id": null
},
{
"start": 681,
"end": 683,
"text": "5]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 272,
"end": 279,
"text": "Table 5",
"ref_id": null
},
{
"start": 524,
"end": 531,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Monolingual Evaluation from Multilingual Corpora",
"sec_num": "5.3"
},
{
"text": "The results with the two settings, the multilingual corpus and each corpus processed independently, are close to each other. However, some improvements can be seen with the use of motif 2 nd , where in more cases the results are better when EBG and LIB are handled together. Using ngrams, the difference of results grows when the number of authors increases. On the contrary, using motifs seem to be adapted to this issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Evaluation from Multilingual Corpora",
"sec_num": "5.3"
},
{
"text": "We proposed an efficient alternative to variable length n-grams approaches for AA with the use of maximal repeats in strings. They improve classical substring approaches in two major ways. First, maximal repeats are, in essence, non-redundant features compared with n-grams. Their maximality characteristic avoids the use of redundant occurrence equivalent substrings in corpora. This considerably reduces the feature space size and we advocate that they are a best breeding ground for variable subset selection (as Genetic Algorithm, Simulated Annealing, or Information Gain). Second, with the second order maximal repeats, the feature search space is condensed efficiently and propose a new way to enhance the prediction accuracy in AA. We have emphasize the positive effect of redundancy in features, and by doing so we validated the assumption that a long repeated substring is more important if it does not contain too many sub-repeats, thus guaranteeing consistency. We hope this research will herald more improvements in substring-based Authorship Attribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Writeprints: A stylometric approach to identitylevel identification and similarity detection in cyberspace",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Abbasi",
"suffix": ""
},
{
"first": "Hsinchun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM Transactions on Information Systems (TOIS)",
"volume": "26",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Abbasi and Hsinchun Chen. 2008. Writeprints: A stylometric approach to identity- level identification and similarity detection in cyberspace. ACM Transactions on Information Systems (TOIS), 26(2):7.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Linguistically na\u00efve != language independent: Why NLP needs linguistic typology",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the EACL 2009 Workshop on the Interaction Between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?, ILCL '09",
"volume": "",
"issue": "",
"pages": "26--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M. Bender. 2009. Linguistically na\u00efve != lan- guage independent: Why NLP needs linguistic ty- pology. In Proceedings of the EACL 2009 Workshop on the Interaction Between Linguistics and Compu- tational Linguistics: Virtuous, Vicious or Vacuous?, ILCL '09, pages 26-32. ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Brennan",
"suffix": ""
},
{
"first": "Sadia",
"middle": [],
"last": "Afroz",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Greenstadt",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM Transactions on Information and System Security (TISSEC)",
"volume": "15",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Brennan, Sadia Afroz, and Rachel Green- stadt. 2012. Adversarial stylometry: Circumvent- ing authorship recognition to preserve privacy and anonymity. ACM Transactions on Information and System Security (TISSEC), 15(3):12.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Empirical evaluations of language-based author identification techniques. Forensic Linguistics",
"authors": [
{
"first": "Carole",
"middle": [
"E"
],
"last": "Chaski",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "8",
"issue": "",
"pages": "1--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carole E Chaski. 2001. Empirical evaluations of language-based author identification techniques. Forensic Linguistics, 8:1-65.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Authorship analysis studies: A survey",
"authors": [],
"year": 2014,
"venue": "International Journal of Computer Applications",
"volume": "86",
"issue": "",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara El Manar El Bouanani and Ismail Kassou. 2014. Authorship analysis studies: A survey. Interna- tional Journal of Computer Applications, 86:22-29.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Featurefinding for text classification. Literary and Linguistic Computing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "David I",
"middle": [],
"last": "Forsyth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Holmes",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "11",
"issue": "",
"pages": "163--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard S Forsyth and David I Holmes. 1996. Feature- finding for text classification. Literary and Linguis- tic Computing, 11(4):163-174.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Quantitative authorship attribution: An evaluation of techniques. Literary and linguistic computing",
"authors": [
{
"first": "Jack",
"middle": [],
"last": "Grieve",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "22",
"issue": "",
"pages": "251--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack Grieve. 2007. Quantitative authorship attribution: An evaluation of techniques. Literary and linguistic computing, 22(3):251-270.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Minimum unique substrings and maximum repeats. Fundamenta Informaticae",
"authors": [
{
"first": "Lucian",
"middle": [],
"last": "Ilie",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "110",
"issue": "",
"pages": "183--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucian Ilie and William F Smyth. 2011. Minimum unique substrings and maximum repeats. Funda- menta Informaticae, 110(1):183-195.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Obfuscating document stylometry to preserve author anonymity",
"authors": [
{
"first": "Gary",
"middle": [],
"last": "Kacmarcik",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL on Main conference poster sessions",
"volume": "",
"issue": "",
"pages": "444--451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gary Kacmarcik and Michael Gamon. 2006. Ob- fuscating document stylometry to preserve author anonymity. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 444-451. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Linear work suffix array construction",
"authors": [
{
"first": "Juha",
"middle": [],
"last": "K\u00e4rkk\u00e4inen",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Sanders",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Burkhardt",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the ACM",
"volume": "53",
"issue": "6",
"pages": "918--936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juha K\u00e4rkk\u00e4inen, Peter Sanders, and Stefan Burkhardt. 2006. Linear work suffix array construction. Jour- nal of the ACM, 53(6):918-936.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Computational methods in authorship attribution",
"authors": [
{
"first": "Moshe",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schler",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Argamon",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of the American Society",
"volume": "60",
"issue": "1",
"pages": "9--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moshe Koppel, Jonathan Schler, and Shlomo Arga- mon. 2009. Computational methods in authorship attribution. Journal of the American Society for in- formation Science and Technology, 60(1):9-26.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Authorship attribution in the wild. Language Resources and Evaluation",
"authors": [
{
"first": "Moshe",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Schler",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Argamon",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "45",
"issue": "",
"pages": "83--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moshe Koppel, Jonathan Schler, and Shlomo Arga- mon. 2011. Authorship attribution in the wild. Lan- guage Resources and Evaluation, 45(1):83-94.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Language independent authorship attribution using character level language models",
"authors": [
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Schuurmans",
"suffix": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Vlado",
"middle": [],
"last": "Keselj",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "267--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuchun Peng, Dale Schuurmans, Shaojun Wang, and Vlado Keselj. 2003. Language independent author- ship attribution using character level language mod- els. In Proceedings of the tenth conference on Euro- pean chapter of the Association for Computational Linguistics-Volume 1, pages 267-274. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ensemble-based author identification using character n-grams",
"authors": [
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 3rd International Workshop on Textbased Information Retrieval",
"volume": "",
"issue": "",
"pages": "41--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efstathios Stamatatos. 2006. Ensemble-based au- thor identification using character n-grams. In Pro- ceedings of the 3rd International Workshop on Text- based Information Retrieval, pages 41-46.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A survey of modern authorship attribution methods",
"authors": [
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of the American Society for Information Science and Technology",
"volume": "60",
"issue": "3",
"pages": "538--556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efstathios Stamatatos. 2009. A survey of modern au- thorship attribution methods. Journal of the Ameri- can Society for Information Science and Technology, 60(3):538-556.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On the robustness of authorship attribution based on character n-gram features",
"authors": [
{
"first": "Efstathios",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2012,
"venue": "JL & Pol'y",
"volume": "21",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Efstathios Stamatatos. 2012. On the robustness of au- thorship attribution based on character n-gram fea- tures. JL & Pol'y, 21:421.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Applying stylometric analysis techniques to counter anonymity in cyberspace",
"authors": [
{
"first": "Jianwen",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zongkai",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sanya",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Networks",
"volume": "7",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianwen Sun, Zongkai Yang, Sanya Liu, and Pei Wang. 2012. Applying stylometric analysis techniques to counter anonymity in cyberspace. Journal of Net- works, 7(2).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Neural network applications in stylometry: The Federalist papers. Computers and the Humanities",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Fiona J Tweedie",
"suffix": ""
},
{
"first": "David I",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Holmes",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "30",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fiona J Tweedie, Sameer Singh, and David I Holmes. 1996. Neural network applications in stylometry: The Federalist papers. Computers and the Humani- ties, 30(1):1-10.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Maximal and minimal representations of gapped and non-gapped motifs of a string",
"authors": [
{
"first": "",
"middle": [],
"last": "Esko Ukkonen",
"suffix": ""
}
],
"year": 2009,
"venue": "Theoretical Computer Science",
"volume": "410",
"issue": "43",
"pages": "4341--4349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esko Ukkonen. 2009. Maximal and minimal represen- tations of gapped and non-gapped motifs of a string. Theoretical Computer Science, 410(43):4341-4349.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Substring statistics",
"authors": [
{
"first": "Kyoji",
"middle": [],
"last": "Umemura",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "53--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyoji Umemura and Kenneth Church. 2009. Substring statistics. In Computational Linguistics and Intelli- gent Text Processing, pages 53-71. Springer.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Human Behaviour and the Principle of Least-Effort : an Introduction to Human Ecology",
"authors": [
{
"first": "George",
"middle": [],
"last": "Kingsley",
"suffix": ""
},
{
"first": "Zipf",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1949,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Kingsley Zipf. 1949. Human Behaviour and the Principle of Least-Effort : an Introduction to Hu- man Ecology. Addison-Wesley.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "to r of fea tu r e s t r a in ing cor p o r a"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Pipeline processing for supervised AA."
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "motifs (log. scale) i-th order of maximal repeats"
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Evolution of the number of motifs (log. scale) according to the i-th order"
},
"FIGREF5": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Substrings of a motif in a string."
},
"FIGREF6": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Prediction accuracy in EBG-40."
},
"FIGREF7": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Prediction accuracy in MIXT-80."
},
"FIGREF8": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Evolution of the prediction accuracy according to the number of authors. Evolution of the number of features according to the number of authors."
},
"TABREF3": {
"html": null,
"type_str": "table",
"text": "Overall characteristics of LIB.",
"num": null,
"content": "<table/>"
},
"TABREF4": {
"html": null,
"type_str": "table",
"text": "Augmented Suffix Array (SA and LCP ) of S = HATTIV$ 1 ATTIAA$ 0 .",
"num": null,
"content": "<table/>"
},
"TABREF5": {
"html": null,
"type_str": "table",
"text": ".",
"num": null,
"content": "<table><tr><td/><td>best length parameter</td><td>average prediction</td></tr><tr><td/><td>[min, max]</td><td/></tr><tr><td>n-grams</td><td>[4, 6]</td><td>84.61%</td></tr><tr><td>motifs</td><td>[4, 6]</td><td>83.69%</td></tr><tr><td>motifs (length)</td><td>[4, 6]</td><td>83.88%</td></tr><tr><td>motifs (2 nd order)</td><td>[4, 5]</td><td>85.39%</td></tr></table>"
},
"TABREF6": {
"html": null,
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>: Best parameters on LIB-40, EBG-40</td></tr><tr><td>and MIXT-80.</td></tr></table>"
}
}
}
}