{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:10:52.105593Z"
},
"title": "Feature-Based Forensic Text Comparison Using a Poisson Model for Likelihood Ratio Estimation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Carne",
"suffix": "",
"affiliation": {
"laboratory": "Speech and Language Laboratory",
"institution": "The Australian National University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Shunichi",
"middle": [],
"last": "Ishihara",
"suffix": "",
"affiliation": {
"laboratory": "Speech and Language Laboratory",
"institution": "The Australian National University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Score-and feature-based methods are the two main ones for estimating a forensic likelihood ratio (LR) quantifying the strength of evidence. In this forensic text comparison (FTC) study, a score-based method using the Cosine distance is compared with a feature-based method built on a Poisson model with texts collected from 2,157 authors. Distance measures (e.g. Burrows's Delta, Cosine distance) are a standard tool in authorship attribution studies. Thus, the implementation of a score-based method using a distance measure is naturally the first step for estimating LRs for textual evidence. However, textual data often violates the statistical assumptions underlying distance-based models. Furthermore, such models only assess the similarity, not the typicality, of the objects (i.e. documents) under comparison. A Poisson model is theoretically more appropriate than distance-based measures for authorship attribution, but it has never been tested with linguistic text evidence within the LR framework. The log-LR cost (Cllr) was used to assess the performance of the two methods. This study demonstrates that: (1) the feature-based method outperforms the score-based method by a Cllr value of ca. 0.09 under the best-performing settings and; (2) the performance of the featurebased method can be further improved by feature selection.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Score-and feature-based methods are the two main ones for estimating a forensic likelihood ratio (LR) quantifying the strength of evidence. In this forensic text comparison (FTC) study, a score-based method using the Cosine distance is compared with a feature-based method built on a Poisson model with texts collected from 2,157 authors. Distance measures (e.g. Burrows's Delta, Cosine distance) are a standard tool in authorship attribution studies. Thus, the implementation of a score-based method using a distance measure is naturally the first step for estimating LRs for textual evidence. However, textual data often violates the statistical assumptions underlying distance-based models. Furthermore, such models only assess the similarity, not the typicality, of the objects (i.e. documents) under comparison. A Poisson model is theoretically more appropriate than distance-based measures for authorship attribution, but it has never been tested with linguistic text evidence within the LR framework. The log-LR cost (Cllr) was used to assess the performance of the two methods. This study demonstrates that: (1) the feature-based method outperforms the score-based method by a Cllr value of ca. 0.09 under the best-performing settings and; (2) the performance of the featurebased method can be further improved by feature selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The essential part of any source-detection task is to assess the similarity or difference between the objects or items under comparison. For this purpose, in stylometric studies too, various distance measures have been devised and tested, particularly in studies concerned with the authorship of text sources (Argamon, 2008; Burrows, 2002; Hoover, 2004a; Smith and Aldridge, 2011) . Burrows's Delta (Burrows, 2002) is probably the most studied distance measure in stylometric studies, and its effectiveness and robustness have been demonstrated for a variety of texts from different genres and languages (AbdulRazzaq and Mustafa, 2014; Hoover, 2004b; Rybicki and Eder, 2011; \u00deorgeirsson, 2018) . Since Burrows (2002) , several variants, including, for example, those based on Euclidian distance, Cosine similarity and Mahalanobis distance have been proposed to better deal with the unique characteristics of linguistic texts, expecting to result in a better identification and discrimination performance (Argamon, 2008; Eder, 2015; Hoover, 2004b; Smith and Aldridge, 2011) .",
"cite_spans": [
{
"start": 309,
"end": 324,
"text": "(Argamon, 2008;",
"ref_id": "BIBREF3"
},
{
"start": 325,
"end": 339,
"text": "Burrows, 2002;",
"ref_id": "BIBREF14"
},
{
"start": 340,
"end": 354,
"text": "Hoover, 2004a;",
"ref_id": "BIBREF30"
},
{
"start": 355,
"end": 380,
"text": "Smith and Aldridge, 2011)",
"ref_id": "BIBREF52"
},
{
"start": 399,
"end": 414,
"text": "(Burrows, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 604,
"end": 635,
"text": "(AbdulRazzaq and Mustafa, 2014;",
"ref_id": "BIBREF0"
},
{
"start": 636,
"end": 650,
"text": "Hoover, 2004b;",
"ref_id": "BIBREF32"
},
{
"start": 651,
"end": 674,
"text": "Rybicki and Eder, 2011;",
"ref_id": "BIBREF51"
},
{
"start": 675,
"end": 693,
"text": "\u00deorgeirsson, 2018)",
"ref_id": "BIBREF54"
},
{
"start": 702,
"end": 716,
"text": "Burrows (2002)",
"ref_id": "BIBREF14"
},
{
"start": 1004,
"end": 1019,
"text": "(Argamon, 2008;",
"ref_id": "BIBREF3"
},
{
"start": 1020,
"end": 1031,
"text": "Eder, 2015;",
"ref_id": "BIBREF19"
},
{
"start": 1032,
"end": 1046,
"text": "Hoover, 2004b;",
"ref_id": "BIBREF32"
},
{
"start": 1047,
"end": 1072,
"text": "Smith and Aldridge, 2011)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Similarity-and distance-based measures make some assumptions about the distribution of the underlying data. For example, a Laplace distribution is assumed by Burrows's Delta, which itself is based on Manhattan distance, and a normal distribution by the Euclidean and cosine distances. However, it is well known that stylometric features do not always conform to, for example, a normal distribution (Argamon, 2008; Jannidis et al., 2015) . Moreover, a normal distribution is not theoretically appropriate for discrete count data (e.g. occurrences of function words) Figure 1 shows the distributions of the counts of three words ('a', 'not' and 'they'), sampled from the database used in the current study. Frequently-occurring words, such as 'a' (Figure 1a ), tend to be normally distributed. However the distribution starts skewing positively for less-frequently-occurring words, such as 'not' (Figure 1b) and 'they' (Figure 1c) . In order to fill this gap between the theoretical assumption arising from distance measures and the nature of textual data, a one-level Poisson model is used in this study.",
"cite_spans": [
{
"start": 398,
"end": 413,
"text": "(Argamon, 2008;",
"ref_id": "BIBREF3"
},
{
"start": 414,
"end": 436,
"text": "Jannidis et al., 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 565,
"end": 573,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 745,
"end": 755,
"text": "(Figure 1a",
"ref_id": "FIGREF0"
},
{
"start": 894,
"end": 905,
"text": "(Figure 1b)",
"ref_id": "FIGREF0"
},
{
"start": 917,
"end": 928,
"text": "(Figure 1c)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the 1990s, the success of DNA analysis and some important United States court rulings, estab-lishing the standard for expert evidence to be admitted in court, promoted the likelihood ratio (LR)based approach as the standard for evaluating and presenting forensic evidence in court (Association of Forensic Science Providers, 2009) . Although it is far less extensively studied than other areas of forensic science, it has been demonstrated that the LR framework can be applied successfully to linguistic textual evidence (Ishihara, 2014 (Ishihara, , 2017a (Ishihara, , 2017b .",
"cite_spans": [
{
"start": 284,
"end": 333,
"text": "(Association of Forensic Science Providers, 2009)",
"ref_id": null
},
{
"start": 524,
"end": 539,
"text": "(Ishihara, 2014",
"ref_id": "BIBREF33"
},
{
"start": 540,
"end": 558,
"text": "(Ishihara, , 2017a",
"ref_id": "BIBREF34"
},
{
"start": 559,
"end": 577,
"text": "(Ishihara, , 2017b",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two methods for deriving an LR model for forensic data, score-and feature-based. Each method has different strengths and shortcomings. The use of score-based methods is prevalent across different types of forensic evidence due to its robustness and ease of implementation relative to feature-based methods. The advantages and disadvantages of the methods are explained in \u00a73.3 and \u00a73.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "1.1"
},
{
"text": "Almost all previous LR studies, both featureand score-based, use continuous data for LR estimation. Studies using feature-based LR models derived from probability distributions appropriate for discrete (or categorical) forensic features are rare.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "1.1"
},
{
"text": "To the best of our knowledge, Aitken and Gold (2013) and Bolck and Stamouli (2017) are the only two existing studies of this kind within the LR framework. Aitken and Gold (2013) propose a univariate discrete model for estimating LRs. They conducted only a small-scale experiment using limited data and features, which were used mainly for explanatory purposes. Bolck and Stamouli (2017) investigate discrete multivariate models for estimating LRs using categorical data from gunshot residue. This study however uses a relatively low-dimensional feature space (only 12 features), and its modelling approach assumes independence between features. Text evidence however usually involves high-dimensional vector spaces and independence cannot be assumed, given correlation between features. The present study seeks to investigate these challenges in LR-based forensic text comparison (FTC) using discrete textual data in the form of counts of the N most frequently occurring words. It implements a feature-based LR model derived from the Poisson distribution, with logistic-regression fusion and calibration used as a means for dealing with correlation between features. This approach is compared to a score-based method using the cosine distance. To the best of our knowledge, this is the first FTC study to trial a feature-based method with a Poisson model in the LR framework.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "Aitken and Gold (2013)",
"ref_id": "BIBREF1"
},
{
"start": 57,
"end": 82,
"text": "Bolck and Stamouli (2017)",
"ref_id": "BIBREF9"
},
{
"start": 155,
"end": 177,
"text": "Aitken and Gold (2013)",
"ref_id": "BIBREF1"
},
{
"start": 361,
"end": 386,
"text": "Bolck and Stamouli (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Studies",
"sec_num": "1.1"
},
{
"text": "The LR framework has been proposed as a means of quantifying the weight of evidence for a variety of forensic evidence, including DNA (Evett and Weir, 1998) , voice Rose, 2002) , finger prints (Neumann et al., 2007) , handwriting (Chen et al., 2018; Hepler et al., 2012) , hair strands (Hoffmann, 1991) , MDMA tablets (Bolck et al., 2009) , evaporated gasoline residual (Vergeer et al., 2014) and earmarks (Champod et al., 2001) . Collected forensic items from known-(e.g. a suspect's known text samples) and questioned-source (e.g. text samples from the offender) can be evaluated by estimating the LR under two competing hypotheses. One specifying the prosecution (or the same-author) hypothesis ( ) , and the other the defence (or the different-author) hypothesis ( ). These are expressed as a ratio of condition probabilities as shown in Equation 1).",
"cite_spans": [
{
"start": 134,
"end": 156,
"text": "(Evett and Weir, 1998)",
"ref_id": "BIBREF22"
},
{
"start": 165,
"end": 176,
"text": "Rose, 2002)",
"ref_id": "BIBREF48"
},
{
"start": 193,
"end": 215,
"text": "(Neumann et al., 2007)",
"ref_id": "BIBREF44"
},
{
"start": 230,
"end": 249,
"text": "(Chen et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 250,
"end": 270,
"text": "Hepler et al., 2012)",
"ref_id": "BIBREF26"
},
{
"start": 286,
"end": 302,
"text": "(Hoffmann, 1991)",
"ref_id": "BIBREF28"
},
{
"start": 318,
"end": 338,
"text": "(Bolck et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 370,
"end": 392,
"text": "(Vergeer et al., 2014)",
"ref_id": "BIBREF55"
},
{
"start": 406,
"end": 428,
"text": "(Champod et al., 2001)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Ratio Framework",
"sec_num": "2"
},
{
"text": "= ( , | ) ( , | ) 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Ratio Framework",
"sec_num": "2"
},
{
"text": "where and are feature values obtained from the known-source and questioned-source respectively. The relative strength of the evidence with respect to the competing hypotheses is reflected in the magnitude of the LR. The more the LR deviates from unity (LR = 1), the greater support for either the (LR > 1) or the (LR < 1). The LR is concerned with the probability of evidence, given the hypothesis (either prosecution or defence), which is in concordance with the role of an expert witness in court, leaving the trier-of-fact to be concerned with the probability of either hypothesis, given the evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Likelihood Ratio Framework",
"sec_num": "2"
},
{
"text": "The two main approaches for estimating LRs, namely the score-and feature-based methods, will be implemented and their performance compared. After the database ( \u00a73.1) and the pre-processing and modelling techniques ( \u00a73.2) are introduced, the two methods are explained in \u00a73.3 and \u00a73.4, respectively, along with their pros and cons. Fusion/calibration techniques and performance metrics are described in \u00a73.5 and \u00a73.6, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Data for the experiments were systematically selected from the Amazon Product Data Authorship Verification Corpus 1 (Halvani et al., 2017) , which contains 21,534 product reviews posted by 3,228 reviewers on Amazon. Many of the reviewers contributed six or more reviews on different topics. Sizes of review texts are equalised to ca. 4 kB, which corresponds to approximately 700 words in length. From the corpus, the authors (= reviewers) who contributed more than six reviews longer than 700 words, were selected as the database for simulating offender vs. suspect comparisons. We decided on six reviews to maximise the number of same-author comparisons possible from the database. This resulted in 2,157 reviewers and a database containing a total of 12,942 review texts. Each review was further equalised to 700 words. The first three reviews of each author were grouped as source-known documents (i.e. suspect documents) and the second three reviews were grouped as source-unknown documents (i.e. offender documents). The total number of word tokens in each group was 2,100, which constitutes a realistic sample size for forensic studies in our casework experience. The database was evenly divided into three mutually exclusive test, background and development sub-databases, each consisting of documents from 719 authors.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Halvani et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Database",
"sec_num": "3.1"
},
{
"text": "The documents stored in the test database were used for assessing the FTC system performance by simulating same-author (SA) and different-author (DA) comparisons. From the 719 authors in the test database, 719 SA comparisons and 516,242 (= 719C2\u00d72) DA comparisons can be simulated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Database",
"sec_num": "3.1"
},
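{
"text": "As an editorial illustration (not part of the original study), the comparison counts above can be checked with a few lines of R: each of the 719 test authors yields one SA comparison (that author's suspect documents vs. the same author's offender documents), and each unordered pair of different authors yields two DA comparisons.\n\nn_authors <- 719 # authors in the test sub-database\nn_sa <- n_authors # one SA comparison per author\nn_da <- choose(n_authors, 2) * 2 # each author pair is compared in both directions\nc(n_sa, n_da) # 719 and 516242",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Database",
"sec_num": "3.1"
},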
{
"text": "The documents stored in the background database were used differently depending on the method. For the score-based method, they were used to train the score-to-LR conversion model, and in the feature-based method, they were used to assess the typicality of the documents under comparison.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Database",
"sec_num": "3.1"
},
{
"text": "For various reasons, including violation of modelling assumptions and data scarcity, the estimated LRs may not be well calibrated, in which case they cannot be interpreted as the strength of evidence (Morrison, 2013) . A development database is typically used to calibrate the raw LRs via logistic-regression. However, in this study it was found that the LRs derived from the score-based method were well calibrated to begin with; thus logistic-regression calibration was not required. The development database was only used to fuse and calibrate the LRs derived from the feature-based method in this study. A more detailed explanation on logistic regression fusion/calibration is given in \u00a73.5.",
"cite_spans": [
{
"start": 200,
"end": 216,
"text": "(Morrison, 2013)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Database",
"sec_num": "3.1"
},
{
"text": "The type of communication that the current study focuses on is the one-to-many type of communication. Although the selected database is designed specifically for authorship verification tests, it is not a forensic database. To the best of our knowledge, there are no databases available of real forensic messages, nor any specifically designed with forensic conditions in mind. Nevertheless, the database used in this study was judged to be the most appropriate of existing databases to simulate a forensic scenario involving one-to-many communication. The product reviews were written as personal opinions and assessments of a given product addressing a public audience, and the review messages have a clear purpose; conveying one's views to others. So, the content of the messages is focused and topic specific, like the malicious use of the one-to-many type of communication platforms (e.g. the spread of fake news, malicious intent and the defamation of individuals/organisations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Database",
"sec_num": "3.1"
},
{
"text": "The tokens() function from the quanteda library (Benoit et al., 2018) in R (R Core Team, 2017) was used to tokenise the texts with the default settings. That is, all characters were converted to lower case without punctuation marks being removed; punctuation marks are treated as single word tokens. In order to preserve individuating information in author's morpho-syntactic choices (HaCohen-Kerner et al., 2018; Omar and Hamouda, 2020) , no stemming algorithm was applied.",
"cite_spans": [
{
"start": 48,
"end": 69,
"text": "(Benoit et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 384,
"end": 413,
"text": "(HaCohen-Kerner et al., 2018;",
"ref_id": "BIBREF24"
},
{
"start": 414,
"end": 437,
"text": "Omar and Hamouda, 2020)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and Bag of Words Model",
"sec_num": "3.2"
},
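{
"text": "To make the pre-processing step concrete, the following R sketch (an editorial illustration with toy texts, not the study's data or code) tokenises documents with quanteda, builds a document-term matrix and extracts the most frequent words; in the study the top 400 words were used rather than the 5 shown here, and lower-casing is applied at the document-term-matrix stage.\n\nlibrary(quanteda)\n\ntexts <- c(doc1 = 'This product is great. I would buy it again!',\n           doc2 = 'Not what I expected; the battery died quickly.')\n\ntoks <- tokens(texts) # default settings: punctuation kept as separate tokens, no stemming\ndtm <- dfm(toks) # document-term matrix (lower-cased by default)\ntop_words <- names(topfeatures(dtm, 5)) # the N most frequent word types overall\ncounts <- as.matrix(dfm_match(dtm, features = top_words))\ncounts # per-document counts of the selected words",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and Bag of Words Model",
"sec_num": "3.2"
},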
{
"text": "The 400 most frequent occurring words in the entire dataset were selected as components for a bag-of-words model. The occurrences of these words were then counted for each document. More specifically, the documents (x, y) under comparison were modelled as the vectors (x = { 1 , 2 \u22ef } and y = { 1 , 2 \u22ef } ) with the word counts ( , \u2208 {1 \u22ef }, \u2208 { , }).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and Bag of Words Model",
"sec_num": "3.2"
},
{
"text": "In the experiments, the size (N) of the bag-ofwords vector is incremented by 5 from N = 5 to N = 20, and then by 20 until N = 400. The 400 most frequent words are sorted according to their frequencies in a descending order. N = 400 was chosen as the cap of the experiments because the experimental results showed the performance ceiling before N = 400.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and Bag of Words Model",
"sec_num": "3.2"
},
{
"text": "Measure (Baseline Model)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score-based Method with Distance",
"sec_num": "3.3"
},
{
"text": "Estimating LRs using score-based methods is common in the forensic sciences (Bolck et al., 2015; Chen et al., 2018; Garton et al., 2020; . For score-based methods, the evidence consists of scores, \u2206( , ), which are often measured as the distance between the suspect and offender samples. In this case, the LR can be estimated as the ratio of the two probability densities of the scores under the two competing hypothesis as given in Equation 2).",
"cite_spans": [
{
"start": 76,
"end": 96,
"text": "(Bolck et al., 2015;",
"ref_id": "BIBREF7"
},
{
"start": 97,
"end": 115,
"text": "Chen et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 116,
"end": 136,
"text": "Garton et al., 2020;",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Score-based Method with Distance",
"sec_num": "3.3"
},
{
"text": "= ( , | ) ( , | ) = (\u0394( , )| ) (\u0394( , )| ) 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score-based Method with Distance",
"sec_num": "3.3"
},
{
"text": "The probability densities are trained on the scores obtained from the SA and DA comparisons generated from a background database. That is, the probability densities are used as a score-to-LR conversion model. The Cosine distance was used as a baseline in the current study as its superior performance has been previously reported in authorship attribution studies (Evert et al., 2017; Smith and Aldridge, 2011) . The three documents from each group were concatenated as a document of 2,100 words for the score-based method. The count of each word was z-score normalised in order to avoid the most frequent words biasing the estimation of the LRs. The z-score normalised values were used to represent each document in the bag-of-words model described in \u00a73.2.",
"cite_spans": [
{
"start": 364,
"end": 384,
"text": "(Evert et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 385,
"end": 410,
"text": "Smith and Aldridge, 2011)",
"ref_id": "BIBREF52"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Score-based Method with Distance",
"sec_num": "3.3"
},
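{
"text": "The score-based pipeline described above can be sketched in R as follows (an editorial illustration with invented vectors and background scores, not the authors' implementation): a Cosine distance is computed between z-score-normalised count vectors, and the score is converted to an LR as the ratio of kernel density estimates trained on SA and DA scores from a background set.\n\ncosine_dist <- function(a, b) 1 - sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))\n\n# Toy z-score-normalised word-count vectors for a known and a questioned document\nk <- scale(c(10, 4, 1, 6, 0))[, 1] # known (suspect) document\nq <- scale(c(12, 3, 0, 5, 1))[, 1] # questioned (offender) document\ns <- cosine_dist(k, q) # the score Delta(x, y)\n\n# Placeholder background scores from simulated SA and DA comparisons\nsa_scores <- rbeta(500, 2, 8) # SA distances tend to be small\nda_scores <- rbeta(500, 8, 2) # DA distances tend to be large\n\n# Score-to-LR conversion: ratio of kernel densities evaluated at the observed score\ndens_at <- function(scores, x) { d <- density(scores); approx(d$x, d$y, xout = x, rule = 2)$y }\nlr <- dens_at(sa_scores, s) / dens_at(da_scores, s)\nlog10(lr)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Score-based Method with Distance",
"sec_num": "3.3"
},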
{
"text": "Score-based methods project the complex, multivariate feature vector into a univariate score space (Morrison and Enzinger, 2018: 47) . Its robustness and ease of implementation for various types of forensic evidence have been reported as benefits (Bolck et al., 2015) . However, information loss is inevitable due to the reduction in dimensionality. Another shortcoming is that score-based methods do not account for the typicality of the evidence. Because of these shortcomings, it is reported that the magnitude of the derived LRs is generally weak (Bolck et al., 2015; . Nevertheless, the approach has been widely studied across a variety of forensic evidence.",
"cite_spans": [
{
"start": 99,
"end": 132,
"text": "(Morrison and Enzinger, 2018: 47)",
"ref_id": null
},
{
"start": 247,
"end": 267,
"text": "(Bolck et al., 2015)",
"ref_id": "BIBREF7"
},
{
"start": 551,
"end": 571,
"text": "(Bolck et al., 2015;",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Score-based Method with Distance",
"sec_num": "3.3"
},
{
"text": "Feature-based methods maintain the multivariate structure of the data through estimation of the LR directly from the feature values (Bolck et al., 2015) . This has the potential to prevent information loss but comes at the cost of added model complexity and reduced computational efficiency. Featurebased methods allow the typicality, not only the similarity, of forensic data to be assessed. In feature-based methods, the LR is estimated as a ratio of two conditional probabilities, which express the similarity and typicality of the samples under comparison. These correspond respectively to the numerator and denominator of Equation 1). Similarity, in this context, refers to how similar/different the source-known and source-questioned documents are with respect to their measured properties, and typicality means how typical/atypical they are in the relevant population. In this study a Poisson distribution was used to construct the LR model. The probability mass function for the Poisson distribution is given in Equation 3) and the LR model in Equation 4).",
"cite_spans": [
{
"start": 132,
"end": 152,
"text": "(Bolck et al., 2015)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Method with Poisson Model",
"sec_num": "3.4"
},
{
"text": "( ; ) = \u2212 ! 3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Method with Poisson Model",
"sec_num": "3.4"
},
{
"text": "In Equation 3), \u03bb is the shape parameter which indicates the average number of events in the given time interval or space. That is, letting = ( 1, \u22ef ) and = ( 1, \u22ef ) be the counts of a given word for the suspect and offender documents, an LR for the pair of documents is estimated for the word by Equation 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Method with Poisson Model",
"sec_num": "3.4"
},
{
"text": "= ( , | ) ( , | ) = ( | , ) ( | ) = ( | ) ( | ) = \u220f ( | ) =1 \u220f ( | ) =1 = \u220f \u2212 ! =1 \u220f \u2212 ! =1 4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Method with Poisson Model",
"sec_num": "3.4"
},
{
"text": "where the is the mean of and the is the overall mean of the background database. Both the suspect and offender documents consist of three texts; thus = 3 . The second fraction of Equation 4) can be reduced to the third fraction by assuming that the probability of the feature values is independent of whether comes from the same source as or not, and that and are independent if is true. LRs were estimated separately for each of the 400 features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Method with Poisson Model",
"sec_num": "3.4"
},
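{
"text": "A minimal R sketch of the per-word LR in Equation 4) is given below (an editorial illustration with invented counts; a zero suspect mean would need smoothing in practice, which is not shown): the offender counts are scored under a Poisson with the suspect's mean (similarity) and under a Poisson with the background mean (typicality).\n\npoisson_word_lr <- function(x, y, lambda_bg) {\n  # x: counts of one word in the suspect's three texts\n  # y: counts of the same word in the offender's three texts\n  # lambda_bg: mean count of the word per text in the background database\n  lambda_x <- mean(x) # suspect's Poisson mean for this word\n  prod(dpois(y, lambda_x)) / prod(dpois(y, lambda_bg))\n}\n\n# Toy example for one word: the suspect uses it more often than the background rate\nx <- c(4, 6, 5); y <- c(5, 7, 4); lambda_bg <- 2.3\nlog10(poisson_word_lr(x, y, lambda_bg))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature-based Method with Poisson Model",
"sec_num": "3.4"
},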
{
"text": "If the LRs derived separately for the 400 features were independent of one another, they could be multiplied in a na\u00efve Bayesian manner for an over-all LR. However, it is known empirically that independence cannot be assumed (Argamon, 2008; Evert et al., 2017) . This means, they need to be fused instead, taking the correlations into consideration. Fusion enables us to combine and calibrate multiple parallel sets of LRs from different sets of features/models or even different forensic detection systems, with the output being calibrated LRs. Logistic-regression fusion/calibration (Br\u00fcmmer and du Preez, 2006 ) is a commonly used method for LR-based systems. A logistic-regression weight needs to be calculated for each set of LRs, as shown in Equation 5).",
"cite_spans": [
{
"start": 225,
"end": 240,
"text": "(Argamon, 2008;",
"ref_id": "BIBREF3"
},
{
"start": 241,
"end": 260,
"text": "Evert et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 585,
"end": 612,
"text": "(Br\u00fcmmer and du Preez, 2006",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic-Regression Fusion and Calibration",
"sec_num": "3.5"
},
{
"text": "Fused LR = 1 1 + 2 2 + 3 3 + \u22ef + + 5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic-Regression Fusion and Calibration",
"sec_num": "3.5"
},
{
"text": "where, 1, 2 , 3 \u2026 are the LRs of the first through nth set, and 1, 2 , 3 \u2026 are the corresponding logistic-regression weights for scaling. The logistic-regression weight for shifting is b. The weights are obtained from the LRs estimated for the SA and DA comparisons from documents in the development database. The number (N) of features to be fused were incremented by 5 from N = 5 to N = 20, and then by 20 until N = 400.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic-Regression Fusion and Calibration",
"sec_num": "3.5"
},
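{
"text": "The following R sketch illustrates logistic-regression fusion with glm() on synthetic development data (an editorial illustration; real systems typically use prior-weighted logistic regression, e.g. the FoCal toolkit, and balanced SA/DA proportions are assumed here so that the posterior log-odds can be read as fused log LRs).\n\nset.seed(1)\nn_sa <- 100; n_da <- 100\ndev_sa <- c(rep(1, n_sa), rep(0, n_da)) # 1 = same-author trial, 0 = different-author trial\ndev_llr <- cbind(f1 = c(rnorm(n_sa, 0.6), rnorm(n_da, -0.6)), # per-feature log10 LRs\n                 f2 = c(rnorm(n_sa, 0.3), rnorm(n_da, -0.3)))\n\nfit <- glm(dev_sa ~ dev_llr, family = binomial) # scaling weights a_i and shift b\n\ntest_llr <- rbind(c(0.8, 0.4), c(-1.0, -0.2)) # new trials to fuse\nfused_log_odds <- cbind(1, test_llr) %*% coef(fit)\nfused_log10_lr <- fused_log_odds / log(10) # natural-log odds converted to log10 LRs\nfused_log10_lr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic-Regression Fusion and Calibration",
"sec_num": "3.5"
},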
{
"text": "The same technique can be applied to a single set of LRs, in which case, logistic-regression is used only for calibration. However, it was not applied to the LRs derived with the score-based method as they were well-calibrated to start with.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logistic-Regression Fusion and Calibration",
"sec_num": "3.5"
},
{
"text": "The log-LR cost (Cllr), which is a gradient metric based on LR, was used to assess the performance of the FTC systems for the two different models (Baseline and Poisson). The calculation of Cllr is given in Equation 6) (Br\u00fcmmer and du Preez, 2006) ",
"cite_spans": [
{
"start": 219,
"end": 247,
"text": "(Br\u00fcmmer and du Preez, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics: Log-LR Cost",
"sec_num": "3.6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ". = 1 2 ( [ 1 \u2211 2 (1 + 1 ) ] + [ 1 \u2211 2 (1 + )]",
"eq_num": ") 6)"
}
],
"section": "Evaluation Metrics: Log-LR Cost",
"sec_num": "3.6"
},
{
"text": "In Equation 6), and are the number of SA and DA comparisons, and and are the LRs derived from the SA and DA comparisons, respectively. Cllr takes into account the magnitude of the LR values, and assigns them appropriate penalties. In Cllr, LRs that support the counter-factual hypotheses or, in other words, contrary-to-fact LRs (LR < 1 for SA comparisons and LR > 1 for DA comparisons) are heavily penalised and the magnitude of the penalty is proportional to how much the LRs deviate from unity. Optimum performance is achieved when Cllr = 0 and decreases as Cllr approaches and exceeds 1. Thus, the lower the Cllr value, the better the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics: Log-LR Cost",
"sec_num": "3.6"
},
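{
"text": "Equation 6) translates directly into R; the sketch below (an editorial illustration with made-up LR vectors) is reused in the calibration example that follows.\n\n# Cllr: mean log2 penalty over SA and DA trials (Equation 6)\ncllr <- function(lr_sa, lr_da) {\n  0.5 * (mean(log2(1 + 1 / lr_sa)) + mean(log2(1 + lr_da)))\n}\n\n# Toy LRs: SA trials should have LR > 1, DA trials LR < 1\nlr_sa <- c(30, 8, 2, 0.7) # 0.7 is a mildly contrary-to-fact SA trial\nlr_da <- c(0.02, 0.4, 1.5) # 1.5 is a mildly contrary-to-fact DA trial\ncllr(lr_sa, lr_da) # 0 is perfect; values near or above 1 indicate uninformative or misleading LRs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics: Log-LR Cost",
"sec_num": "3.6"
},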
{
"text": "The Cllr measures the overall performance of a system in terms of validity based on a cost function in which there are two main components of loss: discrimination loss (Cllr min ) and calibration loss (Cllr cal ) (Br\u00fcmmer and du Preez, 2006) . The former is obtained after the application of the pooled-adjacent-violators (PAV) transformation -an optimal non-parametric calibration procedure. The latter is obtained by subtracting the former from the Cllr. In this study, besides Cllr, Cllr min and Cllr cal are also referred to.",
"cite_spans": [
{
"start": 213,
"end": 241,
"text": "(Br\u00fcmmer and du Preez, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics: Log-LR Cost",
"sec_num": "3.6"
},
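{
"text": "The decomposition into discrimination and calibration loss can be sketched with the pooled-adjacent-violators transformation via isotonic regression (stats::isoreg); this is an editorial simplification of toolkit implementations such as FoCal, and it reuses the cllr() function sketched above.\n\n# Cllr_min: Cllr after optimal (PAV) calibration; Cllr_cal = Cllr - Cllr_min\ncllr_min <- function(lr_sa, lr_da) {\n  llr <- log10(c(lr_sa, lr_da))\n  lab <- c(rep(1, length(lr_sa)), rep(0, length(lr_da))) # 1 = SA, 0 = DA\n  ir <- isoreg(llr, lab) # PAV fit of the labels on the log LRs\n  ord <- if (is.null(ir$ord)) order(llr) else ir$ord\n  p <- numeric(length(llr)); p[ord] <- ir$yf # calibrated posteriors, back in trial order\n  prior_odds <- length(lr_sa) / length(lr_da)\n  lr <- (p / (1 - p)) / prior_odds # posterior odds divided by the empirical prior odds\n  cllr(lr[lab == 1], lr[lab == 0])\n}\n\ncllr_min(lr_sa = c(30, 8, 2, 0.7), lr_da = c(0.02, 0.4, 1.5))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics: Log-LR Cost",
"sec_num": "3.6"
},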
{
"text": "The magnitude of the LRs derived from the comparisons are visually presented using Tippett plots. Details on how to read a Tippett plot are given in \u00a75 when the plots are presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics: Log-LR Cost",
"sec_num": "3.6"
},
{
"text": "The Cllr values are plotted as a function of the number of features, separately for the Baseline model and the Poisson model in Figure 2 . The number of the features is incremented by 5 from N = 5 to N = 20, and then by 20 from N = 20 to N = 400. For example, N = 5 means that the overall LRs were obtained by fusing the LRs derived with the five most-frequently occurring words for the featurebased method. Whereas the scores, which are to be converted to the LRs, were measured based on the vector of the five most-frequent words for the score-based method. As can be observed from Figure 2 , the performance of both models improves en masse as the N increases until a certain N, after which the performance remains relatively unchanged (or falls slightly). The Baseline model's performance stays relatively stable for a higher number of N, while the performance of the Poisson model begins to decline after 180 features. Due to the deterioration with a large number of feature numbers, although the Poisson model outperforms the Baseline model overall, the Baseline model does better with N > 340.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 136,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 584,
"end": 592,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "The best performance, however, was observed for the Poisson with a lower number of features (Cllr = 0.26439; N = 180) relative to the Baseline model (Cllr = 0.35682; N = 260). The superior performance of the feature-based method (Poisson model) relative to the score-based method (Baseline model) conforms to the reports of previous studies on other types of evidence (Bolck et al., 2015; .",
"cite_spans": [
{
"start": 368,
"end": 388,
"text": "(Bolck et al., 2015;",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "As described earlier, the Baseline and the Poisson models exhibit different performance characteristics in terms of the number of features required for optimal Cllr and the effect of increasing N. The performance of the Baseline model stays relatively unchanged with more features, while the performance of the Poisson model continuously declines with more features. In order to further investigate this performance difference, the Cllr, Cllr min and Cllr cal values are plotted separately for the two models in Figure 3 . For the Baseline model, it can be seen from Figure 3a that 1) the Cllr cal values consistently remain close to 0, meaning the LRs are very well calibrated regardless of the number of features, and also that 2) the Cllr min values display an almost identical trend as the Cllr values, meaning that like the Cllr values, the discriminability potential remains relatively constant even with an increase in the feature number after the best-performing point. In contrast, the three metrics plotted in Figure 3b reveal some notably different characteristics of the Poisson model. The Cllr cal values stay low only until N = 140~160, after which the Cllr cal values start increasing at a constant rate with an increase in the feature number; that is, the LRs become less well calibrated as N increases beyond 140~160 features. Unlike the calibration loss (and the Baseline model), the discriminability potential, quantified by Cllr min , continues to improve at a small but constant rate, even after N = 180, where the best Cllr was observed. Thus, it is clear from Figure 3 that the deterioration of the Poisson model in performance after N = 180 is not due to a poor discrimination performance but due to a poor calibration performance. As explained in \u00a73.5, logistic-regression fusion/calibration should theoretically yield well calibrated LRs. The poor calibration performance observed for the Poisson model for large feature numbers may be due to the interaction between the dimensions of the LRs to be fused and the amount of the training data for the fusion/calibration weights. This seems to be a typical example of the phenomenon known as the 'curse of dimensionality' (Bellman, 1961: p. 97 ), but further analysis is warranted. Nevertheless, it is clear that the use of a Poisson-based model, which theoretically better suits the distributional pattern of textual data and allows the rarity/typicality of evidence to be considered for LR estimation, can offer performance gains.",
"cite_spans": [
{
"start": 2195,
"end": 2216,
"text": "(Bellman, 1961: p. 97",
"ref_id": null
}
],
"ref_spans": [
{
"start": 512,
"end": 520,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 567,
"end": 584,
"text": "Figure 3a that 1)",
"ref_id": "FIGREF0"
},
{
"start": 1020,
"end": 1029,
"text": "Figure 3b",
"ref_id": "FIGREF2"
},
{
"start": 1583,
"end": 1591,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "For the Poisson model, LRs were first estimated separately for each of the 400 feature words. The resulting LRs were fused by gradually increasing the number of LRs included in the fusion set. LRs were arranged according to word frequency in the experiments reported in \u00a74. Yet, the performance of a given feature (i.e. word) did not always correspond to the frequency of its occurrence. This is illustrated in Table 1 , which lists the ten most frequently occurring words and the ten words with the highest discriminability (i.e. Cllr min ). Thus, in this section, the words were first sorted according to their performance in terms of the Cllr min values, and then the LRs were fused/calibrated based on the sorted words. The Cllr values of the experiments are plotted in Figure 4 including the results presented in Figure 2 for comparison.",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 418,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 774,
"end": 782,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 818,
"end": 826,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "5"
},
{
"text": "It is clear from Figure 4 that selecting the features according to their Cllr min values contributes to an improvement in performance for all numbers of features. As a result, the Cllr is lower (0.21664) with less features (N = 140) compared to the results with the unsorted features. This feature selection approach was only possible because the LRs are estimated separately for the each of the 400 different words. This is possibly an advantage for the Poisson model.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "By word frequency",
"sec_num": null
},
{
"text": "The magnitude of the LRs with the best-performing settings are shown on Tippett plots, separately for the Baseline model, the original Poisson model, and the Poisson model with Cllr min -sorted features in Figure 5 . Tippet plots show the cumulative proportion of LRs from the SA comparisons (SALRs), which are plotted rising from the left, as well as of the LRs of the DA comparisons (DALRs), plotted rising from the right. For all Tippett plots, the cumulative proportion of trails is plotted on the y-axis against the log10 LRs on the xaxis. The intersection of the two curves is the equal error rate (EER) which indicates the operating point at which the miss and false alarm rates are equal.",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 214,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "By word frequency",
"sec_num": null
},
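{
"text": "Following the reading guide above, a Tippett plot can be sketched in base R (an editorial illustration with placeholder log10 LR vectors): the SA curve is the cumulative proportion of SALRs at or below each value, rising from the left, and the DA curve is the proportion of DALRs at or above each value, rising from the right.\n\nlog_sa <- sort(rnorm(719, mean = 1.2, sd = 1.0)) # placeholder SA log10 LRs\nlog_da <- sort(rnorm(5000, mean = -1.5, sd = 1.2)) # placeholder DA log10 LRs\n\nplot(log_sa, seq_along(log_sa) / length(log_sa), type = 's', col = 'blue',\n     xlim = range(c(log_sa, log_da)), ylim = c(0, 1),\n     xlab = 'log10 LR', ylab = 'Cumulative proportion')\nlines(log_da, rev(seq_along(log_da)) / length(log_da), type = 's', col = 'red')\nabline(v = 0, lty = 2) # log10 LR = 0, i.e. LR = 1\nlegend('left', legend = c('SA (rising from left)', 'DA (rising from right)'), col = c('blue', 'red'), lty = 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "By word frequency",
"sec_num": null
},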
{
"text": "As the low Cllr cal values indicate, it can also be observed from Figure 5 that the LRs are very well calibrated. However, comparing Figure 5a and Figure 5bc we see that the magnitude of the LRs are weaker overall in the Baseline model compared to the two Poisson models; the Tippet lines are further from unity (log10 LR = 0) for the Poisson models than the Baseline models. Although the overall magnitude of LRs is greater for the Poisson models, unlike the Baseline model, they evince some very strong contrary-to-fact DALRs (which are indicated by arrows in Figure 5 ). This is a concern, and the reason for this needs to be further investigated.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 133,
"end": 142,
"text": "Figure 5a",
"ref_id": "FIGREF4"
},
{
"start": 147,
"end": 157,
"text": "Figure 5bc",
"ref_id": "FIGREF4"
},
{
"start": 562,
"end": 570,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "By word frequency",
"sec_num": null
},
{
"text": "A feature-based approach for estimating forensic LRs was implemented with a Poisson model for the first time in LR-based FTC. The results of the experiments showed that the feature-based FTC system outperforms the score-based FTC system with the Cosine distance. It has also been demonstrated that the performance of the feature-based system can be further improved by selecting the sets of LRs to be fused according to their Cllr min values. It was observed that the discrimination loss in the feature-based FTC system reduces as the number of features increases, but becomes less well calibrated with a large number of features. It has been argued that this is a typical case of the 'curse of dimensionality' (Bellman, 1961: p. 97) , but further investigation is required.",
"cite_spans": [
{
"start": 711,
"end": 733,
"text": "(Bellman, 1961: p. 97)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Studies",
"sec_num": "6"
},
{
"text": "A simple one-level Poisson LR model shows good performance. However, it has been reported that word counts are often modelled poorly by standard parametric models such as the Binomial and Poisson models, and some alternatives have been proposed, such as the negative Binomial and the zero-inflated Poisson (Jansche, 2003; Pawitan, 2001 ). Alternatively, a two-level Poisson model might be implemented based if the prior distributions of is assumed (Aitken and Gold, 2013; Bolck and Stamouli, 2017) . These alternatives should be tested to see if any improvements in performance are achievable. The set of features tested in the current study is only one type of many potential authorship attribution features (according to Rudman (1997) , over 1,000 different feature types have so far been proposed in the literature). While the purpose of the present study was to compare modelling approaches, rather than the relative performance of different feature types, an interesting future task would be to explore a richer feature set and the effect of different pre-processing techniques (e.g. stop word removal).",
"cite_spans": [
{
"start": 306,
"end": 321,
"text": "(Jansche, 2003;",
"ref_id": "BIBREF38"
},
{
"start": 322,
"end": 335,
"text": "Pawitan, 2001",
"ref_id": "BIBREF47"
},
{
"start": 448,
"end": 471,
"text": "(Aitken and Gold, 2013;",
"ref_id": "BIBREF1"
},
{
"start": 472,
"end": 497,
"text": "Bolck and Stamouli, 2017)",
"ref_id": "BIBREF9"
},
{
"start": 723,
"end": 736,
"text": "Rudman (1997)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Studies",
"sec_num": "6"
},
{
"text": "The LRs derived using the score-based method were well-calibrated, and therefore logistic-regression calibration was not necessary). This was not the case for LRs using the feature-based method where logistic-regression fusion/calibration was required. This procedure necessitates an extra set of data, namely a development database, and is another shortcoming of the feature-based method applied in this study ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Studies",
"sec_num": "6"
},
{
"text": "http://bit.ly/1OjFRhJ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank the reviewers for their valuable comments. The first author's research is supported by an Australian Government Research Training Program Scholarship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Burrows-Delta method fitness for Arabic text authorship Stylometric detection",
"authors": [
{
"first": "A",
"middle": [
"A"
],
"last": "Abdulrazzaq",
"suffix": ""
},
{
"first": "T",
"middle": [
"K"
],
"last": "Mustafa",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Computer Science and Mobile Computing",
"volume": "3",
"issue": "6",
"pages": "69--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AbdulRazzaq, A. A. and Mustafa, T. K. (2014) Burrows-Delta method fitness for Arabic text authorship Stylometric detection. International Journal of Computer Science and Mobile Computing 3(6): 69-78.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evidence evaluation for discrete data",
"authors": [
{
"first": "C",
"middle": [
"G G"
],
"last": "Aitken",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gold",
"suffix": ""
}
],
"year": 2013,
"venue": "Forensic Science International",
"volume": "230",
"issue": "1-3",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aitken, C. G. G. and Gold, E. (2013) Evidence evaluation for discrete data. Forensic Science International 230(1-3): 147-155.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Interpreting Burrows's Delta: Geometric and probabilistic foundations",
"authors": [
{
"first": "S",
"middle": [],
"last": "Argamon",
"suffix": ""
}
],
"year": 2008,
"venue": "Literary and Linguistic Computing",
"volume": "23",
"issue": "2",
"pages": "131--147",
"other_ids": {
"DOI": [
"10.1093/llc/fqn003"
]
},
"num": null,
"urls": [],
"raw_text": "Argamon, S. (2008) Interpreting Burrows's Delta: Geometric and probabilistic foundations. Literary and Linguistic Computing 23(2): 131-147. https://dx.doi.org/10.1093/llc/fqn003",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Standards for the formulation of evaluative forensic science expert opinion",
"authors": [],
"year": 2009,
"venue": "Science & Justice",
"volume": "49",
"issue": "3",
"pages": "161--164",
"other_ids": {
"DOI": [
"10.1016/j.scijus.2009.07.004"
]
},
"num": null,
"urls": [],
"raw_text": "Association of Forensic Science Providers. (2009) Standards for the formulation of evaluative forensic science expert opinion. Science & Justice 49(3): 161-164. https://doi.org/10.1016/j.scijus.2009.07.004",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adaptive Control Processes: A Guided Tour",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Bellman",
"suffix": ""
}
],
"year": 1961,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bellman, R. E. (1961) Adaptive Control Processes: A Guided Tour. Princeton: Princeton University Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "quanteda: An R package for the quantitative analysis of textual data",
"authors": [
{
"first": "K",
"middle": [],
"last": "Benoit",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nulty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Obeng",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Open Source Software",
"volume": "3",
"issue": "30",
"pages": "774--776",
"other_ids": {
"DOI": [
"10.21105/joss.00774"
]
},
"num": null,
"urls": [],
"raw_text": "Benoit, K., Watanabe, K., Wang, H., Nulty, P., Obeng, A., M\u00fcller, S. and Matsuo, A. (2018) quanteda: An R package for the quantitative analysis of textual data. Journal of Open Source Software 3(30): 774- 776. https://doi.org/10.21105/joss.00774",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Evaluating score-and feature-based likelihood ratio models for multivariate continuous data: Applied to forensic MDMA comparison",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bolck",
"suffix": ""
},
{
"first": "H",
"middle": [
"F"
],
"last": "Ni",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lopatka",
"suffix": ""
}
],
"year": 2015,
"venue": "Law, Probability and Risk",
"volume": "14",
"issue": "3",
"pages": "243--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bolck, A., Ni, H. F. and Lopatka, M. (2015) Evaluating score-and feature-based likelihood ratio models for multivariate continuous data: Applied to forensic MDMA comparison. Law, Probability and Risk 14(3): 243-266.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Likelihood ratios for categorical evidence; Comparison of LR models applied to gunshot residue data",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bolck",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Stamouli",
"suffix": ""
}
],
"year": 2017,
"venue": "Law, Probability and Risk",
"volume": "16",
"issue": "2-3",
"pages": "71--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bolck, A. and Stamouli, A. (2017) Likelihood ratios for categorical evidence; Comparison of LR models applied to gunshot residue data. Law, Probability and Risk 16(2-3): 71-90.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Different likelihood ratio approaches to evaluate the strength of evidence of MDMA tablet comparisons",
"authors": [
{
"first": "A",
"middle": [],
"last": "Bolck",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Weyermann",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Dujourdy",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Esseiva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Berg",
"suffix": ""
}
],
"year": 2009,
"venue": "Forensic Science International",
"volume": "191",
"issue": "1-3",
"pages": "42--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bolck, A., Weyermann, C., Dujourdy, L., Esseiva, P. and van den Berg, J. (2009) Different likelihood ratio approaches to evaluate the strength of evidence of MDMA tablet comparisons. Forensic Science International 191(1-3): 42-51.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Applicationindependent evaluation of speaker detection",
"authors": [
{
"first": "N",
"middle": [],
"last": "Br\u00fcmmer",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Du Preez",
"suffix": ""
}
],
"year": 2006,
"venue": "Computer Speech and Language",
"volume": "20",
"issue": "2-3",
"pages": "230--275",
"other_ids": {
"DOI": [
"10.1016/j.csl.2005.08.001"
]
},
"num": null,
"urls": [],
"raw_text": "Br\u00fcmmer, N. and du Preez, J. (2006) Application- independent evaluation of speaker detection. Computer Speech and Language 20(2-3): 230-275. https://dx.doi.org/10.1016/j.csl.2005.08.001",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Delta': A measure of stylistic difference and a guide to likely authorship",
"authors": [
{
"first": "J",
"middle": [
"F"
],
"last": "Burrows",
"suffix": ""
}
],
"year": 2002,
"venue": "Literary and Linguistic Computing",
"volume": "17",
"issue": "3",
"pages": "267--287",
"other_ids": {
"DOI": [
"10.1093/llc/17.3.267"
]
},
"num": null,
"urls": [],
"raw_text": "Burrows, J. F. (2002) 'Delta': A measure of stylistic difference and a guide to likely authorship. Literary and Linguistic Computing 17(3): 267-287. https://dx.doi.org/10.1093/llc/17.3.267",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Earmarks as evidence: A critical review",
"authors": [
{
"first": "C",
"middle": [],
"last": "Champod",
"suffix": ""
},
{
"first": "I",
"middle": [
"W"
],
"last": "Evett",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kuchler",
"suffix": ""
}
],
"year": 2001,
"venue": "Journal of Forensic Sciences",
"volume": "46",
"issue": "6",
"pages": "1275--1284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Champod, C., Evett, I. W. and Kuchler, B. (2001) Earmarks as evidence: A critical review. Journal of Forensic Sciences 46(6): 1275-1284.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Assessment of signature handwriting evidence via score-based likelihood ratio based on comparative measurement of relevant dynamic features",
"authors": [
{
"first": "X",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Champod",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "S",
"middle": [
"P"
],
"last": "Shi",
"suffix": ""
},
{
"first": "Y",
"middle": [
"W"
],
"last": "Luo",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": ".",
"middle": [
"."
],
"last": "Lu",
"suffix": ""
},
{
"first": "Q",
"middle": [
"M"
],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Forensic Science International",
"volume": "282",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, X. H., Champod, C., Yang, X., Shi, S. P., Luo, Y. W., Wang, N., . . . Lu, Q. M. (2018) Assessment of signature handwriting evidence via score-based likelihood ratio based on comparative measurement of relevant dynamic features. Forensic Science International 282: 101-110.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Does size matter? Authorship attribution, small samples, big problem. Digital Scholarship in the Humanities",
"authors": [
{
"first": "M",
"middle": [],
"last": "Eder",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "30",
"issue": "",
"pages": "167--182",
"other_ids": {
"DOI": [
"10.1093/llc/fqt066"
]
},
"num": null,
"urls": [],
"raw_text": "Eder, M. (2015) Does size matter? Authorship attribution, small samples, big problem. Digital Scholarship in the Humanities 30(2): 167-182. https://doi.org/10.1093/llc/fqt066",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Understanding and explaining Delta measures for authorship attribution",
"authors": [
{
"first": "S",
"middle": [],
"last": "Evert",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Proisl",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Jannidis",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Reger",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pielstr\u00f6m",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Sch\u00f6ch",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Vitt",
"suffix": ""
}
],
"year": 2017,
"venue": "Digital Scholarship in the Humanities",
"volume": "32",
"issue": "suppl_2",
"pages": "4--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evert, S., Proisl, T., Jannidis, F., Reger, I., Pielstr\u00f6m, S., Sch\u00f6ch, C. and Vitt, T. (2017) Understanding and explaining Delta measures for authorship attribution. Digital Scholarship in the Humanities 32(suppl_2): ii4-ii16.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Interpreting DNA Evidence : Statistical Genetics for Forensic Scientists",
"authors": [
{
"first": "I",
"middle": [
"W"
],
"last": "Evett",
"suffix": ""
},
{
"first": "B",
"middle": [
"S"
],
"last": "Weir",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evett, I. W. and Weir, B. S. (1998) Interpreting DNA Evidence : Statistical Genetics for Forensic Scientists. Sunderland, Mass.: Sinauer Associates.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Score-based likelihood ratios to evaluate forensic pattern evidence",
"authors": [
{
"first": "N",
"middle": [],
"last": "Garton",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ommen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Niemi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Carriquiry",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.09470"
]
},
"num": null,
"urls": [],
"raw_text": "Garton, N., Ommen, D., Niemi, J. and Carriquiry, A. (2020). Score-based likelihood ratios to evaluate forensic pattern evidence. arXiv preprint arXiv:2002.09470. Retrieved on July 20 2020 from https://arxiv.org/abs/2002.09470",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Cross-domain authorship attribution: Author identification using char sequences, word unigrams, and POS-tags features",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Hacohen-Kerner",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yigal",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shayovitz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of Notebook for PAN at",
"volume": "2018",
"issue": "",
"pages": "1--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "HaCohen-Kerner, Y., Miller, D., Yigal, Y. and Shayovitz, E. (2018) Cross-domain authorship attribution: Author identification using char sequences, word unigrams, and POS-tags features. Proceedings of Notebook for PAN at CLEF 2018: 1- 14.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Authorship verification based on compressionmodels",
"authors": [
{
"first": "O",
"middle": [],
"last": "Halvani",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Winter",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Graner",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.00516"
]
},
"num": null,
"urls": [],
"raw_text": "Halvani, O., Winter, C. and Graner, L. (2017). Authorship verification based on compression- models. arXiv preprint arXiv:1706.00516. Retrieved on 25 June 2020 from http://arxiv.org/abs/1706.00516",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Score-based likelihood ratios for handwriting evidence",
"authors": [
{
"first": "A",
"middle": [
"B"
],
"last": "Hepler",
"suffix": ""
},
{
"first": "C",
"middle": [
"P"
],
"last": "Saunders",
"suffix": ""
},
{
"first": "L",
"middle": [
"J"
],
"last": "Davis",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Buscaglia",
"suffix": ""
}
],
"year": 2012,
"venue": "Forensic Science International",
"volume": "219",
"issue": "1-3",
"pages": "129--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hepler, A. B., Saunders, C. P., Davis, L. J. and Buscaglia, J. (2012) Score-based likelihood ratios for handwriting evidence. Forensic Science International 219(1-3): 129-140.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Statistical evaluation of the evidential value of human hairs possibly coming from multiple sources",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 1991,
"venue": "Journal of Forensic Sciences",
"volume": "36",
"issue": "4",
"pages": "1053--1058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoffmann, K. (1991) Statistical evaluation of the evidential value of human hairs possibly coming from multiple sources. Journal of Forensic Sciences 36(4): 1053-1058.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Delta prime? Literary and Linguistic Computing",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Hoover",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "19",
"issue": "",
"pages": "477--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoover, D. L. (2004a) Delta prime? Literary and Linguistic Computing 19(4): 477-495.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Testing Burrows's Delta",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Hoover",
"suffix": ""
}
],
"year": 2004,
"venue": "Literary and Linguistic Computing",
"volume": "19",
"issue": "4",
"pages": "453--475",
"other_ids": {
"DOI": [
"10.1093/llc/19.4.453"
]
},
"num": null,
"urls": [],
"raw_text": "Hoover, D. L. (2004b) Testing Burrows's Delta. Literary and Linguistic Computing 19(4): 453-475. https://dx.doi.org/10.1093/llc/19.4.453",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A likelihood ratio-based evaluation of strength of authorship attribution evidence in SMS messages using N-grams",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ishihara",
"suffix": ""
}
],
"year": 2014,
"venue": "International Journal of Speech Language and the Law",
"volume": "21",
"issue": "1",
"pages": "23--50",
"other_ids": {
"DOI": [
"10.1558/ijsll.v21i1.23"
]
},
"num": null,
"urls": [],
"raw_text": "Ishihara, S. (2014) A likelihood ratio-based evaluation of strength of authorship attribution evidence in SMS messages using N-grams. International Journal of Speech Language and the Law 21(1): 23- 50. http://dx.doi.org/10.1558/ijsll.v21i1.23",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Strength of forensic text comparison evidence from stylometric features: A multivariate likelihood ratio-based analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ishihara",
"suffix": ""
}
],
"year": 2017,
"venue": "The International Journal of Speech, Language and the Law",
"volume": "24",
"issue": "1",
"pages": "67--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ishihara, S. (2017a) Strength of forensic text comparison evidence from stylometric features: A multivariate likelihood ratio-based analysis. The International Journal of Speech, Language and the Law 24(1): 67-98.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Strength of linguistic text evidence: A fused forensic text comparison system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ishihara",
"suffix": ""
}
],
"year": 2017,
"venue": "Forensic Science International",
"volume": "278",
"issue": "",
"pages": "184--197",
"other_ids": {
"DOI": [
"10.1016/j.forsciint.2017.06.040"
]
},
"num": null,
"urls": [],
"raw_text": "Ishihara, S. (2017b) Strength of linguistic text evidence: A fused forensic text comparison system. Forensic Science International 278: 184-197. https://doi.org/10.1016/j.forsciint.2017.06.040",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Improving Burrows' Delta. An empirical evaluation of text distance measures",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jannidis",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pielstr\u00f6m",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Sch\u00f6ch",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Vitt",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of Digital Humanities",
"volume": "2015",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jannidis, F., Pielstr\u00f6m, S., Sch\u00f6ch, C. and Vitt, T. (2015) Improving Burrows' Delta. An empirical evaluation of text distance measures. Proceedings of Digital Humanities 2015: 1-10.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Parametric models of linguistic count data",
"authors": [
{
"first": "M",
"middle": [],
"last": "Jansche",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "288--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jansche, M. (2003) Parametric models of linguistic count data. Proceedings of Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics: 288-295.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Tutorial on logistic-regression calibration and fusion: Converting a score to a likelihood ratio",
"authors": [
{
"first": "G",
"middle": [
"S"
],
"last": "Morrison",
"suffix": ""
}
],
"year": 2013,
"venue": "Australian Journal of Forensic Sciences",
"volume": "45",
"issue": "2",
"pages": "173--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrison, G. S. (2013) Tutorial on logistic-regression calibration and fusion: Converting a score to a likelihood ratio. Australian Journal of Forensic Sciences 45(2): 173-197.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Score based procedures for the calculation of forensic likelihood ratios -Scores should take account of both similarity and typicality",
"authors": [
{
"first": "G",
"middle": [
"S"
],
"last": "Morrison",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Enzinger",
"suffix": ""
}
],
"year": 2018,
"venue": "Science & Justice",
"volume": "58",
"issue": "1",
"pages": "47--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrison, G. S. and Enzinger, E. (2018) Score based procedures for the calculation of forensic likelihood ratios -Scores should take account of both similarity and typicality. Science & Justice 58(1): 47-58.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Forensic speech science",
"authors": [
{
"first": "G",
"middle": [
"S"
],
"last": "Morrison",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Enzinger",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Expert Evidence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morrison, G. S., Enzinger, E. and Zhang, C. (2018) Forensic speech science. In I. Freckelton and H. Selby (eds.), Expert Evidence. Sydney, Australia: Thomson Reuters.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Computation of likelihood ratios in fingerprint identification for configurations of any number of minutiae",
"authors": [
{
"first": "C",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Champod",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Puch-Solis",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Egli",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Anthonioz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bromage-Griffiths",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Forensic Sciences",
"volume": "52",
"issue": "1",
"pages": "54--64",
"other_ids": {
"DOI": [
"10.1111/j.1556-4029.2006.00327.x"
]
},
"num": null,
"urls": [],
"raw_text": "Neumann, C., Champod, C., Puch-Solis, R., Egli, N., Anthonioz, A. and Bromage-Griffiths, A. (2007) Computation of likelihood ratios in fingerprint identification for configurations of any number of minutiae. Journal of Forensic Sciences 52(1): 54- 64. https://dx.doi.org/10.1111/j.1556- 4029.2006.00327.x",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "The effectiveness of stemming in the stylometric authorship attribution in Arabic",
"authors": [
{
"first": "A",
"middle": [],
"last": "Omar",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hamouda",
"suffix": ""
}
],
"year": 2020,
"venue": "International Journal of Advanced Computer Science and Applications",
"volume": "11",
"issue": "1",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar, A. and Hamouda, W. (2020) The effectiveness of stemming in the stylometric authorship attribution in Arabic. International Journal of Advanced Computer Science and Applications 11(1): 116-121.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "All Likelihood : Statistical Modelling and Inference Using Likelihood",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Pawitan",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pawitan, Y. (2001) In All Likelihood : Statistical Modelling and Inference Using Likelihood. Oxford: Oxford University Press.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Forensic Speaker Identification",
"authors": [
{
"first": "P",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rose, P. (2002) Forensic Speaker Identification. London: Taylor & Francis.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "The state of authorship attribution studies: Some problems and solutions",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rudman",
"suffix": ""
}
],
"year": 1997,
"venue": "Computers and the Humanities",
"volume": "31",
"issue": "4",
"pages": "351--365",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudman, J. (1997) The state of authorship attribution studies: Some problems and solutions. Computers and the Humanities 31(4): 351-365.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Deeper Delta across genres and languages: Do we really need the most frequent words?",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rybicki",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Eder",
"suffix": ""
}
],
"year": 2011,
"venue": "Literary and Linguistic Computing",
"volume": "26",
"issue": "3",
"pages": "315--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rybicki, J. and Eder, M. (2011) Deeper Delta across genres and languages: Do we really need the most frequent words? Literary and Linguistic Computing 26(3): 315-321. https://dx.doi.org/0.1093/llc/fqr031",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Improving authorship attribution: Optimizing Burrows' Delta method",
"authors": [
{
"first": "P",
"middle": [
"W H"
],
"last": "Smith",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Aldridge",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Quantitative Linguistics",
"volume": "18",
"issue": "1",
"pages": "63--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smith, P. W. H. and Aldridge, W. (2011) Improving authorship attribution: Optimizing Burrows' Delta method. Journal of Quantitative Linguistics 18(1): 63-88.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "How similar are Heimskringla and Egils saga? An application of Burrows' delta to Icelandic texts",
"authors": [
{
"first": "H",
"middle": [],
"last": "\u00deorgeirsson",
"suffix": ""
}
],
"year": 2018,
"venue": "European Journal of Scandinavian Studies",
"volume": "48",
"issue": "1",
"pages": "1--18",
"other_ids": {
"DOI": [
"10.1515/ejss-2018-0001"
]
},
"num": null,
"urls": [],
"raw_text": "\u00deorgeirsson, H. (2018) How similar are Heimskringla and Egils saga? An application of Burrows' delta to Icelandic texts. European Journal of Scandinavian Studies 48(1): 1-18. https://doi.org/10.1515/ejss- 2018-0001",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Likelihood ratio methods for forensic comparison of evaporated gasoline residues",
"authors": [
{
"first": "P",
"middle": [],
"last": "Vergeer",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bolck",
"suffix": ""
},
{
"first": "L",
"middle": [
"J C"
],
"last": "Peschier",
"suffix": ""
},
{
"first": "C",
"middle": [
"E H"
],
"last": "Berger",
"suffix": ""
},
{
"first": "J",
"middle": [
"N"
],
"last": "Hendrikse",
"suffix": ""
}
],
"year": 2014,
"venue": "Science & Justice",
"volume": "54",
"issue": "6",
"pages": "401--411",
"other_ids": {
"DOI": [
"10.1016/j.scijus.2014.04.008"
]
},
"num": null,
"urls": [],
"raw_text": "Vergeer, P., Bolck, A., Peschier, L. J. C., Berger, C. E. H. and Hendrikse, J. N. (2014) Likelihood ratio methods for forensic comparison of evaporated gasoline residues. Science & Justice 54(6): 401-411. https://dx.doi.org/10.1016/j.scijus.2014.04.008",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Histograms showing the distributional patterns of the counts of three words from the database; 'a', 'not' and 'they' for Panel a), b) and c), respectively. They are the 10 th , 25 th and 38 th most frequently-occurring words in the database."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The Cllr values of the LRs with the N number of features indicated in the y-axis are plotted separately for the Baseline and the Poisson models. The features are sorted according to the frequencies of the words. The large circles indicate the best Cllr values for the models."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The Cllr, Cllr min and Cllr cal values of the LRs, with the N number of features indicated in the y-axis, are plotted separately for the Baseline (Panel a) and the Poisson (Panel b) models. The features are sorted according to word frequency. The vertical solid line indicates where the best Cllr value was obtained."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The Cllr values of the (fused) LRs with the N number of Cllr min -sorted features indicated in the y-axis are plotted together with the results presented inFigure 2for comparisons. The large circles indicate the best Cllr values for the models."
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Tippett plots showing the magnitude of the derived LRs. Panel a) = Best-performing Baseline model; Panel b) = Best-performing original Poisson model; Panel c) = Best-performing Poisson model with sorted features according to their Cllr min values. Note that some LRs extend beyond \u00b115 of the y-axis. Arrows indicate very strong contrary-to-fact DALRs."
},
"TABREF1": {
"type_str": "table",
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>: Ten most-frequent (left) and lowest-Cllr min</td></tr><tr><td>(right) words</td></tr></table>"
}
}
}
}