|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:58:02.195983Z" |
|
}, |
|
"title": "\"Shakespeare in the Vectorian Age\" -An evaluation of different word embeddings and NLP parameters for the detection of Shakespeare quotes", |
|
"authors": [ |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Liebl", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Computational Humanities Group Leipzig University", |
|
"institution": "", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Burghardt", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Computational Humanities Group Leipzig University", |
|
"institution": "", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we describe an approach for the computer-aided identification of Shakespearean intertextuality in a corpus of contemporary fiction. We present the Vectorian, which is a framework that implements different word embeddings and various NLP parameters. The Vectorian works like a search engine, i.e. a Shakespearean phrase can be entered as a query, the underlying collection of fiction books is then searched for the phrase and the passages that are likely to contain the phrase, either verbatim or as a paraphrase, are presented in a ranked results list. While the Vectorian can be used via a GUI, in which many different parameters can be set and combined manually, in this paper we present an ablation study that automatically evaluates different embedding and NLP parameter combinations against a ground truth. We investigate the behavior of different parameters during the evaluation and discuss how our results may be used for future studies on the detection of Shakespearean intertextuality.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we describe an approach for the computer-aided identification of Shakespearean intertextuality in a corpus of contemporary fiction. We present the Vectorian, which is a framework that implements different word embeddings and various NLP parameters. The Vectorian works like a search engine, i.e. a Shakespearean phrase can be entered as a query, the underlying collection of fiction books is then searched for the phrase and the passages that are likely to contain the phrase, either verbatim or as a paraphrase, are presented in a ranked results list. While the Vectorian can be used via a GUI, in which many different parameters can be set and combined manually, in this paper we present an ablation study that automatically evaluates different embedding and NLP parameter combinations against a ground truth. We investigate the behavior of different parameters during the evaluation and discuss how our results may be used for future studies on the detection of Shakespearean intertextuality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Shakespeare is everywhere. Intertextual references to the works of the eternal bard can be found across all temporal and medial boundaries, making him not only the most cited and most performed author of all time, but also the most studied author in the world (Garber, 2005; Maxwell and Rumbold, 2018) . But even though countless studies on Shakespearean intertextuality have examined individual aspects of his work by means of close reading, there is still no overview, no big picture, no systematic map of intertextual Shakespeare references for larger text corpora. It is also striking that up to now hardly any computational approaches have been used to detect Shakespeare references on a larger scale. This is all the more surprising as there are many methods in the fields of computer science and natural language processing for determining the similarity between texts (B\u00e4r et al., 2012) , which actually can be seen as a formal definition of intertextuality.", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 274, |
|
"text": "(Garber, 2005;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 301, |
|
"text": "Maxwell and Rumbold, 2018)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 876, |
|
"end": 894, |
|
"text": "(B\u00e4r et al., 2012)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We acknowledge that the full range of intertextual phenomena cannot be covered by mere means of text similarity determination. For our understanding of intertextuality we therefore refer to the definition of G\u00e9rard Genette, who defines it as \"the effective presence of one text in another text\" 1 (Genette, 1993) , where we understand the effective presence of one text in another to be a more or less objectively recognizable, explicit reference on the surface of the text. Thus, our approach will not be able to detect highly implicit and indirect references that require a lot of domain knowledge and context. The following variant of a well-known quotation from Macbeth (Shakespeare's original variant is given in square brackets) would, however, be objectively recognizable from the text and clearly classified as an intertextual reference: By the stinking [pricking] of my nose [thumbs] , something evil [wicked] this way goes [comes] . (Terry Pratchett: \"I Shall Wear Midnight\").", |
|
"cite_spans": [ |
|
{ |
|
"start": 297, |
|
"end": 312, |
|
"text": "(Genette, 1993)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 884, |
|
"end": 892, |
|
"text": "[thumbs]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 910, |
|
"end": 918, |
|
"text": "[wicked]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 933, |
|
"end": 940, |
|
"text": "[comes]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to identify such objectively recognizable references in an automated way, we present an approach that investigates the potential of word embeddings (Mikolov et al., 2013) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 179, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "1 Original quote: \"la pr\u00e9sence effective d'un text dans un autre\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "in combination with", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "other related parameters (e.g., weighting based on POS types, order of POS types, etc.). As we are aware that different word embeddings and NLP parameters will influence the results in very specific ways, we present an ablation study in which we systematically explore the effects of different parameter combinations. We hope that our evaluation will shed some more light on the role of different embeddings and NLP parameters for the detection of intertextuality in the sense of Molnar's desideratum of \"interpretable machine learning\" (Molnar, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 537, |
|
"end": 551, |
|
"text": "(Molnar, 2020)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "in combination with", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While text reuse detection (Agirre et al., 2016; B\u00e4r et al., 2012) mainly finds application in the context of plagiarism detection and the identification of duplicate websites, there are also productive applications in the digital humanities. One example can be found in the project Digital Breadcrumbs of Brothers Grimm (Franzini et al., 2017) , where computational text reuse methods are used to detect motifs of fairy tales across different languages and versions. Labb\u00e9 and Labb\u00e9 (2005) present a tool in the intersection of stylometry and text reuse, as they use intertextual distance to classify texts from French literature. Ganascia et al. (2014) describe an approach for the automatic detection of textual reuses in different works of Balzac and his contemporaries . Apart from these example studies, the majority of existing research on text reuse in the Digital Humanities can be located in the field of historical languages and classic studies (Bamman and Crane, 2008; B\u00fcchler et al., 2013; Coffee et al., 2012a; Coffee et al., 2012b; Forstall et al., 2015; Scheirer et al., 2014) While clearly there has been interesting work on the problem of text reuse and intertextuality detection in various areas of the digital humanities, there are only very few studies that use computational methods to detect Shakespeare quotes (Burghardt et al., 2019; Hohl-Trillini, 2019; Molz, 2019) . With this paper we contribute to computational intertextuality detection in Shakespeare studies by exploring a set of parameters that can enhance approaches to searching references based on word embeddings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 48, |
|
"text": "(Agirre et al., 2016;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 49, |
|
"end": 66, |
|
"text": "B\u00e4r et al., 2012)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 321, |
|
"end": 344, |
|
"text": "(Franzini et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 490, |
|
"text": "Labb\u00e9 and Labb\u00e9 (2005)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 632, |
|
"end": 654, |
|
"text": "Ganascia et al. (2014)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 956, |
|
"end": 980, |
|
"text": "(Bamman and Crane, 2008;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 981, |
|
"end": 1002, |
|
"text": "B\u00fcchler et al., 2013;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1003, |
|
"end": 1024, |
|
"text": "Coffee et al., 2012a;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1025, |
|
"end": 1046, |
|
"text": "Coffee et al., 2012b;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1047, |
|
"end": 1069, |
|
"text": "Forstall et al., 2015;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1070, |
|
"end": 1092, |
|
"text": "Scheirer et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1334, |
|
"end": 1358, |
|
"text": "(Burghardt et al., 2019;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1359, |
|
"end": 1379, |
|
"text": "Hohl-Trillini, 2019;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1380, |
|
"end": 1391, |
|
"text": "Molz, 2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 System design: Introducing \"The Vectorian\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The Vectorian 2 is a high-performance sentence alignment search engine 3 designed around a number of explicit parameters that model different approaches to scoring sentence similarities. The search engine is also accessible via an internal batch interface for doing large hyperparameter searches, as is the case for the study presented in this paper. Throughout the rest of the section we describe the various parameters of the Vectorian step by step. An overview of the architecture is shown in Figure 1 . In a preprocessing step (top half of Figure 1 ), we first split the search corpus into sentences and detect POS tags for each token of the sentences. Since the corpus in which we search for Shakespeare quotations consists entirely of contemporary literature, the use of spaCy 2.3.2 and the en core web lg-2.3.1 is unproblematic. When running a query -i.e. specific Shakespeare quotes -we run the same preprocessing on the query text but skip sentence splitting and assume a single sentence. We are aware that the selected spaCy model is not optimal for Shakespeare quotes because it does not reproduce details of Early Modern English. However, having looked at a number of samples and the assigned POS tags, we think it works good enough for this first pilot study. We plan to implement a language model that is more specific to Shakespeare as a future step. Next, the Vectorian computes similarities between tokens from the query and the search corpus based on precomputed contemporary word embeddings like fasttext. At this point, only word tokens are used and punctuation is ignored completely. For the Shakespearean text in the query, contemporary word embeddings pose an obvious challenge due to shifts in word meaning. We currently do not leverage historical word embeddings like HistWords (Hamilton et al., 2016) , but plan to incorporate these in future extensions of this work. After preprocessing and storing the data in an efficient in-memory format suitable for high-performance realtime searches over a large corpus (see bottom right in Figure 1 ), we compute alignments based on similarity scores between the tokens (see bottom left in Figure 1 ). Ultimately, scores are derived from the embeddings and controlled by various parameters -the details are contained in a pipeline we refer to as SIM FULL (see big box in Figure 1 ) and which will be described in more detail in the next sections (see the right half of Figure 2 ). Given the token similarity scores, we find optimal alignments on the sentence level using the Waterman-Smith-Beyer algorithm (Waterman et al., 1976 ), which we leverage through an optimized and highly customizable implementation 4 . In the following we provide details on the ten parameters that are shown in Figures 1 and 2. These parameters tackle three different areas of similarity measures we found worth considering. (1) Three parameters (XDT, SPW, PMP) are concerned with how exactly part of speech (POS) tags contribute to the similarity computation. (2) Five parameters (EMI, ESM, IFS, SIF, SIT) are concerned with how to exactly compute a scalar similarity score from word embeddings. (3) Finally, two parameters (MLP, SBO) control details of how alignments are scored.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1803, |
|
"end": 1826, |
|
"text": "(Hamilton et al., 2016)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 2573, |
|
"end": 2595, |
|
"text": "(Waterman et al., 1976", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 496, |
|
"end": 504, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 552, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 2057, |
|
"end": 2065, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 2157, |
|
"end": 2165, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 2338, |
|
"end": 2346, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 2436, |
|
"end": 2444, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
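
{

"text": "To make the preprocessing step concrete, the following minimal Python sketch (an assumed typical setup, not the Vectorian's actual code) splits a corpus text into sentences with spaCy, keeps universal POS tags and drops punctuation, mirroring the pipeline described above:\n\nimport spacy\n\nnlp = spacy.load(\"en_core_web_lg\")  # the model family named above\n\ndef preprocess(text):\n    # one list per sentence, each holding (token, universal POS tag) pairs;\n    # punctuation is ignored completely, as in the Vectorian\n    doc = nlp(text)\n    return [[(t.text, t.pos_) for t in sent if not t.is_punct]\n            for sent in doc.sents]\n\nA query is handled the same way, except that sentence splitting is skipped and the whole query is treated as a single sentence.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Related work",

"sec_num": "2"

},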
|
{ |
|
"text": "Exclude Determiners (XDT). A Boolean parameter. If enabled, it will perform a search as if all tokens in query and corpus that have been tagged with the universal POS tag 5 DET have been removed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for POS Tag Influence", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A numeric parameter between 0 and 1. If set to 0, the similarity between two tokens is directly computed from the configured embedding metrics. If set to 1, token pair scores are weighted with the corpus token's PennTree POS tag (Taylor et al., 2003) using the weights given by Batanovi\u0107 and Boji\u0107 (Batanovi\u0107 and Boji\u0107, 2015) . As a result, and following the argumentation of Batanovi\u0107 and Boji\u0107, some tokens (e.g. VBP) will have a greater influence on the final similarity scores than others (e.g. NN). If w is a token's weight according to Batanovi\u0107 and Boji\u0107, we compute an overall token weight as (1 SPW ) + (SPW \u21e5 w). Therefore, reducing SPW will gradually equalize these weights.", |
|
"cite_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 250, |
|
"text": "(Taylor et al., 2003)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 325, |
|
"text": "Batanovi\u0107 and Boji\u0107 (Batanovi\u0107 and Boji\u0107, 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic POS Weighting (SPW).", |
|
"sec_num": null |
|
}, |
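
{

"text": "A minimal sketch of the SPW weighting formula above (function name hypothetical):\n\ndef spw_token_weight(spw, w):\n    # w: the Batanovi\u0107/Boji\u0107 weight for the corpus token's POS tag\n    # spw = 0 -> uniform weight of 1; spw = 1 -> the full POS weight w\n    return (1 - spw) + spw * w\n\nFor example, with w = 0.5 an SPW of 0.5 yields a token weight of 0.75, halfway between the uniform and the fully POS-weighted case.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Semantic POS Weighting (SPW).",

"sec_num": null

},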
|
{ |
|
"text": "A numeric parameter between 0 and 1 that penalizes the similarity scores of token pairs if their universal POS tags do not match -i.e. giving tokens a lower score, even if the embedding considers them very similar. If set to 1, any POS mismatch will reduce the similarity score to 0, regardless of the embedding score. A value of 0 completely ignores POS mismatches. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "POS Mismatch Penalty (PMP).", |
|
"sec_num": null |
|
}, |
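
{

"text": "The endpoints of PMP are fixed by the description above (1 zeroes the score on a POS mismatch, 0 ignores mismatches); the linear interpolation between them sketched here is our assumption, as the exact penalty curve is not spelled out:\n\ndef apply_pmp(sim, pos_a, pos_b, pmp):\n    # penalize the embedding similarity if the universal POS tags differ\n    if pos_a != pos_b:\n        return sim * (1 - pmp)  # pmp = 1 -> 0.0, pmp = 0 -> sim unchanged\n    return sim",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "POS Mismatch Penalty (PMP).",

"sec_num": null

},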
|
{ |
|
"text": "Embedding Interpolation (EMI). A numeric parameter between 0 and 1 that specifies a mixing of two embeddings. For our experiments, we use the official pretrained fasttext embeddings (Mikolov et al., 2018) and wnet2vec (Saedi et al., 2018) embeddings 6 . These two candidates were chosen as typical proponents of very different kinds of precomputed word embeddings: whereas fasttext is an established iteration of the word2vec school that are trained on unstructured corpora, wnet2vec is based on \"ontological graphs\" (Saedi et al., 2018) , namely WordNet. By combining these embeddings, we hope to investigate if combining very different approaches can yield a benefit. Our mixing computes a maximum similarity: if t is the value for EMI, we compute the mixed similarity s 0 from two original similarities s 1 and s 2 as s 0 = max(2s 1 (1 t), 2s 2 t). Therefore, a value of 0 indicates that only fasttext scores are used, whereas a value of 0.5 indicates that for each token pair, the maximum similarity found in either embedding is used.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 204, |
|
"text": "(Mikolov et al., 2018)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 537, |
|
"text": "(Saedi et al., 2018)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Embedding Similarity Computation", |
|
"sec_num": "3.2" |
|
}, |
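
{

"text": "A direct transcription of the EMI mixing formula into Python (names hypothetical):\n\ndef emi_similarity(s1, s2, t):\n    # s1: fasttext similarity, s2: wnet2vec similarity, t: the EMI value\n    # t = 0 -> max(2*s1, 0), i.e. only fasttext contributes;\n    # t = 0.5 -> max(s1, s2), the per-pair maximum over both embeddings\n    return max(2 * s1 * (1 - t), 2 * s2 * t)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parameters for Embedding Similarity Computation",

"sec_num": "3.2"

},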
|
{ |
|
"text": "Embedding Similarity Measure (ESM). Specifies how two word vectors from an embedding are turned into a scalar similarity score. The Vectorian supports three strategies: cosine similarity, the noniterative contextual dissimilarity measure by Jegou et al. (Jegou et al., 2010 ) with a neighborhood size of 100 elements and finally the rank-based similarity metric by Santus et al. (Santus et al., 2018) . We refer to these strategies as cosine, nicdm and apsynp respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 273, |
|
"text": "(Jegou et al., 2010", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 379, |
|
"end": 400, |
|
"text": "(Santus et al., 2018)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Embedding Similarity Computation", |
|
"sec_num": "3.2" |
|
}, |
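
{

"text": "Of the three ESM strategies, cosine is the only self-contained one; a minimal sketch is given below (nicdm and apsynp additionally require neighborhood statistics over the embedding space and are omitted here):\n\nimport numpy as np\n\ndef cosine_similarity(a, b):\n    # scalar similarity of two word vectors a and b\n    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parameters for Embedding Similarity Computation",

"sec_num": "3.2"

},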
|
{ |
|
"text": "Inverse Frequency Scaling (IFS). A numeric parameter between 0 and 1 that weights similarity scores with a token's inverse probability of occurence in a typical corpus. If set to to 0, no such weighting takes place, if set to 1, the similarity score for rarer words will get boosted. Specifially, if p is a token's negative log probability and s is the similarity, a new similarity score s 0 is computed as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Embedding Similarity Computation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "s 0 = s \u21e4 ( p) IFS (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Embedding Similarity Computation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This is a rather simplistic approach, as we currently do not model approaches such as tf-idf (Leskovec et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 116, |
|
"text": "(Leskovec et al., 2020)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Embedding Similarity Computation", |
|
"sec_num": "3.2" |
|
}, |
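
{

"text": "A sketch of the IFS weighting as given in equation (1), where p is the token's negative log probability:\n\ndef ifs_similarity(s, p, ifs):\n    # ifs = 0 -> p**0 == 1, i.e. no weighting; ifs = 1 -> full boost for\n    # rare words, which have a larger negative log probability p\n    return s * p ** ifs",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parameters for Embedding Similarity Computation",

"sec_num": "3.2"

},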
|
{ |
|
"text": "Similarity Falloff (SIF). A numeric parameter between 0 and 1 that rescales similarity scores before POS weighting. This can help to increase the distance between high and low scores. Each similarity score s is rescaled to s SIF . A value of 1 obviously disables rescaling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Embedding Similarity Computation", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Similarity Threshold (SIT). A numeric threshold between 0 and 1 that is applied to similarity scores after POS weighting. Any score below this value will be set to 0 for further processing. This has the effect of reducing noise from unwanted low similarities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Embedding Similarity Computation", |
|
"sec_num": "3.2" |
|
}, |
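
{

"text": "SIF and SIT are two small scalar transformations; the sketch below shows both, with the caveat that in the actual pipeline the POS weighting of Section 3.1 is applied between them:\n\ndef apply_sif(s, sif):\n    # rescale before POS weighting; sif = 1 leaves the score unchanged\n    return s ** sif\n\ndef apply_sit(s, sit):\n    # threshold after POS weighting; suppresses noise from low similarities\n    return s if s >= sit else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parameters for Embedding Similarity Computation",

"sec_num": "3.2"

},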
|
{ |
|
"text": "Mismatch Length Penalty (MLP). An integer value indicating that length of a mismatch -in number of tokens -that will reduce the similarity score by 0.5 -the maximum possible score being 1. Low values will enforce no or only short mismatches, whereas higher values allow longer runs of mismatching tokens. The score penalty is modelled as an exponential function. For a mismatch of length n we compute the penalty as 1 2 ( n MLP )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Alignment Scoring", |
|
"sec_num": "3.3" |
|
}, |
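
{

"text": "The MLP penalty from equation (2) in Python (function name hypothetical):\n\ndef mlp_penalty(n, mlp):\n    # penalty for a mismatch run of n tokens; equals exactly 0.5 when n == mlp\n    return 0.5 ** (n / mlp)\n\nWith MLP = 1, a two-token mismatch already costs a factor of 0.25, which matches the strict alignment behavior discussed in Section 5.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parameters for Alignment Scoring",

"sec_num": "3.3"

},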
|
{ |
|
"text": "Submatch Boosting (SBO). A numeric parameter that models the score if only parts of the query get matched. Specifially, if a query contains n tokens of which m have been matched, and each individual token has a maximum score of 1 and the sum of matched token scores is s, then we compute an overall score s 0 using a discount factor \u21b5 as follows 7 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Alignment Scoring", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\u21b5 = \u2713 n m n \u25c6 SBO", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Alignment Scoring", |
|
"sec_num": "3.3" |
|
}, |
|
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s 0 = s m + \u21b5(n m)", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Parameters for Alignment Scoring", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "A value of 0 therefore indicates that no special submatch weighting takes place. Values larger than 0 decrease the impact of non-matched tokens in the overall scores, thereby making partial matches obtain higher scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameters for Alignment Scoring", |
|
"sec_num": "3.3" |
|
}, |
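
{

"text": "A sketch combining equations (3) and (4), under the assumption that equation (4) normalizes the matched score sum s by m + \u03b1(n - m):\n\ndef sbo_score(s, n, m, sbo):\n    # s: sum of matched token scores; n: query tokens; m: matched tokens\n    alpha = ((n - m) / n) ** sbo  # sbo = 0 -> alpha = 1 -> plain mean s / n\n    return s / (m + alpha * (n - m))\n\nFor sbo > 0, alpha shrinks below 1, unmatched tokens contribute less to the denominator, and partial matches obtain higher scores, as described above.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Parameters for Alignment Scoring",

"sec_num": "3.3"

},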
|
{ |
|
"text": "In this section we present an ablation study in which we automatically test different combinations of the parameters that were described in the previous section, in order to investigate how they influence the results. The ground truth required to carry out such an evaluation was derived from Molz (2019), who conducted a comprehensive study to identify references to Shakespeare in a corpus of postmodern fiction using a mixture of close and distant reading. We took a subsample of this work, which contains 73 quotes from one of Shakespeare's most popular plays: Hamlet. The rationale for taking only a subsample is that the ground truth cannot be considered 100% comprehensive, as was shown in related studies with the dataset (Bryan et al., 2020) . We kept the sample size small in order to simplify the task of recognizing new true positives identified by the Vectorian. The 73 quotes are distributed among 31 novels that have a total size of 4,2 million tokens. Sticking to the search engine metaphor introduced in Section 2, we will treat each of the 73 Hamlet quotes as a query that is searched for in the collection of novels. In the evaluation study, each query is assigned an unranked set of expected results (as documented in the ground truth) in the corpus of novels. Each result refers to one specific sentence in a novel. Some queries have multiple expected results, e.g. for \"There are more things in heaven and earth ...\" our ground truth records 8 occurences in different novels. However, most queries have only 1 or 2 matches in our ground 0.0 0.2 0.4 0.6 0.8 1.0 base of value of x k 10 \u22126 10 \u22124 10 \u22122 increase in mean Figure 3 : Scatter plot of absolute change in harmonic mean (blue) and arithmetic mean (orange) over random scores x 1 , ..., x n when updating a single input score x k by an improvement \u270f of 0.1. Y axis is logarithmic. The arithmetic mean always increases by a constant \u270f n regardless of x k 's value, so in an optimizer it encourages increasing low and high scores similarly. The harmonic mean on the other hand tends to weigh the same improvement in a low input (left) considerably higher than in a high input (right) and therefore encourages increasing low inputs. truth 8 . In total, our ground truth contains 149 result sentences for 73 unique queries. We measured the performance of each query by computing the Discounted Cumulative Gain (DCG) (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) against the unranked 9 ground truth for the first 50 results retrieved. DCG is a standard measure to rank the quality of a result set in information retrieval, where documents from higher ranks contribute more to the overall gain and documents at lower ranks contribute less, i.e. they are discounted.", |
|
"cite_spans": [ |
|
{ |
|
"start": 730, |
|
"end": 750, |
|
"text": "(Bryan et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 2390, |
|
"end": 2421, |
|
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1639, |
|
"end": 1647, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation design", |
|
"sec_num": "4" |
|
}, |
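
{

"text": "A minimal sketch of the nDCG@50 computation (we assume the standard log2 discount of J\u00e4rvelin and Kek\u00e4l\u00e4inen and binary relevance, since the ground truth is unranked):\n\nimport numpy as np\n\ndef dcg_at_k(relevances, k=50):\n    # relevances: 1 if the retrieved sentence is a ground-truth result, else 0\n    rel = np.asarray(relevances[:k], dtype=float)\n    return float(np.sum(rel / np.log2(np.arange(2, rel.size + 2))))\n\ndef ndcg_at_k(relevances, n_expected, k=50):\n    # normalize by the ideal DCG, i.e. all expected results ranked first\n    ideal = dcg_at_k([1] * n_expected, k)\n    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation design",

"sec_num": "4"

},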
|
{ |
|
"text": "Since a full grid search was not feasible for our parameter space, we ran an Optuna (Akiba et al., 2019) optimizer with the objective of maximizing a total performance score. This total performance score is computed as a mean over the DCGs of all queries. However, since some queries expect more results than others -and therefore the ideally obtainable DCGs for different queries vary -such a composite score only makes sense if all contributing DCGs have been scaled to a fixed range. Therefore we employed the commonly used formulation of Normalized DCGs (nDCGs) (Manning et al., 2008) to normalize the score for each query into the range between 0 and 1. To summarize these considerations: we used nDCG@50 for each query and then computed a mean to obtain a total performance score. We ran a first optimization with the objective of maximizing the commonly used arithmetic mean over all query nDCGs as the maximizing objective, and a second independent optimization with the objective of maximizing a harmonic mean of query nDCGs. The harmonic mean may seem like an unusual choice here, as its use in information retrieval is typically limited to the F-score (Manning et al., 2008) . The reason we use it in this scenario, is the distribution of query difficulty in our ground truth and the specific characteristics of the harmonic mean. We found a very strong negative skew due to a high number of rather simplistic queries in our data. There are about 50% of queries that relate to verbatim or nearverbatim quotes of text from Shakespeare, which means that these are rather easy to detect from a text reuse perspective. About 10% of the queries on the other hand rely on implicit knowledge and are probably not easily found by the Vectorian, which relies entirely on explicit language features. The remaining 40% of the queries are neither trivial nor out of reach for the Vectorian. We therefore put them into the category hard but feasible. Optimizing on the arithmetic mean carries the risk of micro-optimizing the bulk of easy queries to an nDCG of 100%, but finding no good nDCGs for the few but more interesting queries. In order to encourage good nDCGs for hard but feasible queries, the harmonic mean seems to be a reasonable alternative. As Figure 3 illustrates, it tends to improve by higher values when low inputs get increased.", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 104, |
|
"text": "(Akiba et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 566, |
|
"end": 588, |
|
"text": "(Manning et al., 2008)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1163, |
|
"end": 1185, |
|
"text": "(Manning et al., 2008)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2256, |
|
"end": 2264, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation design", |
|
"sec_num": "4" |
|
}, |
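
{

"text": "A sketch of the corresponding Optuna setup; the search space shown is abbreviated, and run_all_queries is an assumed placeholder for running the Vectorian over all 73 queries:\n\nimport optuna\nfrom statistics import harmonic_mean\n\ndef run_all_queries(params):\n    # placeholder: run the Vectorian with these parameters and return one\n    # nDCG@50 value per query\n    raise NotImplementedError\n\ndef objective(trial):\n    params = {\n        \"SPW\": trial.suggest_float(\"SPW\", 0.0, 1.0),\n        \"SIT\": trial.suggest_float(\"SIT\", 0.0, 1.0),\n        \"MLP\": trial.suggest_int(\"MLP\", 1, 10),\n        # ... the remaining parameters of Section 3 analogously\n    }\n    ndcgs = run_all_queries(params)\n    return harmonic_mean(ndcgs)  # or statistics.mean for the A variant\n\nstudy = optuna.create_study(direction=\"maximize\")  # TPE sampler by default\nstudy.optimize(objective, n_trials=1500)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation design",

"sec_num": "4"

},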
|
{ |
|
"text": "In our evaluation study, we ultimately ran optimizations for both types of means with 1,500 trials using a default configuration with tree-structured Parzen estimators (Akiba et al., 2019) . As a caveat, it must be noted that due to the small size of our ground truth sample, our evaluation did not have a dedicated validation set. Since the parameters in our system are few and quite restricted, however, we believe the risk of overfitting is rather low. We interpret our results as a simplistic model that represents deeper characteristics of the query-result relationships given in our ground truth.", |
|
"cite_spans": [ |
|
{ |
|
"start": 168, |
|
"end": 188, |
|
"text": "(Akiba et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation design", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The best configuration found with Optuna produced nDCGs of 77.6% and 75.2% for the arithmetic (A) and harmonic (H) mean optimization objectives respectively. Since the decision what constitutes a quotation in some cases cannot be made on the language level alone and thus is highly subjective (Molz, 2019) , we believe that these scores can be interpreted as a fairly good performance. The system also produced a small number of new true positive matches, which could be confirmed to be valid 10 . The specific parameter values for the best configurations are given in Table 1 , together with the parameter domains that were searched. Note that Optuna does not use an initial starting or seeding configuration. Unfortunately, due to the skewed nature of the query problem in our ground truth, the mean value of the nDCGs tells us only little about the performance of the two variants with respect to different types of queries. Figure 4 therefore gives a detailed histogram of the query nDCGs. As argued in the evaluation design, the distribution of the hard but feasible queries with low scores turned out differently indeed. Most notably for H, we see a salient group of (orange) queries scoring between 0.4 and 0.7, while the queries scoring at nDCG 0 and 1 both have been slightly diminished. In general, this is what we hoped for. The downside however is that the beneficial arithmetic (blue) peak between 0.7 and 0.8 is now gone, i.e. we have lost this score for some queries. Figure 5 shows the performance of both variants in terms of quantiles. Both variants operate optimally on the maximum 100% nDCG level for easy queries that are located at the quantiles above 0.5. Between the 0.15 and 0.5 quantiles however, the arithmetic mean variant performs better. On the other hand, the H variant does not show the dip below the 0.15 quantile, which seems to give it slightly better performance for some difficult queries. We now discuss the parameters' importance by performing an ablation study on each of them, starting with the harmonic variant (see Figure 6 ), which shows a surprising combination 11 . EMI and PMP have no effect at all -any value produces the same optimal results -and nearly the same is true for ESM. In other words, the whole embedding pipeline seems to be irrelevant for the search results. Furthermore, SPW, SIF, and IFS are all basically no-ops. The only three salient choices are a SIT above 0.8, a MLP of 1 -that shows an interesting option for extending it up to 4 -and a slightly elevated SBO value. In summary, large parts of the Vectorian engine have been turned off in this case in order to facilitate a specific kind of search, namely looking for alignments of tokens without using any POS or embedding information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 305, |
|
"text": "(Molz, 2019)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 569, |
|
"end": 576, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 928, |
|
"end": 936, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 1483, |
|
"end": 1491, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 2058, |
|
"end": 2066, |
|
"text": "Figure 6", |
|
"ref_id": "FIGREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Semantic POS Weighting (SPW) In contrast to the results for H, an ablation on the A variant shows more intertwined settings (see Figure 7) . We only see one no-op with SIF, all other parameters are meaningful. SIT, MLP and SBO are somewhat similar to the H variant. The embedding and POS parameters are quite different. SPW has its maximum benefit between 0.4 and 0.5, meaning it should neither be turned fully on nor off 12 . The plot for PMP suggests that a POS mismatch should always override the computed similarity from an embedding and count that token pair as not similar. For EMI, we observe that the best value is not 1, as inferred in the Optuna search, but 0.55. This seems to confirm our assumption that mixing very different types of embedding can be beneficial. Without mixing, wnet2vec at 1 outperforms fasttext at 0. For ESM, cosine performs slightly better than nicdm. Both measures perform considerably better than apsynp.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 138, |
|
"text": "Figure 7)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embedding Interpolation (EMI)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The overall results are rather counterintuitive: If our distinction into easy and hard queries is correct, then the result would mean that the retrieval of easy queries benefits from embeddings and syntactic markers, whereas the retrieval of hard queries does not. To shed some light on what is really happening here, we looked at Recall at K (Manning et al., 2008) for both cases (see Figure 8 ). As expected when optimizing for nDCG, A excels in bringing many correct results to the very front of the result list (see K < 3). H on the other hand indeed focuses on queries with low scores -especially results that are not or hardly found at all. Starting at roughly K = 20 this clearly shows: H starts to recall more correct results than A. Closer inspection of the results shows that many of these hard queries contain only one or two tokens that exhibit any form of semantic alignment to the sentences in the ground truth. In other words, large parts of the query are ignored by the alignment engine when trying to find a match 13 . At the same time, the few tokens that do match are usually verbatim words from Shakespeare's texts. This observation explains H's strategy to disable large parts of the Vectorian pipeline and basically build a verbatim single word matcher to cope with these queries.", |
|
"cite_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 365, |
|
"text": "(Manning et al., 2008)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 386, |
|
"end": 394, |
|
"text": "Figure 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Embedding Interpolation (EMI)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We have investigated the optimal configuration of various explicit parameters in a text reuse detection pipeline and showed that it is able to achieve nDCGs of roughly 80% on a rather difficult test set. While for some queries focusing on the interplay of word embeddings, POS tags and alignments is optimal, other queries seem to benefit from turning off these features. We have demonstrated, how these choices are generated naturally by maximizing arithmetic and harmonic means of nDCG scores. Our analysis uncovered important ideas of what makes queries hard for our current architecture and indicates the need for a ground truth that is classified and balanced in terms of difficulty. As Molz (2019) described considerably more Shakespeare references in his study, we plan to enhance the ground truth accordingly and classify the quotes according to different categories (e.g. verbatim quote, semantic paraphrase, changed word order, etc.). We hypothesize that different types of quotes will result in different optimal parameter combinations. This will be investigated in more detail in a follow-up study, where we will also look into other types of embeddings and explore single parameters in more detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "https://github.com/poke1024/vectorian/tree/v4 3 For a similar approach seeManjavacas et al. (2019).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/poke1024/simileco 5 https://universaldependencies.org/u/pos/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These were computed by running https://github.com/nlx-group/WordNetEmbeddings on 58,492 unique words from 66 novels that were part of the search corpus. As not all of these words were present in WordNet, the resulting embedding covers only 27,718 words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For simplicity, we give the unweighted case, though our implementation includes POS weighting here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The exact distribution is 44 : 1 (i.e. 44 queries with one result) , 13 : 2, 5 : 3, 4 : 4, 2 : 5, 2 : 6, 2 : 8, 1 : 10. 9 I.e. a full score is obtained if the specified ground truth results are retrieved first, regardless of their internal order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "After marking these as correct, the nDCGs changed to 81.2% (for A)) and 78.9% (for H).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that the green areas indicate those parameter values that produced the best reproduction of the given ground truth.12 The plot suggests that applying the weights from Batanovi\u0107 and Boji\u0107 (2015) fully -through a parameter value of 1would harm the search performance considerably.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A similar observation is true for any human expert, who, however, would have the advantage of knowing the quote's broader context from neighboring sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carmen", |
|
"middle": [], |
|
"last": "Banea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Gonzalez-Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "German", |
|
"middle": [], |
|
"last": "Rigau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "497--511", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 Task 1: Semantic Textual Similarity, Monolingual and Cross-Lingual Evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497-511, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Optuna: A Nextgeneration Hyperparameter Optimization Framework", |
|
"authors": [ |
|
{ |
|
"first": "Takuya", |
|
"middle": [], |
|
"last": "Akiba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shotaro", |
|
"middle": [], |
|
"last": "Sano", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Toshihiko", |
|
"middle": [], |
|
"last": "Yanase", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takeru", |
|
"middle": [], |
|
"last": "Ohta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masanori", |
|
"middle": [], |
|
"last": "Koyama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2623--2631", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A Next- generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623-2631, Anchorage AK USA, July. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The logic and discovery of textual allusion", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Bamman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Crane", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 LREC Workshop on Language Technology for Cultural Heritage Data", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Bamman and Gregory Crane. 2008. The logic and discovery of textual allusion. In In Proceedings of the 2008 LREC Workshop on Language Technology for Cultural Heritage Data.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Text reuse detection using a composition of text similarity measures", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "B\u00e4r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "The COLING 2012 Organizing Committee", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "167--184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel B\u00e4r, Torsten Zesch, and Iryna Gurevych. 2012. Text reuse detection using a composition of text similarity measures. In Proceedings of COLING 2012, pages 167-184, Mumbai, India, December. The COLING 2012 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Using Part-of-Speech Tags as Deep Syntax Indicators in Determining Short Text Semantic Similarity", |
|
"authors": [ |
|
{ |
|
"first": "Vuk", |
|
"middle": [], |
|
"last": "Batanovi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragan", |
|
"middle": [], |
|
"last": "Boji\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computer Science and Information Systems", |
|
"volume": "12", |
|
"issue": "1", |
|
"pages": "1--31", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vuk Batanovi\u0107 and Dragan Boji\u0107. 2015. Using Part-of-Speech Tags as Deep Syntax Indicators in Determining Short Text Semantic Similarity. Computer Science and Information Systems, 12(1):1-31, January.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A computational expedition into the undiscovered country -evaluating neural networks for the identification of hamlet text reuse", |
|
"authors": [ |
|
{ |
|
"first": "Maximilian", |
|
"middle": [], |
|
"last": "Bryan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Burghardt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Molz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the 1st Workshop on Computational Humanities Research (CHR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maximilian Bryan, Manuel Burghardt, and Johannes Molz. 2020. A computational expedition into the undis- covered country -evaluating neural networks for the identification of hamlet text reuse. Proceedings of the 1st Workshop on Computational Humanities Research (CHR).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Measuring the influence of a work by text re-use", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "B\u00fcchler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Annette", |
|
"middle": [], |
|
"last": "Ge\u00dfner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monica", |
|
"middle": [], |
|
"last": "Berti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Eckart", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Bulletin of the Institute of Classical Studies. Supplement", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "63--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco B\u00fcchler, Annette Ge\u00dfner, Monica Berti, and Thomas Eckart. 2013. Measuring the influence of a work by text re-use. Bulletin of the Institute of Classical Studies. Supplement, pages 63-79.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The Bard meets the Doctor\" -Computergest\u00fctzte Identifikation intertextueller Shakespearebez\u00fcge in der Science Fiction-Serie Dr", |
|
"authors": [ |
|
{ |
|
"first": "Manuel", |
|
"middle": [], |
|
"last": "Burghardt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Selina", |
|
"middle": [], |
|
"last": "Meyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephanie", |
|
"middle": [], |
|
"last": "Schmidtbauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Molz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manuel Burghardt, Selina Meyer, Stephanie Schmidtbauer, and Johannes Molz. 2019. \"The Bard meets the Doctor\" -Computergest\u00fctzte Identifikation intertextueller Shakespearebez\u00fcge in der Science Fiction-Serie Dr. Who. Book of Abstracts, DHd.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The Tesserae Project: intertextual analysis of Latin poetry", |
|
"authors": [ |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Coffee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-Pierre", |
|
"middle": [], |
|
"last": "Koenig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shakthi", |
|
"middle": [], |
|
"last": "Poornima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Forstall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roelant", |
|
"middle": [], |
|
"last": "Ossewaarde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Jacobson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Literary and Linguistic Computing", |
|
"volume": "28", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neil Coffee, Jean-Pierre Koenig, Shakthi Poornima, Christopher Forstall, Roelant Ossewaarde, and Sarah Jacob- son. 2012a. The Tesserae Project: intertextual analysis of Latin poetry. Literary and Linguistic Computing, 28(2):221-228, 07.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Intertextuality in the digital age", |
|
"authors": [ |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Coffee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-Pierre", |
|
"middle": [], |
|
"last": "Koenig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shakthi", |
|
"middle": [], |
|
"last": "Poornima", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roelant", |
|
"middle": [], |
|
"last": "Ossewaarde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Forstall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Jacobson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "Transactions of the American Philological Association", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "383--422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Neil Coffee, Jean-Pierre Koenig, Shakthi Poornima, Roelant Ossewaarde, Christopher Forstall, and Sarah Jacob- son. 2012b. Intertextuality in the digital age. Transactions of the American Philological Association (1974-), pages 383-422.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Modeling the scholars: Detecting intertextuality through enhanced word-level n-gram matching", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Forstall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Coffee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Buck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Roache", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sarah", |
|
"middle": [], |
|
"last": "Jacobson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Digital Scholarship in the Humanities", |
|
"volume": "30", |
|
"issue": "4", |
|
"pages": "503--515", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Forstall, Neil Coffee, Thomas Buck, Katherine Roache, and Sarah Jacobson. 2015. Modeling the scholars: Detecting intertextuality through enhanced word-level n-gram matching. Digital Scholarship in the Humanities, 30(4):503-515.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The digital breadcrumb trail of brothers grimm", |
|
"authors": [ |
|
{ |
|
"first": "Greta", |
|
"middle": [], |
|
"last": "Franzini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Franzini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabriela", |
|
"middle": [], |
|
"last": "Rotari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franziska", |
|
"middle": [], |
|
"last": "Pannach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mahdi", |
|
"middle": [], |
|
"last": "Solhdoust", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "B\u00fcchler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greta Franzini, Emily Franzini, Gabriela Rotari, Franziska Pannach, Mahdi Solhdoust, and Marco B\u00fcchler. 2017. The digital breadcrumb trail of brothers grimm. Poster at the DATECH conference, G\u00f6ttingen.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Automatic detection of reuses and citations in literary texts", |
|
"authors": [ |
|
{ |
|
"first": "Jean-Gabriel", |
|
"middle": [], |
|
"last": "Ganascia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peirre", |
|
"middle": [], |
|
"last": "Glaudes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [ |
|
"Del" |
|
], |
|
"last": "Lungo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Literary and Linguistic Computing", |
|
"volume": "29", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jean-Gabriel Ganascia, Peirre Glaudes, and Andrea Del Lungo. 2014. Automatic detection of reuses and citations in literary texts. Literary and Linguistic Computing, 29(3):412-421, 06.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Shakespeare After All", |
|
"authors": [ |
|
{ |
|
"first": "Marjorie", |
|
"middle": [], |
|
"last": "Garber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marjorie Garber. 2005. Shakespeare After All. Anchor Books.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Palimpseste. Die Literatur auf zweiter Stufe. Suhrkamp", |
|
"authors": [ |
|
{ |
|
"first": "G\u00e9rard", |
|
"middle": [], |
|
"last": "Genette", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G\u00e9rard Genette. 1993. Palimpseste. Die Literatur auf zweiter Stufe. Suhrkamp.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Hamilton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jure", |
|
"middle": [], |
|
"last": "Leskovec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1489--1501", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1489-1501, Berlin, Germany. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Look thee, I speak play scraps': Digitally Mapping Intertextuality in Early Modern Drama", |
|
"authors": [ |
|
{ |
|
"first": "Regula", |
|
"middle": [], |
|
"last": "Hohl-Trillini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Regula Hohl-Trillini. 2019. 'Look thee, I speak play scraps': Digitally Mapping Intertextuality in Early Modern Drama. Oxford University, Bodleian and Folger Libraries, July.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Cumulated gain-based evaluation of IR techniques", |
|
"authors": [ |
|
{ |
|
"first": "Kalervo", |
|
"middle": [], |
|
"last": "J\u00e4rvelin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaana", |
|
"middle": [], |
|
"last": "Kek\u00e4l\u00e4inen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACM Transactions on Information Systems", |
|
"volume": "20", |
|
"issue": "4", |
|
"pages": "422--446", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transac- tions on Information Systems, 20(4):422-446, October.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Accurate Image Search Using the Contextual Dissimilarity Measure", |
|
"authors": [ |
|
{ |
|
"first": "Herve", |
|
"middle": [], |
|
"last": "Jegou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cordelia", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hedi", |
|
"middle": [], |
|
"last": "Harzallah", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Verbeek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTEL-LIGENCE", |
|
"volume": "32", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Herve Jegou, Cordelia Schmid, Hedi Harzallah, and Jakob Verbeek. 2010. Accurate Image Search Using the Contextual Dissimilarity Measure. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTEL- LIGENCE, 32(1):10.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "A Tool for Literary Studies: Intertextual Distance and Tree Classification. Literary and Linguistic Computing", |
|
"authors": [ |
|
{ |
|
"first": "Cyril", |
|
"middle": [], |
|
"last": "Labb\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dominique", |
|
"middle": [], |
|
"last": "Labb\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "21", |
|
"issue": "", |
|
"pages": "311--326", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cyril Labb\u00e9 and Dominique Labb\u00e9. 2005. A Tool for Literary Studies: Intertextual Distance and Tree Classifica- tion. Literary and Linguistic Computing, 21(3):311-326, 10.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Mining of Massive Datasets", |
|
"authors": [ |
|
{ |
|
"first": "Jurij", |
|
"middle": [], |
|
"last": "Leskovec", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anand", |
|
"middle": [], |
|
"last": "Rajaraman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Ullman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jurij Leskovec, Anand Rajaraman, and Jeffrey D. Ullman. 2020. Mining of Massive Datasets. Cambridge Uni- versity Press, New York, NY, third edition edition.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "On the Feasibility of Automated Detection of Allusive Text Reuse", |
|
"authors": [ |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Manjavacas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Kestemont", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1905.02973" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Enrique Manjavacas, Brian Long, and Mike Kestemont. 2019. On the Feasibility of Automated Detection of Allusive Text Reuse. arXiv:1905.02973 [cs], May.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Introduction to Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prabhakar", |
|
"middle": [], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, USA.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Shakespeare and Quotation", |
|
"authors": [ |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Maxwell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Rumbold", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julie Maxwell and Kate Rumbold. 2018. Shakespeare and Quotation. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Advances in Pre-Training Distributed Word Representations", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Puhrsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in Pre-Training Distributed Word Representations. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018).", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Interpretable Machine Learning. A Guide for Making Black Box Models Explainable", |
|
"authors": [ |
|
{ |
|
"first": "Christoph", |
|
"middle": [], |
|
"last": "Molnar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christoph Molnar. 2020. Interpretable Machine Learning. A Guide for Making Black Box Models Explainable. https://christophm.github.io/interpretable-ml-book/.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A close and distant reading of Shakespearean intertextuality", |
|
"authors": [ |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Molz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Johannes Molz. 2019. A close and distant reading of Shakespearean intertextuality. Ludwig-Maximilians- Universit\u00e4t M\u00fcnchen, Juli.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "WordNet Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Chakaveh", |
|
"middle": [], |
|
"last": "Saedi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ant\u00f3nio", |
|
"middle": [], |
|
"last": "Branco", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo\u00e3o Ant\u00f3nio", |
|
"middle": [], |
|
"last": "Rodrigues", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jo\u00e3o", |
|
"middle": [], |
|
"last": "Silva", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of The Third Workshop on Representation Learning for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "122--131", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chakaveh Saedi, Ant\u00f3nio Branco, Jo\u00e3o Ant\u00f3nio Rodrigues, and Jo\u00e3o Silva. 2018. WordNet Embeddings. In Proceedings of The Third Workshop on Representation Learning for NLP, pages 122-131, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "A Rank-Based Similarity Metric for Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Enrico", |
|
"middle": [], |
|
"last": "Santus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongmin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emmanuele", |
|
"middle": [], |
|
"last": "Chersoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1805.01923" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Enrico Santus, Hongmin Wang, Emmanuele Chersoni, and Yue Zhang. 2018. A Rank-Based Similarity Metric for Word Embeddings. arXiv:1805.01923 [cs], May.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "The sense of a connection: Automatic tracing of intertextuality by meaning", |
|
"authors": [ |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Scheirer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Forstall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Neil", |
|
"middle": [], |
|
"last": "Coffee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Digital Scholarship in the Humanities", |
|
"volume": "31", |
|
"issue": "1", |
|
"pages": "204--217", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Walter Scheirer, Christopher Forstall, and Neil Coffee. 2014. The sense of a connection: Automatic tracing of intertextuality by meaning. Digital Scholarship in the Humanities, 31(1):204-217, 10.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "The Penn Treebank: An Overview", |
|
"authors": [ |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Treebanks", |
|
"volume": "20", |
|
"issue": "", |
|
"pages": "5--22", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ann Taylor, Mitchell Marcus, and Beatrice Santorini. 2003. The Penn Treebank: An Overview. In Nancy Ide, Jean V\u00e9ronis, and Anne Abeill\u00e9, editors, Treebanks, volume 20, pages 5-22. Springer Netherlands, Dordrecht.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Some biological sequence metrics", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Waterman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Beyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1976, |
|
"venue": "Advances in Mathematics", |
|
"volume": "20", |
|
"issue": "3", |
|
"pages": "367--387", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M.S Waterman, T.F Smith, and W.A Beyer. 1976. Some biological sequence metrics. Advances in Mathematics, 20(3):367-387, June.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Simplified overview of overall architecture.", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Steps and parameters involved in computing the similarity of two tokens. Details of the SIM FULL module fromFigure 1are shown on the right side. The SIM CORE sub module is shown on the left side. Solid lines show data flow, dotted lines show lookups.", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Histogram of nDCGs for variants A and H. Y axis is logarithmic.", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Quantiles of nDCGs for variants A and H.", |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Ablations for various parameters of best configuration found through harmonic means of nDCGs. The x axis shows parameter values, the y axis shows achieved harmonic mean nDCG@50. Different plots expose different y ranges. Maximum values obtained per parameter are shaded green.", |
|
"uris": null |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"type_str": "figure", |
|
"text": "Ablations for various parameters of best configuration found through arithmetic mean of nD-CGs. The x axis shows parameter values, the y axis shows achieved arithmetic mean nDCG@50. Different plots expose different y ranges. Maximum values obtained per parameter are shaded green. Recall@K for A and H variants. The y range differs on the left and on the right.", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>Parameter</td><td>Distribution</td><td>Domain</td><td>A</td><td>H</td></tr><tr><td>Exclude Determiners</td><td>categorical</td><td>false, true</td><td>false</td><td>true</td></tr><tr><td colspan=\"5\">Embedding Interpolation Embedding Similarity Measure categorical cosine, nicdm, apsynp cosine nicdm uniform 1 .0 0.41 0 \uf8ff x \uf8ff 1</td></tr><tr><td>Inverse Frequency Scaling Similarity Falloff Semantic POS Weighting POS Mismatch Penality Similarity Threshold Mismatch Length Penalty Submatch Boosting</td><td>uniform uniform uniform uniform uniform int uniform</td><td>0 \uf8ff x \uf8ff 1 0 \uf8ff x \uf8ff 1 0 \uf8ff x \uf8ff 1 0 \uf8ff x \uf8ff 1 0 \uf8ff x \uf8ff 1 0 \uf8ff x \uf8ff 10 0 \uf8ff x \uf8ff 5</td><td>0 .0 0 .93 0 .46 0 .77 0 .73 1 0 .24</td><td>0.96 0.39 0.09 0.43 0.83 1 0.14</td></tr></table>", |
|
"num": null, |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Investigated parameter domains (first three columns) and best configurations found through Optuna search for arithmetic mean and harmonic mean (last two columns). Numerical values are rounded to nearest multiple of 0.01." |
|
} |
|
} |
|
} |
|
} |