|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T11:52:28.460000Z" |
|
}, |
|
"title": "Deduplication of Scholarly Documents using Locality Sensitive Hashing and Word Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Bikash", |
|
"middle": [], |
|
"last": "Gyawali", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Anastasiou", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Petr", |
|
"middle": [], |
|
"last": "Knoth", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Deduplication is the task of identifying near and exact duplicate data items in a collection. In this paper, we present a novel method for deduplication of scholarly documents. We develop a hybrid model which uses structural similarity (locality sensitive hashing) and meaning representation (word embeddings) of document texts to determine (near) duplicates. Our collection constitutes a subset of multidisciplinary scholarly documents aggregated from research repositories. We identify several issues causing data inaccuracies in such collections and motivate the need for deduplication. In lack of existing dataset suitable for study of deduplication of scholarly documents, we create a ground truth dataset of 100K scholarly documents and conduct a series of experiments to empirically establish optimal values for the parameters of our deduplication method. Experimental evaluation shows that our method achieves a macro F1-score of 0.90. We productionise our method as a publicly accessible web API service serving deduplication of scholarly documents in real time.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Deduplication is the task of identifying near and exact duplicate data items in a collection. In this paper, we present a novel method for deduplication of scholarly documents. We develop a hybrid model which uses structural similarity (locality sensitive hashing) and meaning representation (word embeddings) of document texts to determine (near) duplicates. Our collection constitutes a subset of multidisciplinary scholarly documents aggregated from research repositories. We identify several issues causing data inaccuracies in such collections and motivate the need for deduplication. In lack of existing dataset suitable for study of deduplication of scholarly documents, we create a ground truth dataset of 100K scholarly documents and conduct a series of experiments to empirically establish optimal values for the parameters of our deduplication method. Experimental evaluation shows that our method achieves a macro F1-score of 0.90. We productionise our method as a publicly accessible web API service serving deduplication of scholarly documents in real time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Publishing research findings as scholarly documents (publications) has always been the mainstream model for disseminating scientific research. To this end, authors publish their research outputs as scholarly documents and deposit them to one or more publishing platforms, repositories, of their choice. Such choices include institutional repositories, personal web pages, preprint services, academic publishers' platform and so on. Authors choosing to submit their article to multiple repositories have different motivations to do so, such as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 Different versions of author's manuscript become suitable for submission to different repositories. Examples include, preprint repositories for submitting manuscripts that are yet to be peer reviewed, authors' personal web pages for hosting open access versions of their publications, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 Institutional as well as national policies mandate authors submit their research outputs to their own institution's repository.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 For publications with multiple authorship, each author may decide to submit it to one or more repositories of their own choice/institution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 By submitting the same research work to multiple different repositories, a wider audience can be reached (for example, documents from open access repositories are available to everyone, while the publisher might put the document behind a paywall); research can also be disseminated sooner thereby increasing accelerating the scholarly communication process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "While authors may submit exact duplicate copies of their research output to multiple repositories, they might also introduce slight variations (near duplicates) while submit-ting to different repositories (Klein et al., 2016) . Throughout the remainder of this text, we will use the generic term duplicates to refer to both exact and near duplicates. For example, authors might submit revised versions (preprint, author's copy, camera-ready) of the same research article to different repositories over time and different formats of document submissions (e.g. pdf, L A T E X) are also prevalent. Repositories usually rely on authors manually entering metadata information for their articles during submission. This gives way to introducing errors, omissions and typos in document metadata; creating documents with corrupt or missing metadata across multiple repositories. Such duplicates are of major concern to applications which target processing of scholarly documents aggregated from multiple repositories. Table 1 shows some example duplicates that can arise when aggregating documents from multiple repositories. Example A represents documents that are exact duplicates of each other but are, nevertheless, present in multiple repositories. Example B represents duplicates resulting from document formatting error while examples C and D represent duplicates arising from revisions, paraphrasing or updates to documents while they are submitted to different repositories. For text/data mining applications, these are redundant and/or inconsistent data which can skew data distribution and lead to an imbalanced dataset (Ko\u0142cz et al., 2003) . The task of identifying duplicates in a data collection is known as deduplication. In this paper, we present a novel deduplication method for scholarly documents. Many existing work on deduplication (e.g. (Chaudhuri et al., 2003) , (Jiang et al., 2014) ) describe methods for detecting duplicate entities (person, organization etc.) organised into databases/graphs and rely on direct matching of one or more attribute-value pairs (metadata) making up such data items. Other work (e.g. (Forman et al., 2005) , (Bogdanova et al., 2015) ) have discussed content based approaches for identifying duplicate documents and use similarity of tex-Possibly different paraphrasing of the title for the exactly same abstract; the duplicates can only be identified when comparing \"Abstract\" rather than \"Title\". Archivio della ricerca -Universit\u00e0 degli studi di Napoli Federico II Title = Vectorized simulations of normal processes for first-crossing-time problems Abstract = Motivated by a typical and ... first passage time probability densities. Table 1 : Examples of duplicates in documents aggregated from multiple repositories tual content of documents for the task. In particular, using document hash values for deduplication has been shown to be effective for deduplication of documents in specific collections (e.g. web corpus (Manku et al., 2007) , clinical notes (Shenoy et al., 2017) ) but similar study for deduplication of scholarly documents has not been reported so far. It is important that this study be carried out because i) scholarly collections have a number of issues related to data inaccuracies (see Section 2.) 
and therefore matching of attribute values cannot be reliably used ii) for scholarly documents, the only available content may often be short abstract text only (due to copyright issues) and iii) scholarly text is often technical in nature and has complex linguistic structure compared to general purpose text on the web. In this paper, we address this research gap and propose a hybrid method which takes into account different models of document content similarity for determining duplicates of scholarly documents. Namely, we build on top of matching structural similarity (using locality sensitive hash values) and meaning representations (using word embeddings) of documents' content for identifying duplicates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 225, |
|
"text": "(Klein et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1623, |
|
"end": 1643, |
|
"text": "(Ko\u0142cz et al., 2003)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1851, |
|
"end": 1875, |
|
"text": "(Chaudhuri et al., 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1878, |
|
"end": 1898, |
|
"text": "(Jiang et al., 2014)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 2131, |
|
"end": 2152, |
|
"text": "(Forman et al., 2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 2155, |
|
"end": 2179, |
|
"text": "(Bogdanova et al., 2015)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 2969, |
|
"end": 2989, |
|
"text": "(Manku et al., 2007)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 3007, |
|
"end": 3028, |
|
"text": "(Shenoy et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1010, |
|
"end": 1017, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 2682, |
|
"end": 2689, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We first construct a ground truth dataset labeling duplicates/non-duplicates in a collection of 100K scholarly documents aggregated from multiple different repositories and across scholarly disciplines. Next, we define separate deduplication methods based on different document similarity measures (locality sensitive hashing vs. word embeddings) and analyse their performance. Finally, we build a hybrid method which builds upon individual methods and empirically establish the best values for its parameters by conducting a series of experiments. We show that this method performs competitively towards correctly identifying duplicates (and non-duplicates) -a macro F1 score of 0.90 and an accuracy of 90.30% is obtained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We expose our deduplication system as a web API implemented over a much larger collection (over 130 million scientific documents) of research outputs aggregated from multiple repositories world-wide. By enabling open access to this collection and exposing the deduplication API, we create a man/machine interface to the deduplication service which identifies duplicate documents that exist across repositories for a given scientific document at hand. Our novel/main contributions are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 We propose and evaluate different content based deduplication methods (locality sensitive hashing vs. word embeddings) and study their effectiveness in the context of deduplication of scholarly documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 We design a new hybrid method for deduplication which builds upon the strength of individual methods and improves the performance of scholarly documents' deduplication.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 We construct a ground truth dataset of scholarly documents for deduplication purposes and make it publicly available. There are no existing datasets of this nature which we are aware of.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "\u2022 To our knowledge, we produce the first open API for finding duplicates of scientific documents in real time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In the context of scholarly documents, a reasonable approach to deduplication would seem to be matching document identifiers, especially the DOI, or other metadata information, such as the title or author names, associated with such documents. Repositories usually expose such information and make them available in a structured format e.g. xml suitable for automatic processing. However, deduplication approaches based on direct matching of such attributes would be far from ideal because they can't be reliably used for deduplication of scholarly documents across repositories. The Digital Object Identifier (DOI) is a common scheme used by publishers to give documents a unique identity. However, many documents with unassigned DOI exist such as those in preprint repositories. Likewise, repositories can expose erroneous DOIs to the documents they contain, for example, by using the generic DOI of a journal to all the articles within the journal. In a collection of scholarly documents we considered (Section 4.1.), more than 82% of documents did not have a DOI and we observed that the most frequent DOIs in the collection were generic DOIs (e.g.: 10.4028/www.scientific.net, 10.1093/mnras). We identified the most frequent 1, 000 DOIs in our source collection with their frequencies of occurrence -ranging from 65 to 45, 184. Also, it is not clear if near duplicates will have the same DOI at all, especially when they are submitted across different repositories. Similar problems appear with other metadata information such as document titles. For open access articles, OAI identifier is used as unique identifier of documents but it doesn't allow for detection of duplicates. We analysed the most frequent 8, 500 document titles in our source collection and observed that many had incorrectly assigned titles and with multiple occurrences (ranging from 96 to 549, 702) 1 . To summarise, deduplication methods based on complete matching of one more key-value attributes from document metadata are prone to generate large number of false positives and they can only identify exact duplicates at best. In this work, we instead focus on using document similarity measures that are capable of identifying both near and exact duplicates. Further, as detailed in Section 4., our methods will benefit from text processing of document content (both abstract and full text of documents) rather than simple matching of key-value attributes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Statement", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Deduplication is commonly carried out as three main subtasks: i) indexing ii) comparison and iii) classification. Indexing is the task of identifying key attributes of documents so that documents sharing common value for those attributes can be arranged into subgroups of their own, also called blocks. The comparison step benefits from such groupings since the lookup for duplicates for a given document can be restricted to comparing it with other documents within the same block only. For a given document, the comparison step assigns scores to other documents within its block representing their degree of match. Finally, the classification subtask defines a threshold above which documents having scores are predicted to be duplicates of the input document. For our deduplication task, we designate abstracts of scholarly documents as our key attributes. There are three main reasons that motivate this choice. First, abstracts are an integral part of any scientific document and summarize the central idea being described in the document. Second, abstracts are extensively available (in comparison to full text of documents, for example, which are often limited by copyright issues) and this greatly helps to reduce problem cases with null or missing values. Third, they are easily accessible because repositories usually expose structured bibliographic information e.g. an xml record of scholarly documents including their abstracts. As, we shall observe in Section 5., using document abstracts suffices for achieving a good performance deduplication system. In Section 4., we describe the details of our deduplication method that integrates two separate methods of identifying duplicates. Each of the individual methods use separate models for the comparison subtask -bitwise matching of hash values and cosine vector similarity of document abstracts, respectively. In the first method, comparison assigns scores ranging from 0 to 64 (the maximum possible bitwise difference in a 64-bit hashing scheme) while cosine similarity value ranges from \u22121 (completely different) to 1 (exactly same) for the second method. We establish their duplicates classification threshold values empirically (Section 5.2.).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Deduplication Overview", |
|
"sec_num": "3." |
|
}, |
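
{

"text": "A minimal sketch, in Python, of the indexing/comparison/classification pipeline described above; it is an illustration only, and the blocking key, scoring function and classification threshold shown here are hypothetical placeholders rather than the configuration used in the paper.

from collections import defaultdict

def index_documents(docs, blocking_key):
    # Indexing: group documents that share the same blocking key into blocks.
    blocks = defaultdict(list)
    for doc in docs:
        blocks[blocking_key(doc)].append(doc)
    return blocks

def deduplicate(docs, blocking_key, score, is_duplicate):
    # Comparison and classification: score candidate pairs within each block
    # and keep the pairs the classifier accepts as duplicates.
    duplicates = []
    for block in index_documents(docs, blocking_key).values():
        for i in range(len(block)):
            for j in range(i + 1, len(block)):
                s = score(block[i], block[j])
                if is_duplicate(s):
                    duplicates.append((block[i]['id'], block[j]['id'], s))
    return duplicates

# Hypothetical usage: block on a prefix of the abstract, score by abstract equality,
# classify with a trivial threshold.
docs = [
    {'id': 1, 'abstract': 'motivated by a typical problem we study first passage times'},
    {'id': 2, 'abstract': 'motivated by a typical problem we study first passage times'},
]
print(deduplicate(docs,
                  blocking_key=lambda d: d['abstract'][:10],
                  score=lambda a, b: float(a['abstract'] == b['abstract']),
                  is_duplicate=lambda s: s >= 1.0))
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Deduplication Overview",

"sec_num": "3."

},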
|
{ |
|
"text": "To the best of our knowledge, there are no existing datasets of scholarly documents fit for the purpose of deduplication experiments. We, therefore, build one ourselves and this involves two main tasks. First is the task of obtaining a collection of scholarly documents present across multiple repositories. The second task is then to label each document in this collection with information of duplicates present for each of them within the collection. On completing these tasks, we obtain a labeled dataset of duplicates in a collection of scholarly documents suitable for our deduplication experiments. For the first task, we use CORE (Knoth and Zdrahal, 2012) , the world's largest aggregator of openly accessible scientific documents. At the time of writing this manuscript, CORE consists of more than 177 million of scientific documents aggregated from over 9, 867 repositories around the world. Owing to the issues we highlighted in Section 1., we posit that it contains a significant number of duplicates. We extract 1, 687, 044 document records from the CORE such that each document has a title (more than 20 characters long), abstract (more than 500 characters long), full text (more than 5000 characters long) and a DOI conforming to standard regex pattern (Gilmartin, 2015) . This helps us in getting started with a collection that is free of missing or null values, unusually short text or incorrect DOIs. Further tasks are needed to improve the quality of our dataset. We convert the title, abstract and full text of documents to lowercase and replace multiple spaces by one. In full text and abstracts, we strip out formatting characters (e.g. newline, tab, space), URLs (using regex pattern), punctuation characters, digits and stop words. In this collection of 1, 687, 044 document records, we identify the most frequent 1, 500 sentences (string of text followed by a dot character and a space) and words (token delimited by space) occurring in their full text. Manual analysis shows that the most frequent sentences and words are boilerplate text 2 . Subsequently, we remove any occurrence of these text from all the document abstracts. The full text of documents are no longer needed for our purposes and are dropped. Next, we clean our dataset to filter out records with possibly incorrect or malformed DOIs and titles. To address the issue of generic DOIs being assigned to documents, we define a simple heuristic that a DOI (which is defined by prefix/suffix structure) for a document must have a suffix which is not just a sequence of alphanumeric characters only. This helps to filter out documents with DOIs such as 10.1093/bioinformatics (which refers to a journal) but preserve others like 10.1088/0953-8984/21/17/175601. We further remove all those documents in our dataset whose DOI belongs to the list of 1, 000 most frequent DOIs and/or whose title belongs to the list of 8, 500 most frequent titles identified in the CORE's total collection (previously discussed in Section 2.). The resulting dataset amounts to 1, 525, 199 document records.", |
|
"cite_spans": [ |
|
{ |
|
"start": 637, |
|
"end": 662, |
|
"text": "(Knoth and Zdrahal, 2012)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1267, |
|
"end": 1284, |
|
"text": "(Gilmartin, 2015)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Labeled Dataset Creation", |
|
"sec_num": "4.1." |
|
}, |
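
{

"text": "A minimal sketch of the text cleaning and DOI-suffix heuristic described above; the regular expressions and the stop word list used in the paper are not reproduced here, so the patterns and the stop word set below are illustrative assumptions only.

import re

STOP_WORDS = {'the', 'a', 'an', 'of', 'and', 'in', 'to'}  # illustrative subset, not the real list

def clean_text(text):
    # Lowercase, drop URLs, strip punctuation and digits, collapse whitespace, remove stop words.
    text = text.lower()
    text = re.sub(r'https?://\S+', ' ', text)
    text = re.sub(r'[^a-z\s]', ' ', text)
    return ' '.join(t for t in text.split() if t not in STOP_WORDS)

def has_specific_doi(doi):
    # Heuristic from the text: keep a DOI only if its suffix is not just a run of
    # alphanumeric characters (this drops generic, journal-level DOIs).
    prefix, _, suffix = doi.partition('/')
    return bool(suffix) and re.fullmatch(r'[a-zA-Z0-9]+', suffix) is None

print(has_specific_doi('10.1093/bioinformatics'))          # False: generic journal DOI
print(has_specific_doi('10.1088/0953-8984/21/17/175601'))  # True: article-level DOI
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Labeled Dataset Creation",

"sec_num": "4.1."

},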
|
{ |
|
"text": "Based on the DOIs, we then proceed to bucket document records into groups such that each group contains all the documents which have the same DOI. A singleton group identifies a non-duplicate document while a group x with n elements; n > 1 indicates a duplicate group x with n documents in it having the same DOI. We observe 1, 320, 551 non-duplicate groups and 204, 648 duplicate groups. From the non-duplicate groups, we randomly select 50, 000 groups (amounting to an equal number of document records). From the duplicate groups, on one hand, we randomly select 11, 473 buckets (amounting to 25, 000 document records) which meet the criteria that within a bucket all its document records contain exactly matching titles and exactly matching abstracts. On the other hand, we randomly pick 10, 448 buckets (amounting to 25, 000 document records) such that each bucket contains document records whose titles are not all the same and their abstracts differ as well. The former 25K records is our approximation of exact duplicates, the latter 25K for near duplicates and the first 50K for non-duplicates; thereby creating a duplicates/non-duplicates balanced dataset of 100K document records -our Ground Truth dataset. We release this dataset as a publicly accessible download from https: //core.ac.uk/documentation/dataset/ . Table 2 shows the schema of our ground truth dataset. For a given document in the dataset, its duplicates are all other documents contained in the same group as that of the input document. The group size of a group is defined as the number of documents present within that group. The rightmost column in Table 2 represents the duplicates (list of CORE IDs) identified for a given document (CORE ID) on the left.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1325, |
|
"end": 1332, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1629, |
|
"end": 1636, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Labeled Dataset Creation", |
|
"sec_num": "4.1." |
|
}, |
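
{

"text": "A small sketch of the DOI-based bucketing used to derive duplicate and non-duplicate groups; the record field names and example DOIs are hypothetical, and the subsequent sampling of exact and near duplicate buckets is omitted.

from collections import defaultdict

def group_by_doi(records):
    # Bucket records so that each group holds all records sharing the same DOI.
    groups = defaultdict(list)
    for rec in records:
        groups[rec['doi']].append(rec['core_id'])
    return groups

records = [
    {'core_id': 11, 'doi': '10.1088/0953-8984/21/17/175601'},
    {'core_id': 12, 'doi': '10.1088/0953-8984/21/17/175601'},
    {'core_id': 13, 'doi': '10.1234/abcd.5678'},
]
groups = group_by_doi(records)
duplicate_groups = {doi: ids for doi, ids in groups.items() if len(ids) > 1}
non_duplicates = [ids[0] for ids in groups.values() if len(ids) == 1]
print(len(duplicate_groups), non_duplicates)  # 1 [13]
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Labeled Dataset Creation",

"sec_num": "4.1."

},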
|
{ |
|
"text": "In the resulting collection of duplicates groups, Figure 1 reveals the frequencies of their sizes. Namely, we have 18, 083 different duplicate groups each having 2 duplicates, 2, 540 groups each with 3 duplicates and so on. The duplicate group sizes occurring in the ground truth dataset range from 2 to 14, meaning that duplicate groups are formed by identifying at least 2 documents that are duplicates of each other and in some cases, we observe as many as 14 documents that are duplicates to each other.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 58, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Labeled Dataset Creation", |
|
"sec_num": "4.1." |
|
}, |
|
{ |
|
"text": "Having carefully built the ground truth dataset, we start by establishing a baseline which uses exact title matching method. For an input document x, we retrieve all other documents y in the collection with title string matching exactly that of x. The comparison step assigns a score of 1 to matching documents and 0 otherwise. The set of all y identified for the document x constitutes its duplicates, i.e. the classification is based on the criteria that duplicate documents have a comparison score of 1. This method, however, has a number of limitations. At best, it can only identify duplicates which exactly match on their titles. Ideally, we would like to have a solution that i) matches exact duplicates ii) is robust to account for near duplicates and iii) provides parameters that can be tuned for specifying desired level of variability in documents' text in order for them to be considered near duplicates. This draws our attention to two distinct methods based on content similarity -the simhash matching method and the document vectors similarity method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline -Exact Title Matching", |
|
"sec_num": "4.2." |
|
}, |
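
{

"text": "A minimal sketch of the exact title matching baseline; normalising titles by lowercasing and collapsing whitespace before comparison is an assumption, not something specified in the text.

from collections import defaultdict

def title_duplicates(docs):
    # Baseline: two documents are duplicates iff their (normalised) titles match exactly.
    by_title = defaultdict(list)
    for doc in docs:
        by_title[' '.join(doc['title'].lower().split())].append(doc['id'])
    # For each document, its predicted duplicates are the other ids sharing its title.
    return {i: [j for j in ids if j != i] for ids in by_title.values() for i in ids}

docs = [{'id': 1, 'title': 'On Deduplication'},
        {'id': 2, 'title': 'on  deduplication'},
        {'id': 3, 'title': 'A Different Paper'}]
print(title_duplicates(docs))  # {1: [2], 2: [1], 3: []}
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Baseline -Exact Title Matching",

"sec_num": "4.2."

},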
|
{ |
|
"text": "Hash algorithms are functions that map data of arbitrary size (e.g. abstract text of a scholarly document) to data of fixed size (n-bit string). Hashing algorithms have been widely used for document deduplication, as in (Forman et al., 2005) , (Hoad and Zobel, 2003) , (Bernstein and Zobel, 2004) etc. This is desirable because it allows to implement a uniform approach to comparing variable length documents -documents can instead be compared based on their fixed length hash values. A common choice of a hashing algorithm used for deduplication is the simhash function with n = 64 bit encoding scheme. Once the hash values are obtained, documents can be compared based on hamming distance between their hash values. For a given document pair, hamming distance of 0 indicates that the documents are exact duplicates of each other while higher values represent increasing degree of dissimilarity between them. Notably, simhash belongs to the class of locality sensitive hashing functions which have the characteristic property that similar documents, i.e. near duplicates, produce similar hash values, i.e. have low hamming distances. The choice of a particular value of hamming distance is specific to the deduplication task at hand and forms the basis of categorising all documents within that hamming distance as near duplicates for a given input document. It is evident that simhash fulfills all the requirements we just discussed for our deduplication task. Furthermore, simhash Figure 1 : Frequencies of duplicate groups on ground dataset and as predicted by our different methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 220, |
|
"end": 241, |
|
"text": "(Forman et al., 2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 266, |
|
"text": "(Hoad and Zobel, 2003)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 296, |
|
"text": "(Bernstein and Zobel, 2004)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1484, |
|
"end": 1492, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Simhash Matching", |
|
"sec_num": "4.3." |
|
}, |
|
{ |
|
"text": "based deduplication has been shown to be a scalable solution for deduplication on document collection of significantly large size (70 million in case of (Sood and Loguinov, 2011) and 8 billion for (Manku et al., 2007) ). Naturally, all these factors form the basis of us choosing a simhash based method for finding duplicates. In our implementation, we first map the documents in our collection to their hash values by applying the simhash function to their respective abstracts. This can be obtained as part of pre-processing our document collection and we use an open source implementation of simhash algorithm for the task 3 . Next, the documents are compared based on their hash values. The comparison score assigned to a document is its hamming distance with the input document. Given a threshold value (hamming distance) \u03b1, any documents (x,y) can be inferred to be duplicates of each other if they have a comparison score \u03ba such that \u03ba \u2264 \u03b1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 178, |
|
"text": "(Sood and Loguinov, 2011)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 217, |
|
"text": "(Manku et al., 2007)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simhash Matching", |
|
"sec_num": "4.3." |
|
}, |
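
{

"text": "The paper relies on the simhash-py library; the self-contained sketch below re-implements the same idea (a 64-bit simhash over word tokens plus a hamming-distance test) so that it runs without that dependency, and the threshold value is a placeholder rather than the empirically tuned one.

import hashlib

def simhash64(tokens):
    # Classic simhash: every token votes on each of the 64 bit positions with the sign
    # of the corresponding bit of its own 64-bit hash; the fingerprint keeps the majority sign.
    weights = [0] * 64
    for tok in tokens:
        h = int(hashlib.md5(tok.encode('utf-8')).hexdigest(), 16) & ((1 << 64) - 1)
        for bit in range(64):
            weights[bit] += 1 if (h >> bit) & 1 else -1
    return sum(1 << bit for bit in range(64) if weights[bit] > 0)

def hamming(a, b):
    # Number of differing bit positions between two 64-bit fingerprints.
    return bin(a ^ b).count('1')

x = simhash64('we present a novel method for deduplication of scholarly documents'.split())
y = simhash64('we present a new method for deduplication of scholarly documents'.split())
alpha = 10  # placeholder threshold; the paper establishes this value empirically (Section 5.2.)
print(hamming(x, y), hamming(x, y) <= alpha)
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Simhash Matching",

"sec_num": "4.3."

},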
|
{ |
|
"text": "A widely used approach to text processing is using low dimensional vectors (also known as word embeddings) to represent the meanings of words. The Word2vec algorithm (Mikolov et al., 2013) popularized this approach as a process of training a neural network model to obtain vectors for words based on their distribution in a large text corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 188, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Vectors Similarity", |
|
"sec_num": "4.4." |
|
}, |
|
{ |
|
"text": "Since then, many deep learning models have been proposed for the task. BERT (Devlin et al., 2018 ) is a recently proposed deep learning model for building language representations and it has been shown to produce state-of-the-art results when used for a number of text processing tasks. The final layer of the BERT model outputs n-dimensional 4 vector for each (sub)words in input sentence and these are dense vectors based on the context (i.e. the input sentence). Many pre-trained BERT models are openly available for end user tasks. In this work, we use a pre-trained BERT model released by (Guo et al., 2019 ) (BERT BASE model trained on BooksCorpus (Zhu et al., 2015) and English Wikipedia text) and use an open source library 5 to obtain word vectors for document text and apply it for our deduplication purposes. Specifically, for each document in our ground truth dataset, we split its abstract text into a list of 3 https://github.com/seomoz/simhash-py 4 768 (BASE model) or 1024 (LARGE model) 5 https://github.com/imgarylai/ bert-embedding sentences 6 . We feed the sentences, in turn, to the BERT model and retrieve vectors for each (sub)words as identified by BERT. BERT uses WordPiece tokenization (Wu et al., 2016) to identify (sub)words in the input text and this makes it robust for predicting vectors for out-of-vocabulary words which may occur in our input. We compute a single vector (i.e. a document vector, d x ) representing a document x as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 96, |
|
"text": "(Devlin et al., 2018", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 611, |
|
"text": "(Guo et al., 2019", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 654, |
|
"end": 672, |
|
"text": "(Zhu et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1211, |
|
"end": 1228, |
|
"text": "(Wu et al., 2016)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Vectors Similarity", |
|
"sec_num": "4.4." |
|
}, |
|
{ |
|
"text": "d x = m\u2208Sentences(x) 1 |m| * n\u2208W ords(m) \u2212 \u2212\u2212\u2212\u2212\u2212\u2212\u2212\u2212 \u2192 BERT (n, m) |n| where \u2212 \u2212\u2212\u2212\u2212\u2212\u2212\u2212\u2212 \u2192 BERT (n, m)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Vectors Similarity", |
|
"sec_num": "4.4." |
|
}, |
|
{ |
|
"text": "is the vector identified by BERT model for word n in sentence m. We compute document vectors for each of the documents in our ground truth dataset and use that as a basis of determining similarity of the documents. For any document pair (y,z), the comparison score \u03ba is the cosine similarity value of their document vectors( d y , d z ) and the documents are considered to be duplicates of each other if \u03ba \u2265 \u03b2 for some classification threshold \u03b2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Document Vectors Similarity", |
|
"sec_num": "4.4." |
|
}, |
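
{

"text": "A condensed sketch of the document vector computation and the cosine similarity test described above; it assumes a helper bert_word_vectors(sentence) returning one vector per word (the paper obtains these from the bert-embedding library, whereas the stand-in below returns random vectors purely so the sketch runs), and the |n| factor of the formula is approximated as 1.

import re
import numpy as np

def bert_word_vectors(sentence):
    # Stand-in for the BERT embedding lookup; returns one random 768-dimensional
    # vector per word instead of real contextual embeddings.
    rng = np.random.default_rng(len(sentence))
    return [rng.standard_normal(768) for _ in sentence.split()]

def document_vector(abstract):
    # d_x = sum over sentences m of (1/|m|) * sum over words n in m of BERT(n, m),
    # with sentences obtained by splitting on full stops and question marks (footnote 6).
    d = np.zeros(768)
    for m in (s for s in re.split(r'[.?]', abstract) if s.strip()):
        vectors = bert_word_vectors(m)
        if vectors:
            d += np.mean(vectors, axis=0)
    return d

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

beta = 0.95  # placeholder classification threshold, tuned empirically in the paper
d_y = document_vector('We study deduplication of scholarly documents. Results are promising.')
d_z = document_vector('We study deduplication of scholarly articles. Results look promising.')
print(cosine(d_y, d_z) >= beta)
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Document Vectors Similarity",

"sec_num": "4.4."

},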
|
{ |
|
"text": "The simhash matching method and the document vector similarity method inherently work on a different level of textual representation. The former method treats document abstracts at a structural level -looking for overlap of words or characters in surface representation of text. The document vector method, on the other hand, approaches deduplication from the perspective of meaning similarity. We therefore propose a hybrid method to deduplication which makes use of both these methods. Our motivation here is to understand whether these methods complement each other in building a better deduplication system. Given different thresholds for simhash similarity (\u03b1 1, \u03b1 2 . . . . . . \u03b1 m) and document vector similarity (\u03b2 1, \u03b2 2 . . . . . . \u03b2 n) methods, the hybrid method generates prediction for duplicates as outlined in Algorithm 1. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hybrid Method", |
|
"sec_num": "4.5." |
|
}, |
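
{

"text": "Algorithm 1 itself is not reproduced in this text, so the sketch below only illustrates one plausible combination rule for a single document pair, namely accepting the pair when either the simhash test or the document vector test fires; the actual rule is the one given in Algorithm 1 of the paper, and the threshold values are placeholders.

def hybrid_is_duplicate(hamming_distance, cosine_similarity, alpha, beta):
    # Assumed combination: predict duplicate if the simhash test (hamming distance <= alpha)
    # or the document vector test (cosine similarity >= beta) accepts the pair.
    return hamming_distance <= alpha or cosine_similarity >= beta

# alpha and beta are drawn from the grids (alpha_1 ... alpha_m) and (beta_1 ... beta_n)
# explored in Section 5.2.; the values below are placeholders.
print(hybrid_is_duplicate(hamming_distance=12, cosine_similarity=0.96, alpha=7, beta=0.95))  # True, via the cosine test
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Hybrid Method",

"sec_num": "4.5."

},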
|
{ |
|
"text": "To evaluate our methods, we use the standard metrics of precision and recall for both duplicate and non-duplicate classes. In addition, we report the macro F1 average and accuracy values to reflect overall performance. As we observed in Table 2 , for any given document (say d), there can be a set (say X d ) consisting of zero or more documents in the ground truth dataset labelled as its duplicates. Likewise, for the same input document d, predictions from our methods can result in a set (say Y d ) of documents as duplicates. Under each of our experiments, we can identify a prediction Y d to belong to one of the following categories.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 244, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
|
{ |
|
"text": "\u2022 a true positive (TP) if", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
|
{ |
|
"text": "X d \u2282 Y d and X d = \u03c6 and Y d = \u03c6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
|
{ |
|
"text": "\u2022 a false positive (FP) if Y d = \u03c6, and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
|
{ |
|
"text": "(X d \u2282 Y d or X d = \u03c6)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
|
{ |
|
"text": "\u2022 a true negative (TN) if", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
|
{ |
|
"text": "X d = Y d = \u03c6. \u2022 a false negative (FN) if Y d = \u03c6 but X d = \u03c6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
|
{ |
|
"text": "The confusion matrix looks like the one shown in Table 3 . By conducting our experiments over all the documents present in the ground truth dataset and noting their predictions, we can compute the count of true positives, false positives, true negatives and false negatives. Based on the confusion matrix, we evaluate the outcome of an experiment using the standard metrics of precision, recall, accuracy and macro-F1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 56, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
|
{ |
|
"text": "X d = \u03c6 X d = \u03c6 Y d = \u03c6 (X d \u2282 Y d ) =\u21d2 TP (X d \u2282 Y d ) =\u21d2 FP FP Y d = \u03c6 FN TN", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "5.1." |
|
}, |
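
{

"text": "A sketch of how the confusion counts and the reported metrics follow from the definitions above; X[d] and Y[d] are the labelled and predicted duplicate sets for document d, and reading the macro F1 as the unweighted mean of the F1 scores of the duplicate and non-duplicate classes is our interpretation of the metric rather than something spelled out in the text.

def confusion_counts(X, Y):
    # X[d]: labelled duplicates of d; Y[d]: predicted duplicates of d (both sets of ids).
    tp = fp = tn = fn = 0
    for d in X:
        labelled, predicted = X[d], Y.get(d, set())
        if predicted:
            if labelled and labelled <= predicted:
                tp += 1  # the prediction contains every labelled duplicate
            else:
                fp += 1  # spurious or incomplete prediction
        elif labelled:
            fn += 1
        else:
            tn += 1
    return tp, fp, tn, fn

def macro_f1(tp, fp, tn, fn):
    def f1(p, r):
        return 2 * p * r / (p + r) if p + r else 0.0
    f1_dup = f1(tp / (tp + fp) if tp + fp else 0.0, tp / (tp + fn) if tp + fn else 0.0)
    f1_non = f1(tn / (tn + fn) if tn + fn else 0.0, tn / (tn + fp) if tn + fp else 0.0)
    return (f1_dup + f1_non) / 2

X = {1: {2}, 2: {1}, 3: set()}
Y = {1: {2, 4}, 2: set(), 3: set()}
print(confusion_counts(X, Y))  # (1, 0, 1, 1)
print(round(macro_f1(*confusion_counts(X, Y)), 2))
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Metrics",

"sec_num": "5.1."

},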
|
{ |
|
"text": "The baseline method is non-parametric and does not have multiple classification thresholds to define. Therefore a single iteration of this method on the ground dataset is sufficient to understand its performances. The simhash matching method is defined by a number of parameters. These include:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "\u2022 Content Unit: Simhash is based on the principle of building representation of total content by composing representations obtained for smaller units of data that make the content. Typically, text can be represented as sequence of characters or words and we explore the possibility of using both these content units in implementing our simhash method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "\u2022 Shingles Size: The hash values obtained by simhash are characterized by the span of content units that involve in making a single unit of representation. Often referred to as 'shingles', we can specify the value for it's size to define what number number of content units (words/characters) in sequence should be considered as a single token while building up the document representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "\u2022 Hamming Distance: Hamming distance is the number of positions at which bits of two hash values differ. By specifying different values of hamming distance, we can define different thresholds for the deduplication classification subtask. Document pairs under consideration are considered to be duplicates if their hamming distance does not exceed the threshold value.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.2." |
|
}, |
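
{

"text": "A small sketch of the two content units and the shingle size parameter referred to above: shingles are overlapping runs of k consecutive words or characters that are treated as single tokens when the simhash fingerprint is built.

def shingle(text, k, unit='word'):
    # Produce overlapping k-shingles of words or characters from the given text.
    items = text.split() if unit == 'word' else list(text)
    joiner = ' ' if unit == 'word' else ''
    if len(items) <= k:
        return [joiner.join(items)]
    return [joiner.join(items[i:i + k]) for i in range(len(items) - k + 1)]

print(shingle('deduplication of scholarly documents', 2, unit='word'))
# ['deduplication of', 'of scholarly', 'scholarly documents']
print(shingle('dedup', 3, unit='char'))
# ['ded', 'edu', 'dup']
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "5.2."

},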
|
{ |
|
"text": "We experimented with 338 different configurations of this method and obtained different evaluation scores. Figure 2a and 2b show the different macro F1 scores obtained for different choices of shingle size and hamming distance when using words and characters as content unit, respectively. For the document vector similarity method, a number of different prediction scores can be obtained by using different threshold values for classification. Figure 3 shows the macro F1 scores obtained on using different choices of cosine similarity values as the threshold value for duplicate classification. In total, we experimented with 19 different threshold values. The hybrid method benefits from multiple combination possibilities of parameter values of the simhash and document vectors similarity methods. In total, we evaluate the hybrid method on 6, 422 unique configuration of parameter values resulting from such combinations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 116, |
|
"text": "Figure 2a", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 445, |
|
"end": 453, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.2." |
|
}, |
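
{

"text": "A sketch of how such a parameter sweep can be organised; the value ranges below are placeholders rather than the exact grids behind the 338 simhash and 6,422 hybrid configurations reported above, and evaluate_config is a stand-in for running one configuration of the hybrid method over the ground truth dataset and returning its macro F1 score.

from itertools import product

content_units = ['word', 'char']
shingle_sizes = range(1, 6)                                         # placeholder range
hamming_thresholds = range(1, 16)                                   # placeholder range
cosine_thresholds = [round(0.80 + 0.01 * i, 2) for i in range(19)]  # placeholder grid

def evaluate_config(config):
    # Stand-in: would run the hybrid deduplication method with this configuration
    # over the ground truth dataset and return the resulting macro F1 score.
    return 0.0

best = max(product(content_units, shingle_sizes, hamming_thresholds, cosine_thresholds),
           key=evaluate_config)
print(best)
",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "5.2."

},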
|
{ |
|
"text": "In Table 4 , we list the best scoring configuration of the parameters for each of our methods and Figure 1 shows the group size frequencies of the duplicates they predict. Overall, we see that the hybrid method has the best scoring macro F1 score and has a similar distribution of group size frequencies as observed for the ground truth dataset. This indicates that the hybrid method is the best for predicting both duplicates and non duplicates and is, therefore, the deduplication model of our choice. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
}, |
|
{ |
|
"start": 98, |
|
"end": 106, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.2." |
|
}, |
|
{ |
|
"text": "Looking into the evaluation scores in Table 4 , the simhash matching method by itself does not seem to perform any better than the baseline method. However, for reasons discussed earlier, exact matching approaches would lead to a large number of false positives in a real world scenario; especially with matching titles. Our ground truth dataset was carefully curated to avoid erroneous titles and therefore the evaluation scores can be expected to be in favor of the baseline method. What our experiment demonstrates ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 45, |
|
"text": "Table 4", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results & Discussion", |
|
"sec_num": "6." |
|
}, |
|
{ |
|
"text": "Figure 3: Macro F1 scores obtained for different Document vector similarity approach instead is that simhash matching method can be a good starting ground for identifying duplicates with variations in their content. Further, we note that the document vector similarity method is the main contributor towards obtaining significant performance gains. This likely signifies that duplicates of scholarly documents are not simply variations in character/string positions but are rather semantically related paraphrases of content. Allowing for both the structural variation and meaning representation of text, we observe that the hybrid model achieves the best performance score. Apart from the perspective of gaining better evaluation scores, there are also pragmatic reasons to adopt a hybrid method. This is evident in a real world deduplication scenario since we host our deduplication service as an openly accessible web API at https://core.ac.uk/docs/ #!/articles/nearDuplicateArticles . We notice that the users would like to (optionally) obtain duplicates based on one or more pieces of additional information they may have (e.g. author names + title + year of publication) rather than specifically looking for duplicates based on similarity of abstract text only. In such cases, we take a step further and integrate (in much the same manner as done for the hybrid method) the results obtained by exact matching of user supplied attributes with the results obtained from our hybrid method for serving API responses. In Figure 1 , we see a long-tailed distribution of duplicate group sizes with very low frequencies in the ground truth dataset. On manual examination, we notice that some of these low-frequency groups are formed because of incorrectly assigned DOIs. Despite taking great care in filtering documents with erroneous DOIs, we are not able to automatically filter out all such DOIs in our ground truth dataset. The incorrect DOIs can lead to fewer number of duplicates identified for a group during the ground truth dataset creation. This can result in our methods (which are based on comparing similarities of abstract text) predicting higher number of duplicates for an input document than those identified for it in the ground truth dataset. For this reason, we considered a prediction to be true positive (in Section 5.1.) if it contained all the elements of the labelled set and not necessarily both these being equal. The incorrect DOIs and/or other erroneous metadata information do, in fact, propagate from source repositories where they are originally hosted and there remains very little at our end to try and resolve these issues. In this work, we only considered documents with English text. Many different factors motivated this choice; mainly the ubiquitous support for English language text processing; availability of open source libraries and pretrained word embedding models on large corpus of English text. Many other pre-trained word embeddings (e.g. (Bojanowski et al., 2016) , (Beltagy et al., 2019) ) are also available apart from the one we used in this work. Further experiments will be needed to study the performance of our method under these settings.", |
|
"cite_spans": [ |
|
{ |
|
"start": 2986, |
|
"end": 3011, |
|
"text": "(Bojanowski et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 3014, |
|
"end": 3036, |
|
"text": "(Beltagy et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1522, |
|
"end": 1530, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Cosine similarity", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A number of previous studies have presented deduplication in the context of a variety of practical applications. Examples include deduplication for detecting plagiarised content (Hoad and Zobel, 2003) , (Bernstein and Zobel, 2004) , improving quality of web search (Manku et al., 2007) , (Su et al., 2010) , (Syed Mudhasir et al., 2011) , finding similar files in document repositories (Manber, 1994) , (Forman et al., 2005) , measuring source code similarity of software systems (Yamamoto et al., 2005) . Broadly speaking, existing work on deduplication can be classified into two main categories based on the approaches they adopt. In the first category of work, we see deduplication approach based on matching of values of attributes that make up the data items. This approach is fairly common with deduplication of records present in structured content systems such as databases (e.g. (Chaudhuri et al., 2003) ). The second category of approaches are based on comparing semantic similarity of document contents. For example, (Forman et al., 2005) , (Manber, 1994) , (Shenoy et al., 2017) use different hashing functions (MD5 hash, minhash etc.) over document text to obtain document hash values. Likewise, (Bogdanova et al., 2015) , (Zhang et al., 2017) use Word2vec (Mikolov et al., 2013) embeddings to represent questions posted in online user forums and use that for identifying semantically related question pairs. Training machine learning models (Su et al., 2010) and more recently, deep learning (Mudgal et al., 2018) have also been proposed in this regard. In comparison, our work uses i) simhash function; a locality sensitive hashing function introduced by (Charikar, 2002) ii) word embeddings coming from the BERT model (Devlin et al., 2018) and iii) builds upon the power of pre-trained language representation model instead of training a neural network specific to the purpose.", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 200, |
|
"text": "(Hoad and Zobel, 2003)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 230, |
|
"text": "(Bernstein and Zobel, 2004)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 285, |
|
"text": "(Manku et al., 2007)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 305, |
|
"text": "(Su et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 308, |
|
"end": 336, |
|
"text": "(Syed Mudhasir et al., 2011)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 400, |
|
"text": "(Manber, 1994)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 424, |
|
"text": "(Forman et al., 2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 480, |
|
"end": 503, |
|
"text": "(Yamamoto et al., 2005)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 889, |
|
"end": 913, |
|
"text": "(Chaudhuri et al., 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1029, |
|
"end": 1050, |
|
"text": "(Forman et al., 2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1053, |
|
"end": 1067, |
|
"text": "(Manber, 1994)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 1070, |
|
"end": 1091, |
|
"text": "(Shenoy et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1210, |
|
"end": 1234, |
|
"text": "(Bogdanova et al., 2015)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1237, |
|
"end": 1257, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1271, |
|
"end": 1293, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1456, |
|
"end": 1473, |
|
"text": "(Su et al., 2010)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1507, |
|
"end": 1528, |
|
"text": "(Mudgal et al., 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1671, |
|
"end": 1687, |
|
"text": "(Charikar, 2002)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "More related to our study are works focusing on deduplication of scholarly data. (Jiang et al., 2014 ) define a multistep rule-based method for deduplication of bibliographic metadata records (BibTeX records) of biomedical scholarly documents. They use exact matching on attribute-value pairs (e.g. DOI, repository specific identifier such as the PubMed ID number, author names) of the records; (Qi et al., 2013) also put manual effort to correctly identify duplicates on such databases. (Canalle et al., 2017) define several metrics (repetition, distinctiveness, density etc.) to study the importance of different attributes of bibliographic datasets when used for deduplication task. A recent work (Atzori et al., 2018 ) studies deduplication of entities related to scholarly publication (e.g. datasets, organizations, research funders) as present in big scholarly communication graphs such as the OpenAIRE scholarly communication graph (https://api.openaire.eu/). In terms of content based approaches to scholarly document deduplication, (Labb\u00e9 and Labb\u00e9, 2013 ) study forgeries of research outputs published in a few conferences and use inter-textual distance as a measure of document similarity. The authors define their own measure of inter-textual distance based on word frequencies but it is not clear how it would compare to other highly successful methods which have been reported for deduplication of documents outside the scholarly domain text. For example, locality sensitive hashing method has been successfully used for deduplication of web corpus (Manku et al., 2007) , technical documentations (Forman et al., 2005) and clinical notes (Shenoy et al., 2017) . Similarly, word vectors have been used for deduplication of related question pairs (Bogdanova et al., 2015) . In our work, we pursue the study of deduplication of scholarly documents. Like (Labb\u00e9 and Labb\u00e9, 2013) , we follow the content based approach to deduplication but build upon the strength of both locality sensitive hashing and word embeddings methods. These methods were studied in isolation for specific data collections in the past but our work shows that both these methods produce results which complement each other and therefore, a hybrid method should be used for obtaining the best performing model for deduplication of scholarly documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 100, |
|
"text": "(Jiang et al., 2014", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 395, |
|
"end": 412, |
|
"text": "(Qi et al., 2013)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 510, |
|
"text": "(Canalle et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 700, |
|
"end": 720, |
|
"text": "(Atzori et al., 2018", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1041, |
|
"end": 1063, |
|
"text": "(Labb\u00e9 and Labb\u00e9, 2013", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1563, |
|
"end": 1583, |
|
"text": "(Manku et al., 2007)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1611, |
|
"end": 1632, |
|
"text": "(Forman et al., 2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 1652, |
|
"end": 1673, |
|
"text": "(Shenoy et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1759, |
|
"end": 1783, |
|
"text": "(Bogdanova et al., 2015)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1865, |
|
"end": 1888, |
|
"text": "(Labb\u00e9 and Labb\u00e9, 2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "7." |
|
}, |
|
{ |
|
"text": "We produced a labelled dataset of 100K scholarly documents suitable for deduplication studies and proposed a novel method to deduplication of scholarly documents -a hybrid method using simhash and document vectors similarity. With an extensive set of experiments, we established the optimal values for the parameters of the hybrid method; achieving a macro F1-score of 0.90 and an accuracy of 90.30%. This is well above the performance obtained from a baseline system and over the individual methods making up the hybrid method. As a practical outcome of our research, we deploy our deduplication service as a publicly accessible web API and publicly release our dataset to the global audience.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8." |
|
}, |
|
{ |
|
"text": "The frequent titles as well as DOIs are provided as separate files in the dataset we release.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Also included as separate files in the dataset we release.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use a simple regular expression that splits on full stop and question marks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Gdup: De-duplication of scholarly communication big graphs", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Atzori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Manghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bardi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "5th IEEE/ACM International Conference on Big Data Computing Applications and Technologies, BDCAT 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Atzori, C., Manghi, P., and Bardi, A. (2018). Gdup: De-duplication of scholarly communication big graphs. In 5th IEEE/ACM International Conference on Big Data Computing Applications and Technologies, BDCAT 2018, Zurich, Switzerland, December 17-20, 2018, pages 142-151.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Scibert: Pretrained contextualized embeddings for scientific text", |
|
"authors": [ |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Beltagy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Cohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beltagy, I., Cohan, A., and Lo, K. (2019). Scibert: Pre- trained contextualized embeddings for scientific text.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A scalable system for identifying co-derivative documents", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Bernstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zobel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "String Processing and Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernstein, Y. and Zobel, J. (2004). A scalable system for identifying co-derivative documents. In Alberto Apos- tolico et al., editors, String Processing and Informa- tion Retrieval, pages 55-67, Berlin, Heidelberg. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Detecting semantically equivalent questions in online user forums", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Bogdanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Santos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Barbosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Zadrozny", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "123--131", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bogdanova, D., dos Santos, C., Barbosa, L., and Zadrozny, B. (2015). Detecting semantically equivalent questions in online user forums. In Proceedings of the Nineteenth Conference on Computational Natural Language Learn- ing, pages 123-131.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1607.04606" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2016). Enriching word vectors with subword informa- tion. arXiv preprint arXiv:1607.04606.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A strategy for selecting relevant attributes for entity resolution in data integration systems", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Canalle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "L\u00f3scio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Salgado", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 19th International Conference on Enterprise Information Systems", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "80--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Canalle, G. K., L\u00f3scio, B. F., and Salgado, A. C. (2017). A strategy for selecting relevant attributes for entity res- olution in data integration systems. In Proceedings of the 19th International Conference on Enterprise Infor- mation Systems -Volume 1: ICEIS,, pages 80-88. IN- STICC, SciTePress.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Similarity estimation techniques from rounding algorithms", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Charikar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Thiryfourth Annual ACM Symposium on Theory of Computing, STOC '02", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "380--388", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charikar, M. S. (2002). Similarity estimation techniques from rounding algorithms. In Proceedings of the Thiry- fourth Annual ACM Symposium on Theory of Comput- ing, STOC '02, pages 380-388, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Robust and efficient fuzzy match for online data cleaning", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chaudhuri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Ganjam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Ganti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Motwani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 ACM SIGMOD international conference on Management of data", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "313--324", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chaudhuri, S., Ganjam, K., Ganti, V., and Motwani, R. (2003). Robust and efficient fuzzy match for online data cleaning. In Proceedings of the 2003 ACM SIGMOD in- ternational conference on Management of data, pages 313-324. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-W", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Finding similar files in large document repositories", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Forman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Eshghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chiocchetti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, KDD '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "394--400", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Forman, G., Eshghi, K., and Chiocchetti, S. (2005). Find- ing similar files in large document repositories. In Pro- ceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, KDD '05, pages 394-400, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Dois and matching regular expressions", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gilmartin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gilmartin, A. (2015). Dois and matching regular ex- pressions. https://www.crossref.org/blog/ dois-and-matching-regular-expressions/.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lausen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guo, J., He, H., He, T., Lausen, L., Li, M., Lin, H., Shi, X., Wang, C., Xie, J., Zha, S., Zhang, A., Zhang, H., Zhang, Z., Zhang, Z., and Zheng, S. (2019). Gluoncv and gluonnlp: Deep learning in computer vision and nat- ural language processing.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Methods for identifying versioned and plagiarized documents", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Hoad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zobel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "J. Am. Soc. Inf. Sci. Technol", |
|
"volume": "54", |
|
"issue": "3", |
|
"pages": "203--215", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hoad, T. C. and Zobel, J. (2003). Methods for identifying versioned and plagiarized documents. J. Am. Soc. Inf. Sci. Technol., 54(3):203-215, February.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Rule-based deduplication of article records from bibliographic databases", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Smalheiser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Database", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jiang, Y., Lin, C., Meng, W., Yu, C., Cohen, A. M., and Smalheiser, N. R. (2014). Rule-based deduplication of article records from bibliographic databases. Database, 2014:bat086.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Comparing published scientific journal articles to their pre-print versions", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Broadwell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Farb", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Grappone", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, JCDL '16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "153--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klein, M., Broadwell, P., Farb, S. E., and Grappone, T. (2016). Comparing published scientific journal articles to their pre-print versions. In Proceedings of the 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, JCDL '16, pages 153-162, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Core: three access levels to underpin open access. D-Lib Magazine", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Knoth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Zdrahal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Knoth, P. and Zdrahal, Z. (2012). Core: three access levels to underpin open access. D-Lib Magazine, 18(11/12).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Data duplication: An imbalance problem?", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ko\u0142cz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Chowdhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alspector", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ko\u0142cz, A., Chowdhury, A., and Alspector, J. (2003). Data duplication: An imbalance problem?", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Duplicate and fake publications in the scientific literature: how many scigen papers in computer science?", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Labb\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Labb\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Scientometrics", |
|
"volume": "94", |
|
"issue": "1", |
|
"pages": "379--396", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Labb\u00e9, C. and Labb\u00e9, D. (2013). Duplicate and fake pub- lications in the scientific literature: how many scigen pa- pers in computer science? Scientometrics, 94(1):379- 396.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Finding similar files in a large file system", |
|
"authors": [ |
|
{ |
|
"first": "U", |
|
"middle": [], |
|
"last": "Manber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "USENIX WINTER 1994 TECHNICAL CONFER-ENCE", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--10", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manber, U. (1994). Finding similar files in a large file sys- tem. In USENIX WINTER 1994 TECHNICAL CONFER- ENCE, pages 1-10.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Detecting near-duplicates for web crawling", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Manku", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Jain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Das", |
|
"middle": [], |
|
"last": "Sarma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 16th International Conference on World Wide Web, WWW '07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "141--150", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manku, G. S., Jain, A., and Das Sarma, A. (2007). Detect- ing near-duplicates for web crawling. In Proceedings of the 16th International Conference on World Wide Web, WWW '07, pages 141-150, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111- 3119.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Deep learning for entity matching: A design space exploration", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mudgal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Rekatsinas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Doan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Krishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Deep", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Arcaute", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Raghavendra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 International Conference on Management of Data, SIGMOD '18", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mudgal, S., Li, H., Rekatsinas, T., Doan, A., Park, Y., Kr- ishnan, G., Deep, R., Arcaute, E., and Raghavendra, V. (2018). Deep learning for entity matching: A design space exploration. In Proceedings of the 2018 Interna- tional Conference on Management of Data, SIGMOD '18, pages 19-34, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Find duplicates among the pubmed, embase, and cochrane library databases in systematic review", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fan", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "PLoS One", |
|
"volume": "8", |
|
"issue": "8", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qi, X., Yang, M., Ren, W., Jia, J., Wang, J., Han, G., and Fan, D. (2013). Find duplicates among the pubmed, em- base, and cochrane library databases in systematic re- view. PLoS One, 8(8):e71838.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Deduplication in a massive clinical note dataset", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Shenoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T.-T", |
|
"middle": [], |
|
"last": "Kuo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Gabriel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Mcauley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C.-N", |
|
"middle": [], |
|
"last": "Hsu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1704.05617" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shenoy, S., Kuo, T.-T., Gabriel, R., McAuley, J., and Hsu, C.-N. (2017). Deduplication in a massive clinical note dataset. arXiv preprint arXiv:1704.05617.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Probabilistic nearduplicate detection using simhash", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Loguinov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 20th ACM International Conference on Information and Knowledge Management, CIKM '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1117--1126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sood, S. and Loguinov, D. (2011). Probabilistic near- duplicate detection using simhash. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, CIKM '11, pages 1117-1126, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Record matching over query results from multiple web databases", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lochovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "IEEE transactions on Knowledge and Data Engineering", |
|
"volume": "22", |
|
"issue": "4", |
|
"pages": "578--589", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su, W., Wang, J., and Lochovsky, F. H. (2010). Record matching over query results from multiple web databases. IEEE transactions on Knowledge and Data Engineering, 22(4):578-589.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Near-duplicates detection and elimination based on web provenance for effective web search", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Syed Mudhasir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Deepika", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sendhilkumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Mahalakshmi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "International Journal on Internet & Distributed Computing Systems", |
|
"volume": "", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Syed Mudhasir, Y., Deepika, J., Sendhilkumar, S., and Ma- halakshmi, G. (2011). Near-duplicates detection and elimination based on web provenance for effective web search. International Journal on Internet & Distributed Computing Systems, 1(1).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kurian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Patil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Riesa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Rudnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Vinyals", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hughes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., and Dean, J. (2016). Google's neural machine translation system: Bridging the gap between human and machine translation.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Measuring similarity of large software systems based on source code correspondence", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Yamamoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Matsushita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kamiya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Inoue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Product Focused Software Process Improvement", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "530--544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yamamoto, T., Matsushita, M., Kamiya, T., and Inoue, K. (2005). Measuring similarity of large software systems based on source code correspondence. In Frank Bomar- ius et al., editors, Product Focused Software Process Im- provement, pages 530-544, Berlin, Heidelberg. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Detecting duplicate posts in programming qa communities via latent semantics and association rules", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Sheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abebe", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 26th International Conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1221--1229", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, W. E., Sheng, Q. Z., Lau, J. H., and Abebe, E. (2017). Detecting duplicate posts in programming qa communities via latent semantics and association rules. In Proceedings of the 26th International Conference on World Wide Web, pages 1221-1229. International World Wide Web Conferences Steering Committee.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Urtasun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Torralba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Fidler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE international conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "19--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urtasun, R., Torralba, A., and Fidler, S. (2015). Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pages 19-27.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "HeatMaps showing Macro F1 scores for simhash matching using different parameter values", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"text": "/anziamj.v44i0.707 comparison of time domain ... analyse centred methodology ..... [] 93949429 10.1007/JHEP01(2018)055 search for additional heavy neutral ... neutral bosons prime bosons ..... [153387874] 29502657 10.1051/0004-6361/201425252 constraining the properties of ... abridged latest cigale fitting .....", |
|
"content": "<table><tr><td>CORE ID</td><td>DOI</td><td>Title</td><td>Abstract</td><td>CORE ID of Duplicates</td></tr><tr><td>15080768</td><td colspan=\"4\">10.0000[52711245, 52427083,</td></tr><tr><td/><td/><td/><td/><td>52659917, 52672633]</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"text": "Example entries in our ground truth dataset", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"text": "", |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"text": "Evaluation scores obtained for best performing configuration of different methods", |
|
"content": "<table><tr><td>(a) Content Unit: Words</td></tr><tr><td>(b) Content Unit: Characters</td></tr></table>", |
|
"type_str": "table", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |