|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T14:58:43.823713Z" |
|
}, |
|
"title": "Knodle: Modular Weakly Supervised Learning with PyTorch", |
|
"authors": [ |
|
{ |
|
"first": "Anastasiia", |
|
"middle": [], |
|
"last": "Sedova", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Vienna Vienna", |
|
"location": { |
|
"country": "Austria" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stephan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Vienna Vienna", |
|
"location": { |
|
"country": "Austria" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Marina", |
|
"middle": [], |
|
"last": "Speranskaya", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Ludwig Maximilian University of Munich Munich", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Vienna Vienna", |
|
"location": { |
|
"country": "Austria" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Strategies for improving the training and prediction quality of weakly supervised machine learning models vary in how much they are tailored to a specific task or integrated with a specific model architecture. In this work, we introduce Knodle, a software framework that treats weak data annotations, deep learning models, and methods for improving weakly supervised training as separate, modular components. This modularization gives the training process access to fine-grained information such as data set characteristics, matches of heuristic rules, or elements of the deep learning model ultimately used for prediction. Hence, our framework can encompass a wide range of training methods for improving weak supervision, ranging from methods that only look at correlations of rules and output classes (independently of the machine learning model trained with the resulting labels), to those that harness the interplay of neural networks and weakly labeled data. We illustrate the benchmarking potential of the framework with a performance comparison of several reference implementations on a selection of datasets that are already available in Knodle.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Strategies for improving the training and prediction quality of weakly supervised machine learning models vary in how much they are tailored to a specific task or integrated with a specific model architecture. In this work, we introduce Knodle, a software framework that treats weak data annotations, deep learning models, and methods for improving weakly supervised training as separate, modular components. This modularization gives the training process access to fine-grained information such as data set characteristics, matches of heuristic rules, or elements of the deep learning model ultimately used for prediction. Hence, our framework can encompass a wide range of training methods for improving weak supervision, ranging from methods that only look at correlations of rules and output classes (independently of the machine learning model trained with the resulting labels), to those that harness the interplay of neural networks and weakly labeled data. We illustrate the benchmarking potential of the framework with a performance comparison of several reference implementations on a selection of datasets that are already available in Knodle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Most of today's machine learning success stories are built on top of huge labeled data sets. Creating and maintaining such data sources manually is a time-consuming, complicated and thus an expensive and error-prone process. Various research directions address the hunger for bigger and better datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One of the most popular approaches that has recently gained traction is weak supervision. The learning algorithm is confronted with labels which are easy to obtain but are not guaranteed to be correct, and as such often demand denoising. Such weak labels are created, for example, with the use of regular expressions, keyword lists or external databases. Typically, methods for improving weakly supervised learning (and their respective implementations) are tailored towards domain-specific tasks or integrated with a specific model architecture. Examples include the attention-over-instances architecture introduced for relation extraction (Lin et al., 2016) , an EM-based algorithm used for event extraction (Keith et al., 2017) or models of systematic label flips for named entity recognition (Hedderich et al., 2021) . Such diversity and specificity of approaches makes it difficult to compare or transfer them across tasks without extensive adjustments dictated by the implementation, the task or the data set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 641, |
|
"end": 659, |
|
"text": "(Lin et al., 2016)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 730, |
|
"text": "(Keith et al., 2017)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 796, |
|
"end": 820, |
|
"text": "(Hedderich et al., 2021)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We introduce Knodle: a framework for Knowledgesupervised Deep Learning, i.e weak supervision with neural networks. The framework provides a simple tensor-driven abstraction based on PyTorch allowing researchers to efficiently develop methods for improving weakly supervised machine learning models and try them interchangeably to find the one that works the best for a given task. Within this work, we refer to a denoising method as any method that helps to improve weakly supervised learning regardless the type of noise or bias and the exact level of denoising (weak labels, weak rules etc).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The following points summarize Knodle's main design goals:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Data abstraction. A tensor-driven data abstraction subsumes a large number of input variants and is applicable to a diverse range of tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Method independence. A decoupled implementation of weak supervision denoising methods and prediction models enables comparability and accounts for domain-specific inductive biases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 Accessibility. A high-level interface makes it easy to test existing methods, incorporate new ones and benchmark them against each other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several denoising algorithms are already included in Knodle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We also propose a new denoising algorithm, WSCrossWeigh, which extends CrossWeigh (Wang et al., 2019) , a method for detecting mistakes in crowd-sourced annotation, to the weak supervision setting. The experiments demonstrate that it outperforms other existing methods on the majority of dataset s.", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 101, |
|
"text": "(Wang et al., 2019)", |
|
"ref_id": "BIBREF57" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "All implemented methods are tested on several datasets, also included in the Knodle ecosystem, and we discuss their performance. Each dataset exhibits different characteristics, such as the amount or the precision-recall balance of the used rules. Moreover, depending on the weakly labeled data set, methods for improving weak labels need to remove spurious matches in some cases, or generalize from them in others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It is clear that such a diverse problem space should be paired with a rich pool of methods so that the most appropriate denoising method can be found for any task or dataset. Knodle allows to easily explore the spaces of weakly supervised learning settings and label improvement algorithms, and hopefully will facilitate a better understanding of the phenomena that are inherent to weakly supervised learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The framework is published as an opensource Python package knodle and available at https://github.com/knodle/knodle.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Many strategies have been introduced to reduce the need for large amounts of manually labeled data. Among these are active learning (Sun and Grishman, 2012) , where automatically selected instances are manually annotated by experts, and semi-supervised learning (Agichtein and Gravano, 2000; Kozareva et al., 2008) , where a small annotated dataset is combined with a large unlabeled one. Fine-tuning pretrained language models such as BERT (?) shows good results if moderate to small amounts of annotations are available.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 156, |
|
"text": "(Sun and Grishman, 2012)", |
|
"ref_id": "BIBREF51" |
|
}, |
|
{ |
|
"start": 262, |
|
"end": 291, |
|
"text": "(Agichtein and Gravano, 2000;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 292, |
|
"end": 314, |
|
"text": "Kozareva et al., 2008)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In weak supervision, tedious expert work is replaced with easy to obtain, but potentially error-prone labels, that are usually derived from a set of heuristic rules. One of the most popular strategies of weakly supervised learning is distant supervision, which uses knowledge from existing data sources to annotate unlabeled data. The technique is used extensively for relation extraction (Craven and Kumlien, 1999; Mintz et al., 2009; ?; Riedel et al., 2013; Lin et al., 2016) , where various knowledge databases, such as WordNet (Snow et al., 2004) , Wikipedia (Wu and Weld, 2007) and Freebase (Mintz et al., 2009) , are used as annotation sources.", |
|
"cite_spans": [ |
|
{ |
|
"start": 389, |
|
"end": 415, |
|
"text": "(Craven and Kumlien, 1999;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 416, |
|
"end": 435, |
|
"text": "Mintz et al., 2009;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 436, |
|
"end": 438, |
|
"text": "?;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 459, |
|
"text": "Riedel et al., 2013;", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 477, |
|
"text": "Lin et al., 2016)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 550, |
|
"text": "(Snow et al., 2004)", |
|
"ref_id": "BIBREF48" |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 582, |
|
"text": "(Wu and Weld, 2007)", |
|
"ref_id": "BIBREF59" |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 616, |
|
"text": "(Mintz et al., 2009)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weak supervision", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "When using heuristic rules, it is not uncommon that one sample turns out to be annotated by multiple rules. The most straightforward approach to resolve such cases is majority voting, which is used in early weak supervision algorithms (Thomas et al., 2011) as well as in more recent experiments (Krasakis et al., 2019; Boland and Kr\u00fcger, 2019) . However, majority voting does not deal with the different types of noise introduced by weak supervision, and more noise-specific algorithms are necessary. For example, the noise produced by incomplete labels, which stems from the incompleteness of weak supervision sources and often leads to an increased amount of false negatives, is commonly reduced by data manipulations, e.g. enhancing the knowledge base (Xu et al., 2013) , a thorough construction of negative examples to balance the positive ones (Riedel et al., 2013) , or explicitly modelling missing knowledge base information with latent variables (Ritter et al., 2013) . The problem of noisy features, i.e. an increased amount of false positive labels stemming from overgeneralization, is often approached by using a relaxed distant supervision assumption (Riedel et al., 2010; Hoffmann et al., 2011) , by active learning with additional manual expertise (Sterckx et al., 2014) , with help of topic models (Yao et al., 2011; Roth and Klakow, 2013) , as well as by using a combination of multiple methods (Roth, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 256, |
|
"text": "(Thomas et al., 2011)", |
|
"ref_id": "BIBREF54" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 318, |
|
"text": "(Krasakis et al., 2019;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 319, |
|
"end": 343, |
|
"text": "Boland and Kr\u00fcger, 2019)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 772, |
|
"text": "(Xu et al., 2013)", |
|
"ref_id": "BIBREF60" |
|
}, |
|
{ |
|
"start": 849, |
|
"end": 870, |
|
"text": "(Riedel et al., 2013)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 954, |
|
"end": 975, |
|
"text": "(Ritter et al., 2013)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 1163, |
|
"end": 1184, |
|
"text": "(Riedel et al., 2010;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1185, |
|
"end": 1207, |
|
"text": "Hoffmann et al., 2011)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1262, |
|
"end": 1284, |
|
"text": "(Sterckx et al., 2014)", |
|
"ref_id": "BIBREF49" |
|
}, |
|
{ |
|
"start": 1313, |
|
"end": 1331, |
|
"text": "(Yao et al., 2011;", |
|
"ref_id": "BIBREF61" |
|
}, |
|
{ |
|
"start": 1332, |
|
"end": 1354, |
|
"text": "Roth and Klakow, 2013)", |
|
"ref_id": "BIBREF42" |
|
}, |
|
{ |
|
"start": 1411, |
|
"end": 1423, |
|
"text": "(Roth, 2014)", |
|
"ref_id": "BIBREF41" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weak supervision", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Apart from that, methods treat the identified potentially noisy samples differently. They are either kept for further training with reduced weights (Jat et al., 2018; He et al., 2020) , corrected (Shang, 2019) or eliminated (Qin et al., 2018) . Thus, denoising methods vary significantly depending on the data and task, what makes the creation of a platform for comparison crucial.", |
|
"cite_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 166, |
|
"text": "(Jat et al., 2018;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 167, |
|
"end": 183, |
|
"text": "He et al., 2020)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 196, |
|
"end": 209, |
|
"text": "(Shang, 2019)", |
|
"ref_id": "BIBREF44" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 242, |
|
"text": "(Qin et al., 2018)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weak supervision", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Structure learning assumes multiple weak labels per instance where each label is created by a so called labeling function. The goal is to learn a dependency structure within these labeling functions which motivates the term structure learning. Most labeling functions are generated by human intuitions, motivating correlation and dependence between labeling functions. The first algorithm was implemented in the software package Snorkel , which also implemented the data programming paradigm, allowing to programmatically create labeling functions. Subsequently improvements were made Varma et al., 2019) and variations, such as semi-supdervised learning (Chatterjee et al., 2019; Maheshwari et al., 2020) were introduced.", |
|
"cite_spans": [ |
|
{ |
|
"start": 585, |
|
"end": 604, |
|
"text": "Varma et al., 2019)", |
|
"ref_id": "BIBREF56" |
|
}, |
|
{ |
|
"start": 655, |
|
"end": 680, |
|
"text": "(Chatterjee et al., 2019;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 681, |
|
"end": 705, |
|
"text": "Maheshwari et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Structure Learning", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "A common idea to mitigate single noisy labels is to build an architecture which accounts for noisy data. There are different approaches that model noise-robustness by adapting the loss function (Patrini et al., 2017) . Examples include a generalization of cross-entropy and the mean absolute error (Zhang and Sabuncu, 2018) or the addition of a special noise layer to a neural network (Sukhbaatar et al., 2015) . Many approaches are based on noise assumptions, such as on the assumption of symmetric label noise (van Rooyen et al., 2015) . Another approach aims at finding and removing wrongly labeled samples from the training procedure. An example in this domain is given by the confidence learning framework CleanLab, which is based on the intuition that low-confidence predictions in cross-validation are more likely to be labeled wrongly (Northcutt et al., 2021) . Note that most of these methods were built with the assumption that there is one label corresponding to each instance, while Knodle makes use of several weak signals per instance.", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 216, |
|
"text": "(Patrini et al., 2017)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 298, |
|
"end": 323, |
|
"text": "(Zhang and Sabuncu, 2018)", |
|
"ref_id": "BIBREF63" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 410, |
|
"text": "(Sukhbaatar et al., 2015)", |
|
"ref_id": "BIBREF50" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 537, |
|
"text": "(van Rooyen et al., 2015)", |
|
"ref_id": "BIBREF40" |
|
}, |
|
{ |
|
"start": 843, |
|
"end": 867, |
|
"text": "(Northcutt et al., 2021)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Noise-aware learning", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Another solution to reduce the cost of manual data supervision by experts is crowdsourcing. In order to increase the supervision accuracy for a task, most crowdsourcing experiments rely on annotations by multiple people, and the final label is defined by majority voting (Kosinski et al., 2012) or measuring the inter-annotator agreement (Tratz and Hovy, 2010) . More sophisticated denoising strategies include anomaly detection (Eskin, 2000), annotator's reliability modelling (Dawid and Skene, 1979) , Bayesian approaches (Raykar and Yu, 2012) and generative models (Hovy et al., 2013) . Some mistakes can be identified by such methods. For example, mistakes consistently made by careful but biased people (Ipeirotis et al., 2010) , or errors introduced by spammers (Raykar and Yu, 2012).", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 294, |
|
"text": "(Kosinski et al., 2012)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 338, |
|
"end": 360, |
|
"text": "(Tratz and Hovy, 2010)", |
|
"ref_id": "BIBREF55" |
|
}, |
|
{ |
|
"start": 478, |
|
"end": 501, |
|
"text": "(Dawid and Skene, 1979)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 568, |
|
"end": 587, |
|
"text": "(Hovy et al., 2013)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 708, |
|
"end": 732, |
|
"text": "(Ipeirotis et al., 2010)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing annotations", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "As both, automatically and human labeled data, are subject to noise and structural errors, many algorithms can be used for both domains. For example, the MACE algorithm (Hovy et al., 2013) , initially proposed for improving noisy annotations from human annotators, was adapted to the setting of denoising automatically labeled data for named entity recognition (Rehbein and Ruppenhofer, 2017) . With the same motivation, we introduce WSCrossWeigh (see Section 4 for more details). We demonstrate the usefulness of the Knodle framework to transfer algorithms for improving crowd-sourced annotations to weak supervision problems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 188, |
|
"text": "(Hovy et al., 2013)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 361, |
|
"end": 392, |
|
"text": "(Rehbein and Ruppenhofer, 2017)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Crowdsourcing annotations", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Knodle is based upon the ideas of several software frameworks. On a low level, Knodle is built on top of PyTorch (Paszke et al., 2017) . As for design decisions, we followed several other high-level libraries that aim to ease the training and prediction experience. Namely, we drew inspiration from PyTorch lightning (Falcon, 2019), which in essence tries to remove the burdens of writing your own train loop, and Huggingface's Transformers library (Wolf et al., 2020) , which gives easy access to various transformer-based architectures in a fixed manner, so that they can be effortlessly interchanged in code.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 134, |
|
"text": "(Paszke et al., 2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 468, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Frameworks", |
|
"sec_num": "2.5" |
|
}, |
|
{ |
|
"text": "The Knodle architecture provides a layer of abstraction that allows integrated label improvement and model training with weakly supervised learning signals in PyTorch. On the one hand, since Knodle has access to the information which rules matched for each sample, it is not restricted to methods that denoise only weak labels, such as Cleanlab (Northcutt et al., 2021). On the other hand, the Knodle abstraction also provides access to input and learned representations, and thus does not restrict denoising methods to rely on rule match correlations alone (as Snorkel ). Moreover, access to the deep learning model enables the integration of denoising methods that use or manipulate the prediction model itself.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly supervised learning with Knodle", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To the best of our knowledge, Knodle is the first framework to provide a modular architecture for interchangable application of a wide spectrum of denoising algorithms. For that reason we believe that it can become a testbed where different algorithms for improving the weakly supervised data are implemented and compared with each other to find the most fruitful task-to-denoising-method combination or to use it as a foundation for further studies. The framework follows two main design principles, outlined below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly supervised learning with Knodle", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1. Tensor-based representations of input data and weak label matches Similar to Pytorch models, where the data (input, labels) is already expected to be in tensor format, and the specific pre-processing that led to the tensor representation of the data is outside the scope of the deep learning model implementation, we choose to exclude the process of weak label generation from Knodle. Rather, we encode the information about weak labels in two tensors. One tensor contains information about which rules matched for each data instance, while another tensor describes the relationship between rules and output classes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly supervised learning with Knodle", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Formally, assume we have n samples, r rules and k classes. Rule matches are gathered in a binary matrix Z \u2208 {0,1} n\u00d7r , where Z ij = 1 if rule j matches sample i. The initial mapping from rules to the corresponding classes is given by another binary matrix T \u2208{0,1} r\u00d7k , T jk =1 if rule j is indicative of class k.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly supervised learning with Knodle", |
|
"sec_num": "3" |
|
}, |
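
{

"text": "To make this concrete, consider a minimal sketch (toy values, not part of the Knodle distribution) that builds Z and T for two samples, three rules and two classes, and derives noisy heuristic labels via the product ZT:\n\nimport torch\n\n# Z (n x r): Z[i, j] = 1 if rule j matches sample i\nZ = torch.tensor([[1., 1., 0.],\n                  [0., 0., 1.]])\n\n# T (r x k): T[j, c] = 1 if rule j is indicative of class c\nT = torch.tensor([[1., 0.],\n                  [1., 0.],\n                  [0., 1.]])\n\n# per-class rule match counts, i.e. the noisy heuristic labels\nY_heur = Z @ T  # tensor([[2., 0.], [0., 1.]])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Weakly supervised learning with Knodle",

"sec_num": "3"

},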
|
{ |
|
"text": "This separation between one tensor that contains rule matches and another tensor that translates them to labels allows Knodle to access this fine-grained information during training for certain denoising algorithms. This is in contrast to other approaches that treat weak supervision as learning from a noisy heuristic label matrix Y heur = ZT without direct access to the individual rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weakly supervised learning with Knodle", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Knodle requires a standard PyTorch model for a given prediction task. It is defined independent of the weak supervision aspects, such as rule types or denoising method. Therefore the same PyTorch model definition can be used for direct or weakly supervised training, and the two settings can easily be compared. However, even though the prediction model is defined separately, the denoising methods may have access to it during training. For example, cross-validation schemes such as WSCrossWeigh (see Section 4) can use the PyTorch model definition for data reweighting or label correction. This is in contrast to approaches that modularize denoising and training by first adjusting label confidences by using correlations between rules only and then training a model with the adjusted labels (Takamatsu et al., 2012; . Furthermore, Knodle's design is much more flexible compared to approaches where denoising is so tightly integrated into the underlying prediction model architecture that it could not be changed (Sukhbaatar et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 794, |
|
"end": 818, |
|
"text": "(Takamatsu et al., 2012;", |
|
"ref_id": "BIBREF53" |
|
}, |
|
{ |
|
"start": 1015, |
|
"end": 1040, |
|
"text": "(Sukhbaatar et al., 2015)", |
|
"ref_id": "BIBREF50" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Separation of the prediction model from the weak supervision aspects.", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Different tasks need a different logic to handle data samples where no rule matched. These samples are traditionally called negative instances. Whether unlabeled instances should be used for training (as an additional OTHER class) depends on the task at hand and should be configurable. For example, in knowledge base population (Surdeanu, 2013) there is only a small number of relevant target relations, and it is important to confidently identify sentences that do not contain any of the target relations (requiring negative instances as examples for the OTHER class). However, in spam classification with only two classes (spam and not spam) there are rules covering both possible outcomes, and there is no need for unlabeled instances and filtering them out is reasonable. Current weak supervision frameworks provide only one of the two options: negative samples are either filtered out or included to the training dataset (Shu et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 329, |
|
"end": 345, |
|
"text": "(Surdeanu, 2013)", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 945, |
|
"text": "(Shu et al., 2020)", |
|
"ref_id": "BIBREF45" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of negative instances", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Knodle includes configurable functionality for handling such cases (allowing comparability of denoising methods across tasks with and without an OTHER class). From a technical point of view, there is a filter non labeled flag in a configuration file, which could be set to False if the negative instances should be filtered out. To make up for missing explicit annotations for negative samples, an additional other class parameter is defined. Automatically all samples without a matching rule are set to belong to \"other\" class. Hence, the exact other class id could be either provided by the user or determined automatically by Knodle. These types of configurations are well encapsulated, allowing the specific model to deal with either input. The amount of negative instances that should included in the training set can be defined specifically for each denoising algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Handling of negative instances", |
|
"sec_num": "3.1" |
|
}, |
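
{

"text": "As an illustration, a configuration along the following lines would keep the unmatched samples and map them to an OTHER class (a sketch only; the class and parameter names are rendered as in the text above and may be spelled differently in the released package):\n\nfrom knodle.trainer.config import TrainerConfig  # import path assumed\n\nconfig = TrainerConfig(\n    filter_non_labeled=False,  # keep samples with no matching rule\n    other_class=2,             # class id assigned to such samples\n)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Handling of negative instances",

"sec_num": "3.1"

},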
|
{ |
|
"text": "Similar to the most popular deep learning frameworks, such as TensorFlow (Abadi et al., 2015) and PyTorch (Paszke et al., 2017) , we realise learning as a mapping from input tensor(s) to output tensor(s) guided by a loss function that measures the quality of the learned mapping. However, while the most common solution is to represent the training data by a design matrix X \u2208R n\u00d7d (n instances represented by d feature dimensions) and a label matrix Y \u2208 R n\u00d7k (k classes), input of Knodle are matrices X, Z and T described above. The heuristic labels themselves are calculated later during the weakly supervised learning using the information contained there. To ensure a seamless use, the weakly supervised algorithms need to be tightly integrated with automatic differentiation and optimization supported by PyTorch.", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 93, |
|
"text": "(Abadi et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 127, |
|
"text": "(Paszke et al., 2017)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The denoising and training procedures are realised within Trainer classes. During initialization, they receive data, a possibly pre-initialized or pretrained model, and a method-specific configuration, inheriting from Config containing information such as model training parameters, criterion, validation method, class weights, various options to handle cases where no rule matches discussed in 3.1 and others. The level of integration between denoising and training is different for each Trainer. Sometimes these procedures can be completely disentangled. For instance, the SnorkelTrainer firstly denoises the input rules with Snorkel and, secondly, trains the classification model on the purified labels. Other methods highly integrate denoising and training with each other. An example is given by the WSCrossWeighTrainer, where several models are trained in oder to calculate sample weights as part of the denoising procedure before the final classifier is trained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "While in standard deep learning frameworks training can be executed by calling model.train(X,Y), in Knodle the same functionality would be invoked with the following command (illustrates the Trainer with k-NN search, which we describe in Section 4):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "kNNAggregationTrainer(model, X, Z, T, config).train()", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The following code snippet shows an end-to-end process, starting from data loading, training and evaluation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1 import torch 2 from knodle.trainer.knn_aggregation import \\ 3 kNNAggregationTrainer, kNNConfig 4 5 # load data in Knodle format 6 X_train, Z, T, X_test, Y_test = load_data() 7 8 # define custom config (or use default) 9 config = kNNConfig(epochs=2, k=3) 10 11 # initialize trainer 12 trainer = kNNAggregationTrainer ( ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Details", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Knodle currently provides several out-of-the-box baselines and trainers, which we outline in the following section. All Trainer classes are compatible with any PyTorch model. As examples for PyTorch classifiers, Knodle provides code using logistic regression and HuggingFace's transformers (Wolf et al., 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 290, |
|
"end": 309, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Majority Voting Baseline. As a simple baseline, the rules are directly applied to the test data without any additional model training. If several rules match, the prediction is done based on the majority; ties are broken randomly. As was already mentioned in Section 2, it is one of the most basic approaches to denoise the data labeled by two or more rules or human annotators.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Trainer without Denoising. The simplest trained model is the NoDenoisingTrainer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The majority vote is computed on the training data and used to train the given model. This is the most direct use of the rule matches for training a classifier. To cover cases where several rules match, this trainer can be configured to either use a one-hot encoding of the winning label from the majority vote or a distribution over labels (relative to the number of matching rules).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
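
{

"text": "The two label variants can be sketched as follows (toy values; note that ties are broken by argmax here, whereas the baseline breaks them randomly):\n\nimport torch\n\nZ = torch.tensor([[1., 1., 0.]])  # one sample, two of three rules match\nT = torch.tensor([[1., 0.], [1., 0.], [0., 1.]])\n\ncounts = Z @ T                                   # per-class match counts\nsoft = counts / counts.sum(dim=1, keepdim=True)  # distribution over labels\nhard = torch.nn.functional.one_hot(              # one-hot majority label\n    counts.argmax(dim=1), num_classes=T.shape[1]).float()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Trainers",

"sec_num": "4"

},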
|
{ |
|
"text": "Trainer with kNN Denoising. This kNNAggregationTrainer includes the label denoising method with a simple geometric interpretation. The intuition behind it is that similar samples should be activated by the same rules which is allowed by a smoothness assumption on the target space. The trainer looks at the k most similar samples sorted by, for example, TF-IDF features combined with L 2 distance, and activates the rules matching the neighbors to create a denoised\u1e90. Importantly, Knodle allows separate features for the model training and the neighborhood activation. This method also provides a way to activate rules for initially unmatched samples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
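
{

"text": "The core of this denoising step can be sketched as follows (a simplified illustration; the actual kNNAggregationTrainer differs in details such as feature handling and efficiency):\n\nimport torch\n\ndef knn_denoise(Z: torch.Tensor, feats: torch.Tensor, k: int) -> torch.Tensor:\n    # pairwise L2 distances between the sample representations\n    dists = torch.cdist(feats, feats)\n    # indices of each sample itself plus its k nearest neighbors\n    idx = dists.topk(k + 1, largest=False).indices\n    # activate the union of the neighbors' rule matches\n    return (Z[idx].sum(dim=1) > 0).float()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Trainers",

"sec_num": "4"

},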
|
{ |
|
"text": "Trainer with Snorkel Denoising. Knodle provides a wrapper of the Snorkel system SnorkelTrainer which incorporates both generative and discriminative Snorkel steps. The generative step constitutes a denoising method in Knodle's terminology, while the discriminative step corresponds to a prediction model. The structure within labels and rules, in our notation P (Y,Z,T ), is learned in an unsupervised fashion by the generative model. Afterwards, the final discriminative model, i.e. the prediction model, is trained with weak labels provided by the generative model, following the general Knodle design. Both steps are conveniently provided in a single method call.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
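
{

"text": "Usage mirrors the other trainers, for example (a sketch; the import path is assumed):\n\nfrom knodle.trainer.snorkel import SnorkelTrainer\n\ntrainer = SnorkelTrainer(model, X_train, Z, T, config)\ntrainer.train()  # generative denoising + discriminative training in one call",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Trainers",

"sec_num": "4"

},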
|
{ |
|
"text": "Trainer with Weak Supervision CrossWeigh Denoising. Finally, we implemented our own algorithm for noise correction in weakly supervised data. It is based on the CrossWeight method (Wang et al., 2019) and included to Knodle as WSCrossWeighTrainer. While the original CrossWeigh method was proposed for mistakes identification in crowdworkers annotations, we extend it for denoising the weakly supervised data as well. In WSCrossWeigh we adopted the same logic for estimating the reliability of weakly annotated data, but made some necessarily corrections specific to weakly supervised learning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 180, |
|
"end": 199, |
|
"text": "(Wang et al., 2019)", |
|
"ref_id": "BIBREF57" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The main intuition behind WSCrossWeigh is the following: if a labeling rule corresponds to a wrong class and, therefore, annotates many samples in the training set with a wrong label, a machine learning model is likely to learn the incorrect pattern and to make similar mistakes when labeling the test samples. However, if we take a sufficiently big portion of data with samples not labeled by this rule, train the model on it, and then classify the samples matched by the rule, the predictions will contradict the initial wrong labels, and help us to trace the misclassified samples and reduce their importance in final classifier training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As in the original CrossWeigh, the basic idea is similar to the k-fold cross-validation, where input data is split into k folds, each of which becomes, in turn, a test set, while the model is trained on the other folds. In WSCrossWeigh, however, the splitting is performed not randomly, but based on which rules match for the samples. Firstly, the rules are randomly split into K folds {r 1 ,...,r k } and, iteratively, each fold l is chosen to form a test set that is built from all samples matched this fold's rules. Other samples constitute a training set that is used for training the classification model. During the testing of the trained model on the hold-out fold samples, the predicted label\u0177 i for each test sample x i is compared to the label y i originally assigned to x i by weak supervision. If\u0177 i =y i , this is taken as an indication that the sample x i is likely to be potentially mislabeled, and its weights w x i is reduced by a value of an empirically estimated parameter . This procedure is repeated several times with different splits to detect misclassified samples more accurately.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The final classifier is trained on the whole reweighed training dataset. As a result, the more times the original y i label of data sample x i was suspected to be wrong, the smaller is its weight w x i , and, therefore, the smaller part it will play in the classifier training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
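
{

"text": "A compact sketch of this reweighting loop is given below (simplified; it assumes a scikit-learn-style model factory model_fn and, per sample, the set of matching rule ids in sample_rules, whereas the released WSCrossWeighTrainer works on the Z and T tensors with PyTorch models):\n\nimport numpy as np\n\ndef wscrossweigh_weights(X, y_weak, sample_rules, n_rules, model_fn,\n                         folds=3, partitions=10, eps=0.3):\n    weights = np.ones(len(X))\n    for _ in range(partitions):\n        # split the rules (not the samples) into random folds\n        for fold in np.array_split(np.random.permutation(n_rules), folds):\n            fold_rules = set(fold)\n            test = [i for i in range(len(X)) if sample_rules[i] & fold_rules]\n            train = [i for i in range(len(X)) if not sample_rules[i] & fold_rules]\n            model = model_fn()\n            model.fit([X[i] for i in train], [y_weak[i] for i in train])\n            preds = model.predict([X[i] for i in test])\n            for i, pred in zip(test, preds):\n                if pred != y_weak[i]:\n                    weights[i] *= eps  # down-weigh suspected mislabels\n    return weights",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Trainers",

"sec_num": "4"

},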
|
{ |
|
"text": "Along with other denoising algorithms, WSCrossWeigh was tested on the datasets described in Section 5 and showed quite promising results: it outperforms all other algorithms on three out of four datasets (for more details please see Section 6).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trainers", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Apart from denoising methods, Knodle includes a few datasets from previous works in the Knodle-specific tensor format in order to demonstrate the abilities of the framework. their own peculiarities with respect to the respective Z and T matrices, that are worth investigating. The overview of dataset statistics is provided in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 327, |
|
"end": 334, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Spam Dataset. The first task uses the YouTube comments dataset (Alberto et al., 2015). Here, the task is to classify whether a text is relevant to the video or holds spam, such as advertisement. The dataset has a small size of both train and test sets. Thus, a single wrongly labeled instance might have quite a big impact on the learning algorithm. We use the preprocessed version by the Snorkel team (Snorkel, 2020b) . Among others, the rules were created based on keywords and regular expressions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 418, |
|
"text": "(Snorkel, 2020b)", |
|
"ref_id": "BIBREF47" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Spouse Dataset. This relation extraction dataset is based on the Signal Media One-Million News Articles Dataset (Corney et al., 2016) . The task is to decide whether a sentence holds a spouse relation or not. Again, the preprocessed version by the Snorkel team is used (Snorkel, 2020a) , so the results can be related to previous studies . The rules are created via a set of known spouse relationships from DBPedia (Lehmann et al., 2014) as well as keywords and encoded language patterns. The difficulty of the Spouse dataset is its skewness: over 90% of samples in the test set hold a no-spouse relation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 133, |
|
"text": "(Corney et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 285, |
|
"text": "(Snorkel, 2020a)", |
|
"ref_id": "BIBREF46" |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 437, |
|
"text": "(Lehmann et al., 2014)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "IMDb Dataset. The third dataset is based on the well-known IMDb dataset (?), which consists of short movie reviews. The task is to determine whether a review holds a positive or negative sentiment. Despite the training set has labels, we do not use them in our experiments, but handle this data in an unsupervised fashion. To create the Z and T matrices, we use positive and negative keyword lists (Hu and Liu, 2004) , with a total of 6800 keywords.", |
|
"cite_spans": [ |
|
{ |
|
"start": 398, |
|
"end": 416, |
|
"text": "(Hu and Liu, 2004)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "TAC-based Relation Extraction Dataset. Lastly, given the importance of distant supervision for relation extraction, we add a larger dataset with more relations (than just spouse). For development and test purposes the TACRED corpus annotated via crowdsourcing and human labeling from KBP (Zhang et al., 2017) is used. As human labels are not allowed in weak training, the training is performed not on the TACRED dataset, but on a weakly-supervised noisy corpus built on TAC KBP corpora (Surdeanu, 2013; Roth, 2014) , which was annotated with entity pairs extracted from Freebase (Google, 2014) with corresponding relations mapped to the 41 TAC relations. The amount of entity pairs per relation is limited to 10.000 and each entity pair is allowed to be mentioned in no more than 500 sentences. An important difference of this dataset to the other three is the presence of negative instances added to the dataset in equal proportion to the positive ones.", |
|
"cite_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 308, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF62" |
|
}, |
|
{ |
|
"start": 486, |
|
"end": 502, |
|
"text": "(Surdeanu, 2013;", |
|
"ref_id": "BIBREF52" |
|
}, |
|
{ |
|
"start": 503, |
|
"end": 514, |
|
"text": "Roth, 2014)", |
|
"ref_id": "BIBREF41" |
|
}, |
|
{ |
|
"start": 579, |
|
"end": 593, |
|
"text": "(Google, 2014)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The aim of Knodle is not to find the best denoising method in general. Rather, the goal is to find the method that improves weak labels most for a given task or dataset and its specific properties. Thus, Knodle supports experimentation to get a better understanding in which settings a certain method works well and when it does not.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In all experiments, the DistilBert uncased model for English language (Sanh et al., 2019) provided by the HuggingFace 1 (Wolf et al., 2020) library is used as the prediction model. The optimization is performed with the AdamW optimizer (Loshchilov and Hutter, 2019 ) and a learning rate of 1e\u22124. We employ a cross-entropy loss accepting a probability distribution over all labels as reference input whenever the output of a denoising algorithm is a distribution over weak labels (e.g. kNNAggregationTrainer, SnorkelTrainer) . Reducing this representation to a single label (i.e. log-likelihood) would lead to a loss of weak signals, whereas a label distribution allows to exploit the information from Z and T to the fullest. Each model was trained for 2 epochs (unless stated otherwise), which was enough to receive a stable result. For the k-NN algorithm, nearest neighbors were found using the cosine similarity of TF-IDF features based on a dictionary of 3000 words, and the number of k neighbors is treated as a hyper-parameter. In our experiments, we used k = 2 except where otherwise noted. Hyperparameters for the WSCrossWeigh denoising algorithm are the number of folds the data is be split into, the number of partitions (that is, how many times the splitting for mistake estimation is done) and a weight-reducing rate (the value, by which the initial sample weights are reduced to each time the sample is predicted wrongly). These parameters are tuned for each dataset individually. The following best parameter values were found empirically: (folds=3, partitions=10 and =0.3) for the Spam dataset, (3, 2 and 0.3) for the Spouse dataset and (2, 25, 0.7) for the IMDb dataset. Apart from that, Knodle provides the opportunity to train the cross-validated sample weights with a model different from the final classifier. In our experiments, the weights were calculated using a Bidirectional LSTM with GloVe Embeddings (Pennington et al., 2014) , while the final training was performed with DistilBert using the same settings as in the experiments with other denoising methods. The only difference is the number of epochs on the TAC-based dataset: the best results were obtained with 1 DistilBert epoch.", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 89, |
|
"text": "(Sanh et al., 2019)", |
|
"ref_id": "BIBREF43" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 139, |
|
"text": "(Wolf et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 264, |
|
"text": "(Loshchilov and Hutter, 2019", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 479, |
|
"end": 523, |
|
"text": "(e.g. kNNAggregationTrainer, SnorkelTrainer)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1926, |
|
"end": 1951, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Details", |
|
"sec_num": "6.1" |
|
}, |
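
{

"text": "This loss can be sketched as follows (a standard PyTorch formulation, not Knodle-specific code):\n\nimport torch.nn.functional as F\n\ndef soft_cross_entropy(logits, target_dist):\n    # cross-entropy against a probability distribution over classes\n    log_probs = F.log_softmax(logits, dim=-1)\n    return -(target_dist * log_probs).sum(dim=-1).mean()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental Details",

"sec_num": "6.1"

},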
|
{ |
|
"text": "An overview of the results is given in Table 2. In the Spam dataset, all denoising methods show an improvement over the simple majority vote baseline. The dataadaptive k-NN and WSCrossWeigh methods perform best in this setting. Snorkel and standard majority voting followed by DistilBert fine-tuning overfit to the noisy majority votes. This becomes obvious with the observation that Snorkel achieves a score of 0.93 with a simple logistic regression discriminative model. Interestingly, k-NN performs well which can serve as a proof for the reliability of neighboring labels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Compared to the Spam dataset, the Spouse dataset is much larger. As the task is to find sentences holding spouse relations, we relate all metrics to the is-spouse relation. Note that the non-spouse relation remains in this case completely disregarded. Furthermore, the class ratio equals 0.08 shows that is-spouse is the complicated class of interest. On average, 0.34 rules hit per instance, meaning that almost 70% of the data match no rule. In these cases, majority vote uses a random vote which oversamples the is-spouse relation, rendering a high recall but low precision. We found that the rule matches overrepresent the is-spouse class as they are closer to a class ratio of 0.5 than to the true class ratio of 0.9. Thus, the additional model training magnifies overfitting towards the is-spouse class which, again, is expressed by increased recall and lower precision. The only denoising system that generalizes is Snorkel. One possible explanation could be that it is the only method that provides explicit rule denoising.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "For IMDb, the majority vote shows that the rules have rather low quality on their own, but an additional trained model on top manages to generalize beyond the given labels. In contrast, denoising with the k-NN algorithm only aggravates the problems inherent to labels as the classifier's performance drops down to a random vote (50% accuracy). This behaviour can be explained by the high density of rule hits: on average, no less than 33 keywords match for each sentence, which means that already for k = 1 many neighbors are added and that the propagation of imprecise labelings overrules the expected benefits of k-NN. In general, there are cues that k-NN might useful in cases where the weak labels are already rather reliable but fail in cases where weak labels are too noisy. The Snorkel based denoising does not perform well on IMDb dataset as well, which can be explained by the lack of dependencies between the rules that the Snorkel system relies on. However, WSCrossWeigh appears to be very robust to these data characteristics, the large amount of rules seems to help tease out and mutually reinforce the data characteristics associated with a specific label in cross-validation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The distantly supervised TAC-based RE dataset turns out to be the most complicated dataset among all because of a larger size of samples n and a larger number of rules r. Due to its specificity, there are almost no rule matches (entity pairs from the seed KB) on the test set, implying that the simple majority baseline has scores close to 0. Training with DistilBert improves the result, however the performance remains considerably worse than for the data sets discussed above. On the contrary the WSCrossWeigh method that not directly denoise the rules, but downweigh the mislabeled data samples is still able to improve the results. Snorkel denoising could not be performed on this dataset on a machine with CPU frequency of 2.2GHz with 40 cores due to the immense amount of rules without the data manipulations we want to avoid (such as significantly reducing the number of rules). The computation of distances between almost 2 millions instances, which are necessary to determine the nearest neighbors, also turned out to be extremely memory-and time-consuming, explaining why k-NN algorithm was also not performed. Instead, we work around this by applying an approximated k-NN algorithm. In our experiments we used the Annoy library (Bernhardsson, 2015) and k = 3 parameter. The poor performance of approximated k-NN could be explained by a small average of rule hits in the TAC-based RE data set; the possible approximation losses are also not to be neglected. In contrast, the WSCrossWeigh method performs quite well. Our explanation is that WSCrossWeigh does not directly denoise the rules, but down-weighs samples it is less confident about. This makes this approach more robust in cases where the rules are very noisy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1240, |
|
"end": 1260, |
|
"text": "(Bernhardsson, 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "This work introduces the Knowledge-supervised Deep Learning framework Knodle. Knodle provides a unified interface to work with multiple weak labeling sources, so that they can be seamlessly integrated with the training of deep neural networks. This is achieved by a tensor-based input format and a intuitive separation of weak supervision aspects and model training. The framework facilitates experimentation that helps researchers to gain better insights into the correspondence between characteristics of weak supervision problems, and the effectiveness of methods for improving weakly supervised learning. From a practical perspective, Knodle can be used to compare different denoising methods and select the one that gives the best result for a specific task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Knodle's modular approach makes it easy to add new data sets and denoising algorithms. Adding functionality to Knodle is straightforward, and we do hope that it will encourage researchers to create their own algorithms to improve learning with weakly annotated data, and incorporate them into the Knodle framework.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://huggingface.co/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.diffbot.com/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This research was funded by the WWTF through the project \"Knowledge-infused Deep Learning for Natural Language Processing\" (WWTF Vienna Research Group VRG19-008), by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -RO 5127/2-1, and supported by a gift from Diffbot 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "TensorFlow: Large-scale machine learning on heterogeneous systems", |
|
"authors": [ |
|
{ |
|
"first": "Mart\u00edn", |
|
"middle": [], |
|
"last": "Abadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Agarwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Barham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Brevdo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhifeng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Craig", |
|
"middle": [], |
|
"last": "Citro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [], |
|
"last": "Davis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthieu", |
|
"middle": [], |
|
"last": "Devin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanjay", |
|
"middle": [], |
|
"last": "Ghemawat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Goodfellow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Harp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Irving", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Isard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yangqing", |
|
"middle": [], |
|
"last": "Jia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rafal", |
|
"middle": [], |
|
"last": "Jozefowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lukasz", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manjunath", |
|
"middle": [], |
|
"last": "Kudlur", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Josh Levenberg, Dandelion Man\u00e9", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fer- nanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Snowball: Extracting relations from large plain-text collections", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Agichtein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luis", |
|
"middle": [], |
|
"last": "Gravano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the Fifth ACM Conference on Digital Libraries, DL '00", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "85--94", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/336597.336644" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM Conference on Digital Libraries, DL '00, page 85-94, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "TubeSpam: Comment Spam Filtering on YouTube", |
|
"authors": [ |
|
{

"first": "T C",

"middle": [],

"last": "Alberto",

"suffix": ""

},

{

"first": "J V",

"middle": [],

"last": "Lochter",

"suffix": ""

},

{

"first": "T A",

"middle": [],

"last": "Almeida",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "138--143", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/ICMLA.2015.37" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "T C Alberto, J V Lochter, and T A Almeida. 2015. TubeSpam: Comment Spam Filtering on YouTube. In 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), pages 138-143.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Learning the Structure of Generative Models without Labeled Data", |
|
"authors": [ |
|
{

"first": "Stephen",

"middle": [

"H"

],

"last": "Bach",

"suffix": ""

},

{

"first": "Bryan",

"middle": [

"Dawei"

],

"last": "He",

"suffix": ""

},

{

"first": "Alexander",

"middle": [],

"last": "Ratner",

"suffix": ""

},

{

"first": "Christopher",

"middle": [],

"last": "R\u00e9",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen H Bach, Bryan Dawei He, Alexander Ratner, and Christopher R\u00e9. 2017. Learning the Structure of Gener- ative Models without Labeled Data. CoRR, abs/1703.0.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Annoy on github", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Bernhardsson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Bernhardsson. 2015. Annoy on github. Last accessed 23 April 2021.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Distant supervision for silver label generation of software mentions in social scientific publications", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Boland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Kr\u00fcger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "BIRNDL@SIGIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K. Boland and F. Kr\u00fcger. 2019. Distant supervision for silver label generation of software mentions in social scientific publications. In BIRNDL@SIGIR.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Data Programming using Continuous and Quality-Guided Labeling Functions", |
|
"authors": [ |
|
{ |
|
"first": "Oishik", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ganesh", |
|
"middle": [], |
|
"last": "Ramakrishnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sunita", |
|
"middle": [], |
|
"last": "Sarawagi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oishik Chatterjee, Ganesh Ramakrishnan, and Sunita Sarawagi. 2019. Data Programming using Continuous and Quality-Guided Labeling Functions. CoRR, abs/1911.0.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "What do a million news articles look like?", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Corney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Albakour", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miguel", |
|
"middle": [], |
|
"last": "Martinez-Alvarez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samir", |
|
"middle": [], |
|
"last": "Moussa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "NewsIR@ECIR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Corney, M. Albakour, Miguel Martinez-Alvarez, and Samir Moussa. 2016. What do a million news articles look like? In NewsIR@ECIR.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Constructing biological knowledge bases by extracting information from text sources", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Craven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johan", |
|
"middle": [], |
|
"last": "Kumlien", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology, page 77-86. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Maximum likelihood estimation of observer error-rates using the em algorithm", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Dawid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Skene", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1979, |
|
"venue": "Journal of the Royal Statistical Society. Series C (Applied Statistics)", |
|
"volume": "28", |
|
"issue": "1", |
|
"pages": "20--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. P. Dawid and A. M. Skene. 1979. Maximum likeli- hood estimation of observer error-rates using the em algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):20-28.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Detecting errors within a corpus using anomaly detection", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Eleazar Eskin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "1st Meeting of the North American Chapter", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eleazar Eskin. 2000. Detecting errors within a corpus using anomaly detection. In 1st Meeting of the North American Chapter of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Pytorch lightning. GitHub", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wa Falcon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "WA Falcon. 2019. Pytorch lightning. GitHub. Note: https://github.com/PyTorchLightning/pytorch-lightning Cited by, 3.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Freebase data dumps", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Google", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Google. 2014. Freebase data dumps. https: //developers.google.com/freebase/data.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Improving neural relation extraction with positive and unlabeled learning", |
|
"authors": [ |
|
{ |
|
"first": "Zhengqiu", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenliang", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuyi", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guanchun", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "7927--7934", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1609/aaai.v34i05.6300" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengqiu He, Wenliang Chen, Yuyi Wang, Wei Zhang, Guanchun Wang, and Min Zhang. 2020. Improving neural relation extraction with positive and unlabeled learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34:7927-7934.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Analysing the noise model error for realistic noisy label data", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Hedderich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dawei", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael A. Hedderich, Dawei Zhu, and Dietrich Klakow. 2021. Analysing the noise model error for realistic noisy label data.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Knowledgebased weak supervision for information extraction of overlapping relations", |
|
"authors": [ |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Congle", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "541--550", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541-550, Portland, Oregon, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Learning whom to trust with MACE", |
|
"authors": [ |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg-Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1120--1130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130, Atlanta, Georgia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Mining and Summarizing Customer Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Minqing", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "168--177", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1014052.1014073" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and Sum- marizing Customer Reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, pages 168-177, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Quality management on amazon mechanical turk", |
|
"authors": [ |
|
{ |
|
"first": "Panos", |
|
"middle": [], |
|
"last": "Ipeirotis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Foster", |
|
"middle": [], |
|
"last": "Provost", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jing", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the ACM SIGKDD Workshop on Human Computation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1837885.1837906" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Panos Ipeirotis, Foster Provost, and Jing Wang. 2010. Quality management on amazon mechanical turk. In:Proceedings of the ACM SIGKDD Workshop on Human Computation.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Improving distantly supervised relation extraction using word and entity based attention", |
|
"authors": [ |
|
{ |
|
"first": "Sharmistha", |
|
"middle": [], |
|
"last": "Jat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siddhesh", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Partha", |
|
"middle": [], |
|
"last": "Talukdar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharmistha Jat, Siddhesh Khandelwal, and Partha Taluk- dar. 2018. Improving distantly supervised relation extraction using word and entity based attention.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Identifying civilians killed by police with distantly supervised entity-event extraction", |
|
"authors": [ |
|
{ |
|
"first": "Katherine", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Keith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abram", |
|
"middle": [], |
|
"last": "Handler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Pinkham", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cara", |
|
"middle": [], |
|
"last": "Magliozzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Mcduffie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brendan O'", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katherine A. Keith, Abram Handler, Michael Pinkham, Cara Magliozzi, Joshua McDuffie, and Brendan O'Connor. 2017. Identifying civilians killed by police with distantly supervised entity-event extraction. CoRR, abs/1707.07086.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Crowd iq: Measuring the intelligence of crowdsourcing platforms", |
|
"authors": [ |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Kosinski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Bachrach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gjergji", |
|
"middle": [], |
|
"last": "Kasneci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jurgen", |
|
"middle": [], |
|
"last": "Van-Gael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thore", |
|
"middle": [], |
|
"last": "Graepel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 3rd Annual ACM Web Science Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2380718.2380739" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michal Kosinski, Yoram Bachrach, Gjergji Kasneci, Jurgen Van-Gael, and Thore Graepel. 2012. Crowd iq: Measuring the intelligence of crowdsourcing platforms. Proceedings of the 3rd Annual ACM Web Science Conference, WebSci'12.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Semantic class learning from the web with hyponym pattern linkage graphs", |
|
"authors": [ |
|
{ |
|
"first": "Zornitsa", |
|
"middle": [], |
|
"last": "Kozareva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1048--1056", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zornitsa Kozareva, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the web with hyponym pattern linkage graphs. In Proceedings of ACL-08: HLT, pages 1048-1056, Columbus, Ohio. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Semi-supervised ensemble learning with weak supervision for biomedical relationship extraction", |
|
"authors": [ |
|
{ |
|
"first": "Antonios", |
|
"middle": [], |
|
"last": "Minas Krasakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Kanoulas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Tsatsaronis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "AKBC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antonios Minas Krasakis, E. Kanoulas, and G. Tsat- saronis. 2019. Semi-supervised ensemble learning with weak supervision for biomedical relationship extraction. In AKBC.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Dbpedia -a large-scale, multilingual knowledge base extracted from wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Isele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Jakob", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anja", |
|
"middle": [], |
|
"last": "Jentzsch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dimitris", |
|
"middle": [], |
|
"last": "Kontokostas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pablo", |
|
"middle": [], |
|
"last": "Mendes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Hellmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Morsey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Van Kleef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S\u00f6ren", |
|
"middle": [], |
|
"last": "Auer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Bizer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Semantic Web Journal", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3233/SW-140134" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, S\u00f6ren Auer, and Christian Bizer. 2014. Dbpedia -a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web Journal, 6.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Neural relation extraction with selective attention over instances", |
|
"authors": [ |
|
{ |
|
"first": "Yankai", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiqi", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Huanbo", |
|
"middle": [], |
|
"last": "Luan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2124--2133", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1200" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2124-2133, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Decoupled Weight Decay Regularization", |
|
"authors": [ |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Loshchilov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Hutter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Rishabh Iyer, and Ganesh Ramakrishnan. 2020. Data Programming using Semi-Supervision and Subset Selection", |
|
"authors": [ |
|
{ |
|
"first": "Ayush", |
|
"middle": [], |
|
"last": "Maheshwari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oishik", |
|
"middle": [], |
|
"last": "Chatterjee", |
|
"suffix": "" |
|
}, |
|
{

"first": "KrishnaTeja",

"middle": [],

"last": "Killamsetty",

"suffix": ""

},

{

"first": "Rishabh",

"middle": [],

"last": "Iyer",

"suffix": ""

},

{

"first": "Ganesh",

"middle": [],

"last": "Ramakrishnan",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ayush Maheshwari, Oishik Chatterjee, KrishnaTeja Killamsetty, Rishabh Iyer, and Ganesh Ramakrishnan. 2020. Data Programming using Semi-Supervision and Subset Selection.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Distant supervision for relation extraction without labeled data", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Mintz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bills", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1003--1011", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. 2009. Distant supervision for relation extraction with- out labeled data. In Proceedings of the Joint Confer- ence of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Suntec, Singapore. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Confident Learning: Estimating Uncertainty in Dataset Labels", |
|
"authors": [ |
|
{

"first": "Curtis",

"middle": [

"G"

],

"last": "Northcutt",

"suffix": ""

},

{

"first": "Lu",

"middle": [],

"last": "Jiang",

"suffix": ""

},

{

"first": "Isaac",

"middle": [

"L"

],

"last": "Chuang",

"suffix": ""

}
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Curtis G Northcutt, Lu Jiang, and Isaac L Chuang. 2021. Confident Learning: Estimating Uncertainty in Dataset Labels.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Automatic differentiation in pytorch. NIPS Workshop", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumith", |
|
"middle": [], |
|
"last": "Chintala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. NIPS Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach", |
|
"authors": [ |
|
{ |
|
"first": "Giorgio", |
|
"middle": [], |
|
"last": "Patrini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Rozza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Menon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Nock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lizhen", |
|
"middle": [], |
|
"last": "Qu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. 2017. Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Robust distant supervision relation extraction via deep reinforcement learning", |
|
"authors": [ |
|
{ |
|
"first": "Pengda", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weiran", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2137--2147", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1199" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pengda Qin, Weiran Xu, and William Yang Wang. 2018. Robust distant supervision relation extraction via deep reinforcement learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2137-2147, Melbourne, Australia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Snorkel: Rapid Training Data Creation with Weak Supervision", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Ratner", |
|
"suffix": "" |
|
}, |
|
{

"first": "Stephen",

"middle": [

"H"

],

"last": "Bach",

"suffix": ""

},

{

"first": "Henry",

"middle": [

"R"

],

"last": "Ehrenberg",

"suffix": ""

},

{

"first": "Jason",

"middle": [

"Alan"

],

"last": "Fries",

"suffix": ""

},

{

"first": "Sen",

"middle": [],

"last": "Wu",

"suffix": ""

},

{

"first": "Christopher",

"middle": [],

"last": "R\u00e9",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Ratner, Stephen H Bach, Henry R Ehrenberg, Jason Alan Fries, Sen Wu, and Christopher R\u00e9. 2017. Snorkel: Rapid Training Data Creation with Weak Supervision. CoRR, abs/1711.1.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Eliminating spammers and ranking annotators for crowdsourced labeling tasks", |
|
"authors": [ |
|
{

"first": "Vikas",

"middle": [

"C"

],

"last": "Raykar",

"suffix": ""

},

{

"first": "Shipeng",

"middle": [],

"last": "Yu",

"suffix": ""

}
|
], |
|
"year": 2012, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "13", |
|
"issue": "16", |
|
"pages": "491--518", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vikas C. Raykar and Shipeng Yu. 2012. Eliminating spammers and ranking annotators for crowdsourced labeling tasks. Journal of Machine Learning Research, 13(16):491-518.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Detecting annotation noise in automatically labelled data", |
|
"authors": [ |
|
{ |
|
"first": "Ines", |
|
"middle": [], |
|
"last": "Rehbein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Ruppenhofer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1160--1170", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1107" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ines Rehbein and Josef Ruppenhofer. 2017. Detecting annotation noise in automatically labelled data. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1160-1170, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Modeling relations and their mentions without labeled text", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases: Part III, ECML PKDD'10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "148--163", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Proceedings of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases: Part III, ECML PKDD'10, page 148-163, Berlin, Heidelberg. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "Relation extraction with matrix factorization and universal schemas", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Marlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 74- 84, Atlanta, Georgia. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Modeling missing data in distant supervision for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mausam", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "367--378", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00234" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Ritter, Luke Zettlemoyer, Mausam, and Oren Etzioni. 2013. Modeling missing data in distant supervision for information extraction. Transactions of the Association for Computational Linguistics, 1:367-378.", |
|
"links": null |
|
}, |
|
"BIBREF40": { |
|
"ref_id": "b40", |
|
"title": "Learning with Symmetric Label Noise: The Importance of Being Unhinged", |
|
"authors": [ |
|
{

"first": "Brendan",

"middle": [],

"last": "van Rooyen",

"suffix": ""

},

{

"first": "Aditya",

"middle": [

"Krishna"

],

"last": "Menon",

"suffix": ""

},

{

"first": "Robert",

"middle": [

"C"

],

"last": "Williamson",

"suffix": ""

}
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brendan van Rooyen, Aditya Krishna Menon, and Robert C Williamson. 2015. Learning with Symmetric Label Noise: The Importance of Being Unhinged.", |
|
"links": null |
|
}, |
|
"BIBREF41": { |
|
"ref_id": "b41", |
|
"title": "Effective distant supervision for end-to-end knowledge base population systems", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.22028/D291-26592" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Roth. 2014. Effective distant supervision for end-to-end knowledge base population systems. Ph.D. thesis, Saarland University.", |
|
"links": null |
|
}, |
|
"BIBREF42": { |
|
"ref_id": "b42", |
|
"title": "Feature-based models for improving the quality of noisy training data for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dietrich", |
|
"middle": [], |
|
"last": "Klakow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, CIKM '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1181--1184", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2505515.2507850" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Roth and Dietrich Klakow. 2013. Feature-based models for improving the quality of noisy training data for relation extraction. In Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, CIKM '13, page 1181-1184, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF43": { |
|
"ref_id": "b43", |
|
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.", |
|
"links": null |
|
}, |
|
"BIBREF44": { |
|
"ref_id": "b44", |
|
"title": "Are noisy sentences useless for distant supervised relation extraction?", |
|
"authors": [ |
|
{ |
|
"first": "Yuming", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuming Shang. 2019. Are noisy sentences useless for distant supervised relation extraction?", |
|
"links": null |
|
}, |
|
"BIBREF45": { |
|
"ref_id": "b45", |
|
"title": "Learning with weak supervision for email intent detection", |
|
"authors": [ |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Shu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Subhabrata", |
|
"middle": [], |
|
"last": "Mukherjee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guoqing", |
|
"middle": [], |
|
"last": "Zheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Hassan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milad", |
|
"middle": [], |
|
"last": "Shokouhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Susan", |
|
"middle": [], |
|
"last": "Dumais", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1051--1060", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3397271.3401121" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Shu, Subhabrata Mukherjee, Guoqing Zheng, Ahmed Hassan, Milad Shokouhi, and Susan Dumais. 2020. Learning with weak supervision for email intent detection. pages 1051-1060.", |
|
"links": null |
|
}, |
|
"BIBREF46": { |
|
"ref_id": "b46", |
|
"title": "Detecting spouse mentions in sentences. Last accessed 25", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Snorkel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Snorkel. 2020a. Detecting spouse mentions in sentences. Last accessed 25 February 2021.", |
|
"links": null |
|
}, |
|
"BIBREF47": { |
|
"ref_id": "b47", |
|
"title": "Snorkel intro tutorial: Data labeling. Last accessed 25", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Snorkel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2021, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Snorkel. 2020b. Snorkel intro tutorial: Data labeling. Last accessed 25 February 2021.", |
|
"links": null |
|
}, |
|
"BIBREF48": { |
|
"ref_id": "b48", |
|
"title": "Learning syntactic patterns for automatic hypernym discovery", |
|
"authors": [ |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "NIPS", |
|
"volume": "17", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rion Snow, Daniel Jurafsky, and Andrew Ng. 2004. Learning syntactic patterns for automatic hypernym discovery. In NIPS, volume 17.", |
|
"links": null |
|
}, |
|
"BIBREF49": { |
|
"ref_id": "b49", |
|
"title": "Using active learning and semantic clustering for noise reduction in distant supervision", |
|
"authors": [ |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "Sterckx", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Demeester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes", |
|
"middle": [], |
|
"last": "Deleu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Develder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lucas Sterckx, Thomas Demeester, Johannes Deleu, and Chris Develder. 2014. Using active learning and semantic clustering for noise reduction in distant supervision. In NIPS 2014.", |
|
"links": null |
|
}, |
|
"BIBREF50": { |
|
"ref_id": "b50", |
|
"title": "Training Convolutional Networks with Noisy Labels", |
|
"authors": [ |
|
{ |
|
"first": "Sainbayar", |
|
"middle": [], |
|
"last": "Sukhbaatar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joan", |
|
"middle": [], |
|
"last": "Bruna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manohar", |
|
"middle": [], |
|
"last": "Paluri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lubomir", |
|
"middle": [], |
|
"last": "Bourdev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rob", |
|
"middle": [], |
|
"last": "Fergus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. 2015. Training Convolutional Networks with Noisy Labels.", |
|
"links": null |
|
}, |
|
"BIBREF51": { |
|
"ref_id": "b51", |
|
"title": "Active learning for relation type extension with local and global data views", |
|
"authors": [ |
|
{ |
|
"first": "Ang", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1105--1112", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2396761.2398409" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ang Sun and Ralph Grishman. 2012. Active learning for relation type extension with local and global data views. In Proceedings of the 21st ACM International Conference on Information and Knowledge Manage- ment, CIKM '12, page 1105-1112, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF52": { |
|
"ref_id": "b52", |
|
"title": "Overview of the tac2013 knowledge base population evaluation: English slot filling and temporal slot filling. Theory and Applications of Categories", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Surdeanu. 2013. Overview of the tac2013 knowledge base population evaluation: English slot filling and temporal slot filling. Theory and Applications of Categories.", |
|
"links": null |
|
}, |
|
"BIBREF53": { |
|
"ref_id": "b53", |
|
"title": "Reducing wrong labels in distant supervision for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Shingo", |
|
"middle": [], |
|
"last": "Takamatsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Issei", |
|
"middle": [], |
|
"last": "Sato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hiroshi", |
|
"middle": [], |
|
"last": "Nakagawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "721--729", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shingo Takamatsu, Issei Sato, and Hiroshi Nakagawa. 2012. Reducing wrong labels in distant supervision for relation extraction. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 721-729, Jeju Island, Korea. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF54": { |
|
"ref_id": "b54", |
|
"title": "Learning protein-protein interaction extraction using distant supervision", |
|
"authors": [ |
|
{ |
|
"first": "Philippe", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ill\u00e9s", |
|
"middle": [], |
|
"last": "Solt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roman", |
|
"middle": [], |
|
"last": "Klinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulf", |
|
"middle": [], |
|
"last": "Leser", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of Workshop on Robust Unsupervised and Semisupervised Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philippe Thomas, Ill\u00e9s Solt, Roman Klinger, and Ulf Leser. 2011. Learning protein-protein interaction extraction using distant supervision. In Proceedings of Workshop on Robust Unsupervised and Semisupervised Methods in Natural Language Processing, pages 25-32, Hissar, Bulgaria. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF55": { |
|
"ref_id": "b55", |
|
"title": "A taxonomy, dataset, and classifier for automatic noun compound interpretation", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Tratz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "678--687", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Tratz and Eduard Hovy. 2010. A taxonomy, dataset, and classifier for automatic noun compound interpretation. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguistics, pages 678-687.", |
|
"links": null |
|
}, |
|
"BIBREF56": { |
|
"ref_id": "b56", |
|
"title": "Learning Dependency Structures for Weak Supervision Models", |
|
"authors": [ |
|
{ |
|
"first": "Paroma", |
|
"middle": [], |
|
"last": "Varma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederic", |
|
"middle": [], |
|
"last": "Sala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Ratner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "R\u00e9", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paroma Varma, Frederic Sala, Ann He, Alexander Ratner, and Christopher R\u00e9. 2019. Learning Dependency Structures for Weak Supervision Models.", |
|
"links": null |
|
}, |
|
"BIBREF57": { |
|
"ref_id": "b57", |
|
"title": "CrossWeigh: Training named entity tagger from imperfect annotations", |
|
"authors": [ |
|
{ |
|
"first": "Zihan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingbo", |
|
"middle": [], |
|
"last": "Shang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lihao", |
|
"middle": [], |
|
"last": "Lu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiacheng", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiawei", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5154--5163", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1519" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zihan Wang, Jingbo Shang, Liyuan Liu, Lihao Lu, Jiacheng Liu, and Jiawei Han. 2019. CrossWeigh: Training named entity tagger from imperfect anno- tations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5154-5163, Hong Kong, China. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF58": { |
|
"ref_id": "b58", |
|
"title": "Mariama Drame, Quentin Lhoest, and Alexander M Rush. 2020. HuggingFace's Transformers: State-of-the-art Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R\u00e9mi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joe", |
|
"middle": [], |
|
"last": "Davison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Shleifer", |
|
"suffix": "" |
|
}, |
|
{

"first": "Patrick",

"middle": [],

"last": "von Platen",

"suffix": ""

},

{

"first": "Clara",

"middle": [],

"last": "Ma",

"suffix": ""

},

{

"first": "Yacine",

"middle": [],

"last": "Jernite",

"suffix": ""

},

{

"first": "Julien",

"middle": [],

"last": "Plu",

"suffix": ""

},

{

"first": "Canwen",

"middle": [],

"last": "Xu",

"suffix": ""

},

{

"first": "Teven",

"middle": [

"Le"

],

"last": "Scao",

"suffix": ""

},

{

"first": "Sylvain",

"middle": [],

"last": "Gugger",

"suffix": ""

},

{

"first": "Mariama",

"middle": [],

"last": "Drame",

"suffix": ""

},

{

"first": "Quentin",

"middle": [],

"last": "Lhoest",

"suffix": ""

},

{

"first": "Alexander",

"middle": [

"M"

],

"last": "Rush",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M Rush. 2020. HuggingFace's Transform- ers: State-of-the-art Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF59": { |
|
"ref_id": "b59", |
|
"title": "Autonomously semantifying wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "CIKM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fei Wu and Daniel S. Weld. 2007. Autonomously semantifying wikipedia. In CIKM, page 41-50. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF60": { |
|
"ref_id": "b60", |
|
"title": "Filling knowledge base gaps for distant supervision of relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Grishman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "665--670", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei Xu, Raphael Hoffmann, Le Zhao, and Ralph Grish- man. 2013. Filling knowledge base gaps for distant supervision of relation extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 665-670, Sofia, Bulgaria. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF61": { |
|
"ref_id": "b61", |
|
"title": "Structured relation discovery using generative models", |
|
"authors": [ |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1456--1466", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of the 2011 Con- ference on Empirical Methods in Natural Language Processing, pages 1456-1466, Edinburgh, Scotland, UK. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF62": { |
|
"ref_id": "b62", |
|
"title": "Position-aware attention and supervised data improve slot filling", |
|
"authors": [ |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "35--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 35-45.", |
|
"links": null |
|
}, |
|
"BIBREF63": { |
|
"ref_id": "b63", |
|
"title": "Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels", |
|
"authors": [ |
|
{ |
|
"first": "Zhilu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{

"first": "Mert",

"middle": [

"R"

],

"last": "Sabuncu",

"suffix": ""

}
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhilu Zhang and Mert R Sabuncu. 2018. General- ized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "The figure gives an overview of our system. (a) represents the preprocessed input, given as tensors.(b) resembles the internals of Knodle. The Trainer classes introduced in Section 3.2 handle transformation, denoising and model training. Note that these three steps could be performed subsequently or subsumed in a single training step.Then, (c) shows the output, a trained PyTorch model.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"text": "Summary of data statistics. The average rule hits are computed on the train set. Class ratio describes the amount of positive samples in the test set for binary classification datasets, i.e. data skewedness.", |
|
"html": null, |
|
"content": "<table><tr><td>All datasets are rather simple, but have</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"text": "Results of the classifier training with different denoising methods on the test sets of datasets included in Knodle.", |
|
"html": null, |
|
"content": "<table><tr><td>The neighbors were searched with Approximate Nearest Neighbors (Bernhardsson, 2015) because of computation</td></tr><tr><td>complexity of k-NN search.</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |