{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:20.233785Z"
},
"title": "Exploring Inspiration Sets in a Data Programming Pipeline for Product Moderation",
"authors": [
{
"first": "Justine",
"middle": [],
"last": "Winkler",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Radboud University",
"location": {
"country": "Netherlands"
}
},
"email": ""
},
{
"first": "Simon",
"middle": [],
"last": "Brugman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Radboud University",
"location": {
"country": "Netherlands"
}
},
"email": ""
},
{
"first": "Bas",
"middle": [],
"last": "Van Berkel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Radboud University",
"location": {
"country": "Netherlands"
}
},
"email": ""
},
{
"first": "Martha",
"middle": [],
"last": "Larson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Radboud University",
"location": {
"country": "Netherlands"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We carry out a case study on the use of data programming to create data to train classifiers used for product moderation on a large e-commerce platform. Data programming is a recently-introduced technique that uses human-defined rules to generate training data sets without tedious item-by-item hand labeling. Our study investigates methods for allowing product moderators to quickly modify the rules given their knowledge of the domain and, especially, of textual item descriptions. Our results show promise that moderators can use this approach to steer the training data, making possible fast and close control of classifiers that detect policy violations.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We carry out a case study on the use of data programming to create data to train classifiers used for product moderation on a large e-commerce platform. Data programming is a recently-introduced technique that uses human-defined rules to generate training data sets without tedious item-by-item hand labeling. Our study investigates methods for allowing product moderators to quickly modify the rules given their knowledge of the domain and, especially, of textual item descriptions. Our results show promise that moderators can use this approach to steer the training data, making possible fast and close control of classifiers that detect policy violations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text classifiers play an important role in filtering inappropriate products on e-commerce platforms. Product moderators are dependent on classifiers that have been trained on up-to-date labeled data in order to keep pace with policy changes and new instances of inappropriate products. For example, Amazon had to take fast action to remove offensive T-shirts during the 2020 US election (Bryant, 2020) and overpriced items and fake cures during the COVID-19 pandemic (BBC, 2020) . In this paper, we carry out a case study at a large e-commerce platform. We investigate an approach that allows moderators to rapidly steer the creation of labeled training data, thereby enabling close control of moderation classifiers.",
"cite_spans": [
{
"start": 387,
"end": 401,
"text": "(Bryant, 2020)",
"ref_id": "BIBREF4"
},
{
"start": 467,
"end": 478,
"text": "(BBC, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach makes use of a recently-introduced technique called data programming , which generates classifier training data on the basis of rules that have been specified by domain experts (platform moderators). Data programming eliminates the need to individually hand-label training data points. We propose a feedback loop that selects subsets of data, called inspiration sets, that are used by moderators as the basis for updating an initial or existing set of rules. We investigate whether inspiration sets can be selected in an unsupervised manner, i.e., without ground truth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contribution of our case study is insight into how to support moderators in updating the rules used by a data programming pipeline in a realworld use scenario requiring fast control (i.e., imposing time constraints). Our study is carried out in collaboration with professional moderators at bol.com, a large European e-commerce company. In contrast to our work, most papers on product moderation, such as Arnold et al. (2016) , do not obviously take an inside perspective. Most previous studies of data programming, such as Ehrenberg et al. 2016, have looked at user control, but not at fast control, i.e., the ability to update rules quickly in order to steer the training data.",
"cite_spans": [
{
"start": 409,
"end": 429,
"text": "Arnold et al. (2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because of the sensitive nature of the work of the platform moderators, our case study is written with a relatively high level of abstraction. We cannot reveal the exact statistics of inappropriate items on the platform. The rules formulated by the moderators are largely based on keywords occurring in the text of product descriptions, but it is not possible to state them exactly. Nonetheless, we find that we are able to report enough information to reveal the potential of inspiration sets for fast control of inappropriate products on e-commerce platforms. This paper is based on a collaborative project with bol.com. Further analysis and experimental results are available in the resulting thesis (Winkler, 2020) .",
"cite_spans": [
{
"start": 703,
"end": 718,
"text": "(Winkler, 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most work on product moderation (Martin et al., 2018; Xu et al., 2019; Mackey and Kalyanam, 2017) focuses on products sold on social media. In contrast, we study an e-commerce platform from the inside. Like social media moderation, we face the challenge of lexical variation of keywords, cf. Chancellor et al. (2016) .",
"cite_spans": [
{
"start": 32,
"end": 53,
"text": "(Martin et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 54,
"end": 70,
"text": "Xu et al., 2019;",
"ref_id": "BIBREF27"
},
{
"start": 71,
"end": 97,
"text": "Mackey and Kalyanam, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 292,
"end": 316,
"text": "Chancellor et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our study is related to work investigating applications of data programming to a specific problem. Such work includes examples from the medical domain (Callahan et al., 2019; Dutta and Saha, 2019; Dutta et al., 2020; Saab et al., 2019 Saab et al., , 2020 , multi-task learning (Ratner et al., 2018 (Ratner et al., , 2019a , information extraction (Ehrenberg et al., 2016) , and learning discourse structure (Badene et al., 2019) . Like our work, such work often adjusts the Snorkel framework (Ratner et al., 2017) for the task at hand. Previous work has proposed a variety of methods for giving users (who are in our case the product moderators) control over classifiers by making use of a pipeline that allows them to provide feedback about training data labels and classification results. In WeSAL (Nashaat et al., 2018 (Nashaat et al., , 2020 user feedback improves the labels that sets of rules assign to data points. In contrast, our focus is on feedback that allows moderators to improve the rules directly. In this respect, our work is related to DDLite (Ehrenberg et al., 2016), which was, to our knowledge, the first to discuss how rules in a data programming pipeline can be improved using sampled data as feedback. Socratic Learning (Varma et al., 2017a,b) considered the issue of users implicitly focusing on subsets of data when they formulate rules, limiting the ability of the data programming pipeline to generalize to data outside of these subsets.",
"cite_spans": [
{
"start": 151,
"end": 174,
"text": "(Callahan et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 175,
"end": 196,
"text": "Dutta and Saha, 2019;",
"ref_id": "BIBREF8"
},
{
"start": 197,
"end": 216,
"text": "Dutta et al., 2020;",
"ref_id": "BIBREF9"
},
{
"start": 217,
"end": 234,
"text": "Saab et al., 2019",
"ref_id": "BIBREF21"
},
{
"start": 235,
"end": 254,
"text": "Saab et al., , 2020",
"ref_id": "BIBREF22"
},
{
"start": 277,
"end": 297,
"text": "(Ratner et al., 2018",
"ref_id": "BIBREF18"
},
{
"start": 298,
"end": 321,
"text": "(Ratner et al., , 2019a",
"ref_id": "BIBREF19"
},
{
"start": 347,
"end": 371,
"text": "(Ehrenberg et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 407,
"end": 428,
"text": "(Badene et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 492,
"end": 513,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 800,
"end": 821,
"text": "(Nashaat et al., 2018",
"ref_id": "BIBREF15"
},
{
"start": 822,
"end": 845,
"text": "(Nashaat et al., , 2020",
"ref_id": "BIBREF14"
},
{
"start": 1244,
"end": 1267,
"text": "(Varma et al., 2017a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We are working under time-constrained conditions. There are two constraints. First, our moderators are given a limited amount of time to formulate the initial rules. They formulate the rules themselves based solely on their domain expertise and experience, which allows them to work quickly. In contrast, in work such as Ehrenberg et al. 2016and Ratner et al. (2018) , users consult labeled data to formulate the initial rules. Second, our moderators have limited time to revise the initial rules. In this step, they consult data in the form of inspiration sets. Wu et al. (2018) investigate time constraints, but focuses on supervised feedback, whereas we also investigate unsupervised approaches.",
"cite_spans": [
{
"start": 346,
"end": 366,
"text": "Ratner et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 563,
"end": 579,
"text": "Wu et al. (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We consider the work of Cohen-Wang et al. (2019) to be the existing work closest to ours. This work investigates intelligent ways of sampling data points for rule improvement. Our inspiration sets are based on these strategies. A key difference is that Cohen-Wang et al. 2019 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we describe the data programming pipeline and also our experiment with inspiration sets, which investigates the potential for fast control of training data for moderation classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "The platform policy of the company we study has five dimensions. It excludes products (1) that are illegal (2) whose production or consumption causes harm (3) that do not match customer expectations (4) that technically fall outside of what the platform can handle (5) that project hate or discrimination. Each dimension contains concrete categories. For example, under (2) there is a category (\"single-use plastic\"), which contains single-use plastic cups, straws, and cotton swabs that are excluded based on European guidelines. Each of the categories is monitored independently using a classifier, which must detect not only the re-occurring items, but also novel items that are in violation of the platform policy. In this work, we select six typical categories to study: fur, illegal wildlife related, magnetic balls (small enough to be swallowed by children), weapon-grade knives, smoking-drug-related, and single-use plastic. Figure 1 shows our data programming pipeline. When moderating a product category, product moderators first carry out a \"scope\" step that identifies the products related to that category (cf. scoping query). Then, they carry out a \"scan\" step that identifies products that violate the policy. The goal of our study is to investigate the usefulness of this pipeline for quickly generating training data to train a classifier that will support the product moderators in carrying out the \"scan\" step, with a focus on understanding the potential of inspiration sets. Data programming ) is a method that leverages multiple weak supervision signals provided by people who are experts in a domain. The signals take the form of rules, expressed in the form of labeling functions (LFs). Given a training data point, an LF either returns a suggested label (0 for \"appropriate\" or 1 for \"inappropriate\") or abstains, meaning that it assigns no label. In our study, LFs involve the content of product metadata and keywords in the textual descriptions of products, e.g., |IF brand == 'brand123' THEN inappropriate ELSE abstain|. In practice most LFs return only (0, abstain) or (1, abstain). The LFs are applied to the data that was selected in the \"scope\" step (cf. \"Unlabeled data\" in Figure 1 ) to generate a label matrix in which each data point may have multiple, contradictory labels.",
"cite_spans": [],
"ref_spans": [
{
"start": 933,
"end": 941,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 2206,
"end": 2214,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Policy-based Monitoring Categories",
"sec_num": "3.1"
},
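To make the rule format concrete, the following is a minimal sketch of how such keyword- and metadata-based rules can be written as Snorkel-style labeling functions. The field names (brand, description), the brand value, and the keyword are hypothetical placeholders rather than the moderators' actual (confidential) rules, and Snorkel encodes abstention as -1 rather than as a separate symbol.

```python
# Minimal sketch of metadata- and keyword-based labeling functions in Snorkel style.
# Field names and values are hypothetical; the real rules are confidential.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier

ABSTAIN = -1        # Snorkel's convention for "no label"
APPROPRIATE = 0
INAPPROPRIATE = 1

@labeling_function()
def lf_blocked_brand(x):
    # IF brand == 'brand123' THEN inappropriate ELSE abstain
    return INAPPROPRIATE if x.brand == "brand123" else ABSTAIN

@labeling_function()
def lf_keyword_in_description(x):
    # Keyword rule over the textual product description.
    return INAPPROPRIATE if "some_keyword" in x.description.lower() else ABSTAIN

# Tiny stand-in for the scoped, unlabeled products; in the pipeline this would
# be the output of the "scope" step.
df_scoped = pd.DataFrame([
    {"brand": "brand123", "description": "Some product text"},
    {"brand": "other", "description": "Contains some_keyword here"},
])

# The label matrix: one row per product, one column per LF.
L_train = PandasLFApplier([lf_blocked_brand, lf_keyword_in_description]).apply(df_scoped)
```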
{
"text": "In our study, moderators were asked to create rules based on their knowledge of the product categories and their moderation experience. Note that the same moderator was responsible for one category throughout our experiment. They had a limited amount of time (60 min. per category). The time limits in our study were determined in consultation with bol.com's product quality team to simulate real-world settings. This led to an initial set of LFs for each category (number of LFs per category: fur 14, illegal wildlife related 6, magnetic balls 5, weapon-grade knives 5, smoking-drug-related 15, single-use plastic 13).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Programming",
"sec_num": "3.2"
},
{
"text": "The label matrix created by the rules is then transformed into labeled data. demonstrate that provided a fixed number LFs, a probabilistic labeling model is able to recover a set of labels and corresponding probabilities that can be used to train a classifier (cf. \"Training data\" and \"Classifier\" in Figure 1 ). Snorkel (Ratner et al., 2017) is the first end-to-end system that applies the data programming paradigm. Our case study builds on Snorkel. (More technical details of our setup are in Appendix A.)",
"cite_spans": [
{
"start": 321,
"end": 342,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 301,
"end": 309,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data Programming",
"sec_num": "3.2"
},
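A minimal sketch of the label-model step follows; it assumes L_train is the label matrix produced by applying the LFs to the scoped data (as in the sketch above) and uses illustrative hyperparameters, not the study's settings. The import path matches current Snorkel 0.9.x releases and may differ slightly in older versions.

```python
# Sketch of turning the label matrix into probabilistic training labels
# with Snorkel's LabelModel (binary setting: appropriate vs. inappropriate).
from snorkel.labeling.model import LabelModel

label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train, n_epochs=500, seed=42)   # illustrative settings

# Probabilistic labels for each training point; column 1 is P(inappropriate).
probs_train = label_model.predict_proba(L_train)
```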
{
"text": "We test three different ways of sampling data points to create the inspiration sets consisting of products (cf. Figure 1, bottom) . These sets are shown to the moderators to allow them to revise the rules. Set 1: Abstain-based strategy Randomly drawn from training data not yet covered by an LF. Set 2: Disagreement-based strategy Randomly drawn from training data on which LFs disagreed. Set 3: Classifier-based strategy Development data points with largest classifier error. Set 1 and Set 2 are loosely based on strategies introduced by Cohen-Wang et al. (2019). These strategies are particularly interesting for a real-world setting because they are unsupervised, meaning that they are based on information included in the label matrix and do not require ground truth or classifier training. Set 3 is a supervised set. It provides product moderators with information about errors that are made by classifiers. This strategy is touched upon, but not implemented, by Cohen-Wang et al. (2019) . Recall that Cohen-Wang et al. (2019) uses a simulated human expert, whereas in our experiment, human domain experts inspect the inspiration sets and revise the rules. We used a sim- ple logistic regression classifier for the supervision of Set 3 (see Appendix A.3 for more details). Each inspiration set contains the number of data points available, up to a maximum of 100. The moderators had a limited amount of time (30 min. per set) to inspect the inspiration sets and add, remove, or change rules in their initial set of rules. Note that in our setting, each inspiration set was drawn once and not updated after the moderator changed one rule.",
"cite_spans": [
{
"start": 968,
"end": 992,
"text": "Cohen-Wang et al. (2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 112,
"end": 129,
"text": "Figure 1, bottom)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Inspiration Sets",
"sec_num": "3.3"
},
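The three sampling strategies can be expressed directly in terms of the label matrix; the sketch below is a minimal illustration that continues from the earlier sketches, assumes Snorkel's -1 abstain convention in L_train, and assumes that classifier probabilities (probs_dev) and gold labels (y_dev) are available on the development set for Set 3. Variable names and the random seed are illustrative.

```python
# Sketch of the three inspiration-set sampling strategies.
import numpy as np

rng = np.random.default_rng(0)
MAX_SIZE = 100   # each set contains at most 100 data points

# Set 1 (abstain-based): training points not covered by any LF.
uncovered = np.where((L_train == -1).all(axis=1))[0]
set1 = rng.choice(uncovered, size=min(MAX_SIZE, len(uncovered)), replace=False)

# Set 2 (disagreement-based): points where the non-abstaining LFs disagree.
def lfs_disagree(row):
    votes = row[row != -1]
    return np.unique(votes).size > 1

disagreeing = np.where(np.apply_along_axis(lfs_disagree, 1, L_train))[0]
set2 = rng.choice(disagreeing, size=min(MAX_SIZE, len(disagreeing)), replace=False)

# Set 3 (classifier-based): development points with the largest classifier error,
# measured here as the gap between P(inappropriate) and the gold label.
errors = np.abs(probs_dev - y_dev)
set3 = np.argsort(-errors)[:MAX_SIZE]
```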
{
"text": "We analyze how the inspiration sets impact the quality of our data. Table 1 summarizes the data that we use. The ground truth was created by our domain experts. Table 2 presents our results in terms of data quality. Results are reported using the F 2 measure due to the importance of recall in our use case. Data points whose \"inappropriate\" label is generated as having a probability > 0.5 are considered positive. Note that scores in Table 2 do not directly reflect the ultimate performance of the classifier, which to a certain extent can leverage data with low F 2 scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 161,
"end": 168,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 436,
"end": 443,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
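The data-quality numbers in Table 2 correspond to a recall-weighted F-measure computed on gold test labels after thresholding the label model's probabilities at 0.5. A minimal sketch follows; L_test and y_test (the gold test labels) are assumptions, and label_model is the fitted model from the earlier sketch.

```python
# Sketch of the evaluation used in Table 2: F2 of the label model's hard labels
# on the test set, with "inappropriate" predicted when P(y=1) > 0.5.
from sklearn.metrics import fbeta_score

probs_test = label_model.predict_proba(L_test)[:, 1]   # P(inappropriate)
preds_test = (probs_test > 0.5).astype(int)
f2 = fbeta_score(y_test, preds_test, beta=2)            # beta=2 weights recall higher
print(f"F2 = {f2:.2f}")
```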
{
"text": "Our results suggest two findings that have, to our knowledge, not been previously documented. First, professional content moderators do not necessarily need labeled sample data to write rules for a data programming pipeline, but instead come quite far relying only on domain knowledge and experience (cf. \"initial\" in Table 2 ). Second, when revising their initial set of rules, moderators do not necessarily need an inspiration set created using supervision. Instead, a 30-min. session with an unsupervised inspiration set (Set 1 or Set 2) can improve data quality. The exception is fur where F 2 is already 0.8, and inspiration sets make the data slightly worse. The category knives starts out with extremely low quality data, and inspiration sets do not help much, except for a small, but expensive boost by Set 3, our supervised set. The moderator had only basic experience with this category.",
"cite_spans": [],
"ref_spans": [
{
"start": 318,
"end": 325,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "We also found that for most categories, a considerable amount of training data (31-56%) received only abstains (see Appendix B for more details). This observation is consistent with previous work, e.g., that of Cohen-Wang et al. 2019, which has noted that LF sets rarely reach complete coverage. In general, a small number of rules tend to cover a large portion of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "The majority of rules had a low precision, and a small number of rules had high recall. Possible reasons are that product moderators tried not to miss out on inappropriate products, or that they had set of specific data points in mind during LF definition, as suggested by Varma et al. (2017a) . We also noticed that moderators added and changed, but did not delete rules. In fact, we only observed a single case of a rule being deleted. More research is necessary to understand if this reflects high confidence in the initial choices, or a default thinking pattern, as studied by Adams et al. (2021) . Finally, we observe it is important not to assume that each newly added rule yields improvement: rule interactions are also important. A more detailed analysis of the changes brought about by the inspiration sets for two representative cases is included in Appendix C.",
"cite_spans": [
{
"start": 273,
"end": 293,
"text": "Varma et al. (2017a)",
"ref_id": "BIBREF23"
},
{
"start": 581,
"end": 600,
"text": "Adams et al. (2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Our case study has shown our data programming pipeline can generate labeled data for moderation classifiers in a fraction of the time needed for hand labeling (90 min. vs. a week or more of effort). We have seen that moderators can create effective rules based on their domain knowledge and experience, plus a short exposure to an unsupervised inspiration set. Labeling data by hand in order to create supervised inspiration sets may not be worth the effort. Our observations suggest that it is important that moderators not only write rules, but also continue moderating so that they can gain expertise and also be able to update rules quickly in response to changes in the domain, i.e., a new type of offensive clothing items, as in Bryant (2020) .",
"cite_spans": [
{
"start": 735,
"end": 748,
"text": "Bryant (2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "5"
},
{
"text": "We hope that our work will inspire research on data programming in domains in which fast response to inappropriate products or content is needed. Future research could seek to understand the ability of moderators to predict the interaction of rules and why they seem hesitant to discard rules once they have created them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Outlook",
"sec_num": "5"
},
{
"text": "The technical details of the setup we used are as follows: we make use of the official implementation of the Snorkel system. This implementation consolidates work from various publications (Ratner et al., 2017 (Ratner et al., , 2019a even though the repository name is \"snorkel\". We used version 0.9.0 1 . There, the label model is optimized using Stochastic Gradient Descent (SGD) on the matrix-completion formulation as in (Ratner et al., 2019a) as opposed to interleaving SGD and Gibbs sampling in (Ratner et al., 2017) . In general in data programming, the label model needs two inputs: the dependency structure of the LFs and the class balance of the dependent variable (i.e. p(Y )). By default, this implementations assumes the LFs to be conditionally independent and that the class balance is uniformly distributed.",
"cite_spans": [
{
"start": 189,
"end": 209,
"text": "(Ratner et al., 2017",
"ref_id": "BIBREF16"
},
{
"start": 210,
"end": 233,
"text": "(Ratner et al., , 2019a",
"ref_id": "BIBREF19"
},
{
"start": 425,
"end": 447,
"text": "(Ratner et al., 2019a)",
"ref_id": "BIBREF19"
},
{
"start": 501,
"end": 522,
"text": "(Ratner et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Snorkel",
"sec_num": null
},
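As a concrete illustration of the second input mentioned above, the sketch below shows how Snorkel 0.9.x exposes the class balance when fitting the label model; by default no balance is passed and the LFs are treated as conditionally independent. The balance values and training hyperparameters shown are placeholders, not the values used in the study.

```python
# Sketch: passing an explicit class balance p(Y) to the label model.
from snorkel.labeling.model import LabelModel

label_model = LabelModel(cardinality=2)
label_model.fit(
    L_train,
    class_balance=[0.9, 0.1],   # hypothetical p(appropriate), p(inappropriate)
    n_epochs=500,
    lr=0.01,
    seed=42,
)
```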
{
"text": "For each category of inappropriate items, the product moderator that was specialized in that category labeled the development, validation and test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Gold labels",
"sec_num": null
},
{
"text": "For each category of inappropriate items, we trained a binary classifier. In line with the official Snorkel introduction tutorial 2 , we utilized a simple Logistic Regression classifier. We used categorical cross-entropy loss and an Adam optimizer with a learning rate of 0.01. Note that in this work, we use the classifier for selecting the items in the inspiration Set 3. More details on the whole pipeline can be found in (Winkler, 2020) .",
"cite_spans": [
{
"start": 425,
"end": 440,
"text": "(Winkler, 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Classifier",
"sec_num": null
},
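A minimal sketch of a classifier matching this description (logistic regression trained with categorical cross-entropy and Adam at learning rate 0.01) is given below. The feature matrix X_train, the probabilistic labels probs_train, and the epoch and batch-size values are assumptions; the actual feature representation used in the study is not specified in this appendix.

```python
# Sketch of the Set-3 classifier: a softmax logistic regression trained on the
# label model's probabilistic labels, in the spirit of the Snorkel tutorial.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation="softmax", input_shape=(X_train.shape[1],))
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
    loss="categorical_crossentropy",
)
# probs_train has shape (n_samples, 2), matching the categorical cross-entropy target.
model.fit(X_train, probs_train, epochs=50, batch_size=64, verbose=0)
```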
{
"text": "In our experiments, inspiration sets inspired the product moderators to adjust their initial set of rules. We translated these rules into LFs in Python. Figure 2 illustrates the impact of the changes to the LFs across all categories of inappropriate items. The leftmost bar of each group represents the coverage of the initial LF sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 162,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "B Properties of the label matrix",
"sec_num": null
},
{
"text": "In general, we notice that inspiration sets have an impact on the coverage of the LFs, but that they fall far short from allowing us to achieve full coverage. We also notice, however, that there is a general trend towards inspiration sets increasing the coverage, reflected by a decrease in the fraction of the data set that is assigned 0 labels. This happened in most categories with Set 1 and Set 3 and in half of the categories with Set 2. The strongest coverage increase happened using Set 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Properties of the label matrix",
"sec_num": null
},
{
"text": "After the adjustments, for most categories, the LFs within each set seemed to be more coordinated with respect to the data points that they labeled. This can be seen in the increase in the percentage of each data set with multiple labels per sample. However, note that overall, most data points that received a label, received a label from only one LF. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Properties of the label matrix",
"sec_num": null
},
{
"text": "In the main paper, we mentioned several observations we made regarding the sets of rules that were created by the professional moderators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 A small number of rules tend to cover a large portion of the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 Moderators added and changed, but did not delete rules (except one rule upon one occasion).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 We cannot not assume that each newly added rule yields improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "We based these observations on characteristics that we computed on the training and validation sets in each category. The statistics of these training and validation sets are provided in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 187,
"end": 194,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "After translating the rules into LFs, we computed the following characteristics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 LF index: a running index of each rule (Labeling Function) in the set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 Change indicates whether the rules were adjusted (A), newly added (N) or not changed (/) as a result of considering the inspiration set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 Polarity: the polarity that the rule assigns to the training set data points. If the value is [0], then the rule either assigned \"appropriate\" or abstained. If the value is [1], then the rule either assigned \"inappropriate\" or abstained. If the value is then the rule always abstained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 Coverage: the fraction of the training set data points to which the LF assigned a label (i.e., did not abstain).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 Overlaps: the fraction of the training set on which the rule assigned a label and at least one other rule did as well (i.e., the rule and at least one other rule did not abstain).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 Conflicts: the fraction of the training set on which the labels suggested by multiple rules disagree. : Number of data points in our training and validation sets. These were the data sets on which we computed the LF characteristics. For convenience, we repeat the sizes of the training data here. Note that the validation sets are disjoint from the development and test sets used in the main paper. For these validation sets, the number of points with the positive label, i.e., \"inappropriate\", is in parentheses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "\u2022 % gain in F 2 : the relative improvement in the F 2 score of the labeled data generated by the label model contributed by the individual rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "Note that Polarity, Coverage, Rules, and Overlap are all calculated on the training data set, and \"% gain in F 2 \" is calculated on the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
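Most of the characteristics listed above (Polarity, Coverage, Overlaps, Conflicts) match the per-LF summary that Snorkel computes out of the box; a minimal sketch follows, assuming lfs is the list of labeling functions and L_train the label matrix from the earlier sketches. The "% gain in F2" column is not part of this summary and has to be computed separately on the validation set.

```python
# Sketch: per-LF characteristics via Snorkel's built-in analysis.
from snorkel.labeling import LFAnalysis

summary = LFAnalysis(L=L_train, lfs=lfs).lf_summary()
# Columns include Polarity, Coverage, Overlaps, and Conflicts for each LF.
print(summary)
```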
{
"text": "We chose two representative categories that show the variation of the gain, and provide example analyses for each. The category magnetic balls is in Table 3 and single-use plastic is in Table 4 . The analysis uses the rules adjusted after consulting the inspiration Set 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 156,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 186,
"end": 193,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "C Individual rule characteristics",
"sec_num": null
},
{
"text": "https://github.com/snorkel-team/ snorkel/releases/tag/v0.9.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/snorkel-team/ snorkel-tutorials/blob/ 93fc77718b608c5709d4eb8b90b7de7683ba4c15/ spam/01_spam_tutorial.ipynb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "People systematically overlook subtractive changes",
"authors": [
{
"first": "Gabrielle",
"middle": [
"S"
],
"last": "Adams",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"A"
],
"last": "Converse",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"H"
],
"last": "Hales",
"suffix": ""
},
{
"first": "Leidy",
"middle": [
"E"
],
"last": "Klotz",
"suffix": ""
}
],
"year": 2021,
"venue": "Nature",
"volume": "592",
"issue": "7853",
"pages": "258--261",
"other_ids": {
"DOI": [
"10.1038/s41586-021-03380-y"
]
},
"num": null,
"urls": [],
"raw_text": "Gabrielle S. Adams, Benjamin A. Converse, Andrew H. Hales, and Leidy E. Klotz. 2021. People sys- tematically overlook subtractive changes. Nature, 592(7853):258-261.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semi-automatic identification of counterfeit offers in online shopping platforms",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Arnold",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Wartner",
"suffix": ""
},
{
"first": "Erhard",
"middle": [],
"last": "Rahm",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Internet Commerce",
"volume": "15",
"issue": "1",
"pages": "59--75",
"other_ids": {
"DOI": [
"10.1080/15332861.2015.1121459"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Arnold, Christian Wartner, and Erhard Rahm. 2016. Semi-automatic identification of counterfeit offers in online shopping platforms. Journal of In- ternet Commerce, 15(1):59-75.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Data programming for learning discourse structure",
"authors": [
{
"first": "Sonia",
"middle": [],
"last": "Badene",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Jean-Pierre",
"middle": [],
"last": "Lorr\u00e9",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "640--645",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1061"
]
},
"num": null,
"urls": [],
"raw_text": "Sonia Badene, Kate Thompson, Jean-Pierre Lorr\u00e9, and Nicholas Asher. 2019. Data programming for learn- ing discourse structure. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 640-645.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Coronavirus: Amazon removes overpriced goods and fake cures",
"authors": [],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "BBC. 2020. Coronavirus: Amazon removes over- priced goods and fake cures. 28 February 2020 (Ac- cessed 6 May 2021).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Amazon removes shirts with derogatory slogan about Kamala Harris. The Guardian",
"authors": [
{
"first": "Miranda",
"middle": [],
"last": "Bryant",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miranda Bryant. 2020. Amazon removes shirts with derogatory slogan about Kamala Harris. The Guardian. 19 Aug 2020 (Accessed 6 May 2021).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Medical device surveillance with electronic health records",
"authors": [
{
"first": "Alison",
"middle": [],
"last": "Callahan",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"A"
],
"last": "Fries",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "James",
"middle": [
"I"
],
"last": "Huddleston",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [
"J"
],
"last": "Giori",
"suffix": ""
},
{
"first": "Scott",
"middle": [
"L"
],
"last": "Delp",
"suffix": ""
},
{
"first": "Nigam Haresh",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41746-019-0168-z"
]
},
"num": null,
"urls": [],
"raw_text": "Alison Callahan, Jason A. Fries, Christopher R\u00e9, James I. Huddleston, Nicholas J. Giori, Scott L. Delp, and Nigam Haresh Shah. 2019. Medical de- vice surveillance with electronic health records. npj Digital Medicine, 2(94).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "#thyghgapp: Instagram content moderation and lexical variation in pro-eating disorder communities",
"authors": [
{
"first": "Stevie",
"middle": [],
"last": "Chancellor",
"suffix": ""
},
{
"first": "Jessica",
"middle": [
"Annette"
],
"last": "Pater",
"suffix": ""
},
{
"first": "Trustin",
"middle": [],
"last": "Clear",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Munmun De",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing",
"volume": "",
"issue": "",
"pages": "1201--1213",
"other_ids": {
"DOI": [
"10.1145/2818048.2819963"
]
},
"num": null,
"urls": [],
"raw_text": "Stevie Chancellor, Jessica Annette Pater, Trustin Clear, Eric Gilbert, and Munmun De Choudhury. 2016. #thyghgapp: Instagram content moderation and lex- ical variation in pro-eating disorder communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, pages 1201-1213.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Interactive programmatic labeling for weak supervision",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Cohen-Wang",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mussmann",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Workshop on Data Collection, Curation, and Labeling for Mining and Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Cohen-Wang, Stephen Mussmann, Alexan- der Ratner, and Christopher R\u00e9. 2019. Interactive programmatic labeling for weak supervision. In Workshop on Data Collection, Curation, and Label- ing for Mining and Learning.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A weak supervision technique with a generative model for improved gene clustering",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Dutta",
"suffix": ""
},
{
"first": "Sriparna",
"middle": [],
"last": "Saha",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Congress on Evolutionary Computation",
"volume": "",
"issue": "",
"pages": "2521--2528",
"other_ids": {
"DOI": [
"10.1109/CEC.2019.8790052"
]
},
"num": null,
"urls": [],
"raw_text": "Pratik Dutta and Sriparna Saha. 2019. A weak supervi- sion technique with a generative model for improved gene clustering. In 2019 IEEE Congress on Evolu- tionary Computation, pages 2521-2528.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A protein interaction information-based generative model for enhancing gene clustering",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Dutta",
"suffix": ""
},
{
"first": "Sriparna",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Sanket",
"middle": [],
"last": "Pai",
"suffix": ""
},
{
"first": "Aviral",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "Scientific Reports",
"volume": "",
"issue": "665",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41598-020-57437-5"
]
},
"num": null,
"urls": [],
"raw_text": "Pratik Dutta, Sriparna Saha, Sanket Pai, and Aviral Ku- mar. 2020. A protein interaction information-based generative model for enhancing gene clustering. Sci- entific Reports, 10(665).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Data programming with DDLite: Putting humans in a different part of the loop",
"authors": [
{
"first": "R",
"middle": [],
"last": "Henry",
"suffix": ""
},
{
"first": "Jaeho",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"A"
],
"last": "Ratner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Fries",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Human-In-the-Loop Data Analytics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/2939502.2939515"
]
},
"num": null,
"urls": [],
"raw_text": "Henry R. Ehrenberg, Jaeho Shin, Alexander Ratner, Ja- son A. Fries, and Christopher R\u00e9. 2016. Data pro- gramming with DDLite: Putting humans in a differ- ent part of the loop. In Proceedings of the Workshop on Human-In-the-Loop Data Analytics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Weakly supervised classification of aortic valve malformations using unlabeled cardiac MRI sequences",
"authors": [
{
"first": "Jason",
"middle": [
"A"
],
"last": "Fries",
"suffix": ""
},
{
"first": "Paroma",
"middle": [],
"last": "Varma",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"S"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Heliodoro",
"middle": [],
"last": "Tejeda",
"suffix": ""
},
{
"first": "Priyanka",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Chubb",
"suffix": ""
},
{
"first": "Shiraz",
"middle": [],
"last": "Maskatia",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Fiterau",
"suffix": ""
}
],
"year": 2019,
"venue": "Nature Communications",
"volume": "",
"issue": "3111",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41467-019-11012-3"
]
},
"num": null,
"urls": [],
"raw_text": "Jason A. Fries, Paroma Varma, Vincent S. Chen, Ke Xiao, Heliodoro Tejeda, Priyanka Saha, Jared Dunnmon, Henry Chubb, Shiraz Maskatia, Madalina Fiterau, et al. 2019. Weakly supervised classification of aortic valve malformations us- ing unlabeled cardiac MRI sequences. Nature Communications, 10(3111).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Detection of illicit online sales of fentanyls via Twitter",
"authors": [
{
"first": "Tim",
"middle": [
"K"
],
"last": "Mackey",
"suffix": ""
},
{
"first": "Janani",
"middle": [],
"last": "Kalyanam",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.12688/f1000research.12914.1"
]
},
"num": null,
"urls": [],
"raw_text": "Tim K. Mackey and Janani Kalyanam. 2017. Detec- tion of illicit online sales of fentanyls via Twitter. F1000Research, 6:1937.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Trade in wild-sourced African grey parrots: Insights via social media",
"authors": [
{
"first": "O",
"middle": [],
"last": "Rowan",
"suffix": ""
},
{
"first": "Cristiana",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Neil",
"middle": [
"C"
],
"last": "Senni",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "D'cruze",
"suffix": ""
}
],
"year": 2018,
"venue": "Global Ecology and Conservation",
"volume": "15",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1016/j.gecco.2018.e00429"
]
},
"num": null,
"urls": [],
"raw_text": "Rowan O. Martin, Cristiana Senni, and Neil C. D'Cruze. 2018. Trade in wild-sourced African grey parrots: Insights via social media. Global Ecology and Conservation, 15:e00429.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WeSAL: Applying active supervision to find high-quality labels at industrial scale",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Nashaat",
"suffix": ""
},
{
"first": "Aindrila",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Shaikh",
"middle": [],
"last": "Quader",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 53rd Hawaii International Conference on System Sciences",
"volume": "",
"issue": "",
"pages": "219--228",
"other_ids": {
"DOI": [
"10.24251/HICSS.2020.028"
]
},
"num": null,
"urls": [],
"raw_text": "Mona Nashaat, Aindrila Ghosh, James Miller, and Shaikh Quader. 2020. WeSAL: Applying active supervision to find high-quality labels at industrial scale. In Proceedings of the 53rd Hawaii Interna- tional Conference on System Sciences, pages 219- 228.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Hybridization of active learning and data programming for labeling large industrial datasets",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Nashaat",
"suffix": ""
},
{
"first": "Aindrila",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Shaikh",
"middle": [],
"last": "Quader",
"suffix": ""
},
{
"first": "Chad",
"middle": [],
"last": "Marston",
"suffix": ""
},
{
"first": "Jean-Francois",
"middle": [],
"last": "Puget",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Big Data",
"volume": "",
"issue": "",
"pages": "46--55",
"other_ids": {
"DOI": [
"10.1109/BigData.2018.8622459"
]
},
"num": null,
"urls": [],
"raw_text": "Mona Nashaat, Aindrila Ghosh, James Miller, Shaikh Quader, Chad Marston, and Jean-Francois Puget. 2018. Hybridization of active learning and data pro- gramming for labeling large industrial datasets. In 2018 IEEE International Conference on Big Data, pages 46-55.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Snorkel: Rapid training data creation with weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Stephen",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Bach",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Ehrenberg",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "Fries",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Very Large Data Bases Endowment",
"volume": "11",
"issue": "",
"pages": "269--282",
"other_ids": {
"DOI": [
"10.14778/3157794.3157797"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher R\u00e9. 2017. Snorkel: Rapid training data creation with weak su- pervision. In Proceedings of the Very Large Data Bases Endowment, volume 11, pages 269-282.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Data programming: Creating large training sets, quickly",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Sen",
"middle": [],
"last": "De Sa",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Selsam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "3567--3575",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/pdf/10.5555/3157382.3157497"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Christopher M. De Sa, Sen Wu, Daniel Selsam, and Christopher R\u00e9. 2016. Data pro- gramming: Creating large training sets, quickly. In Advances in Neural Information Processing Systems, volume 29, pages 3567-3575.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Snorkel metal: Weak supervision for multi-task learning",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Goldman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3209889.3209898"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Braden Hancock, Jared Dunnmon, Roger Goldman, and Christopher R\u00e9. 2018. Snorkel metal: Weak supervision for multi-task learning. In Proceedings of the Second Workshop on Data Man- agement for End-To-End Machine Learning.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Training complex models with multi-task weak supervision",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Frederic",
"middle": [],
"last": "Sala",
"suffix": ""
},
{
"first": "Shreyash",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "4763--4771",
"other_ids": {
"DOI": [
"10.1609/aaai.v33i01.33014763"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, and Christopher R\u00e9. 2019a. Training complex models with multi-task weak supervision. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 4763-4771.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The role of massively multi-task and weak supervision in software 2.0",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Conference on Innovative Data Systems Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Ratner, Braden Hancock, and Christopher R\u00e9. 2019b. The role of massively multi-task and weak supervision in software 2.0. In Proceedings of the Conference on Innovative Data Systems Re- search.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Doubly weak supervision of deep learning models for Head CT",
"authors": [
{
"first": "Khaled",
"middle": [],
"last": "Saab",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Goldman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Ratner",
"suffix": ""
},
{
"first": "Hersh",
"middle": [],
"last": "Sagreiya",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Medical Image Computing and Computer-Assisted Intervention",
"volume": "",
"issue": "",
"pages": "811--819",
"other_ids": {
"DOI": [
"10.1007/978-3-030-32248-9_90"
]
},
"num": null,
"urls": [],
"raw_text": "Khaled Saab, Jared Dunnmon, Roger Goldman, Alexander Ratner, Hersh Sagreiya, Christopher R\u00e9, and Daniel Rubin. 2019. Doubly weak supervision of deep learning models for Head CT. In Interna- tional Conference on Medical Image Computing and Computer-Assisted Intervention, pages 811-819.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Weak supervision as an efficient approach for automated seizure detection in electroencephalography",
"authors": [
{
"first": "Khaled",
"middle": [],
"last": "Kamal Saab",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Dunnmon",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"L"
],
"last": "Rubin",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Lee-Messer",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1038/s41746-020-0264-0"
]
},
"num": null,
"urls": [],
"raw_text": "Khaled Kamal Saab, Jared Dunnmon, Christopher R\u00e9, Daniel L. Rubin, and Christopher Lee-Messer. 2020. Weak supervision as an efficient approach for auto- mated seizure detection in electroencephalography. npj Digital Medicine, 3(59).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Socratic learning: Correcting misspecified generative models using discriminative models",
"authors": [
{
"first": "Paroma",
"middle": [],
"last": "Varma",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Iter",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Rose",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"De"
],
"last": "Sa",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.08123"
]
},
"num": null,
"urls": [],
"raw_text": "Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa, and Christopher R\u00e9. 2017a. So- cratic learning: Correcting misspecified generative models using discriminative models. arXiv preprint arXiv:1610.08123.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Flipper: A systematic approach to debugging training sets",
"authors": [
{
"first": "Paroma",
"middle": [],
"last": "Varma",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Iter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"De"
],
"last": "Sa",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3077257.3077263"
]
},
"num": null,
"urls": [],
"raw_text": "Paroma Varma, Dan Iter, Christopher De Sa, and Christopher R\u00e9. 2017b. Flipper: A systematic ap- proach to debugging training sets. In Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Snorkeling for beginners: Applying data programming to product moderation in e-commerce",
"authors": [
{
"first": "Justine",
"middle": [],
"last": "Winkler",
"suffix": ""
}
],
"year": 2020,
"venue": "Master's thesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justine Winkler. 2020. Snorkeling for beginners: Ap- plying data programming to product moderation in e-commerce. Master's thesis, Radboud University, Nijmegen, Netherlands.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Fonduer: Knowledge base construction from richly formatted data",
"authors": [
{
"first": "Sen",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Hsiao",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Braden",
"middle": [],
"last": "Hancock",
"suffix": ""
},
{
"first": "Theodoros",
"middle": [],
"last": "Rekatsinas",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Levis",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "R\u00e9",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 International Conference on Management of Data",
"volume": "",
"issue": "",
"pages": "1301--1316",
"other_ids": {
"DOI": [
"10.1145/3183713.3183729"
]
},
"num": null,
"urls": [],
"raw_text": "Sen Wu, Luke Hsiao, Xiao Cheng, Braden Hancock, Theodoros Rekatsinas, Philip Levis, and Christopher R\u00e9. 2018. Fonduer: Knowledge base construction from richly formatted data. In Proceedings of the 2018 International Conference on Management of Data, pages 1301-1316.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Use of machine learning to detect wildlife product promotion and sales on Twitter",
"authors": [
{
"first": "Qing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Mingxiang",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Tim",
"middle": [
"K"
],
"last": "Mackey",
"suffix": ""
}
],
"year": 2019,
"venue": "Frontiers Big Data",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3389/fdata.2019.00028"
]
},
"num": null,
"urls": [],
"raw_text": "Qing Xu, Jiawei Li, Mingxiang Cai, and Tim K. Mackey. 2019. Use of machine learning to de- tect wildlife product promotion and sales on Twitter. Frontiers Big Data, 2:28.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Top row: Our data programming pipeline. Bottom row (red box): Inspiration sets used for fast control.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "This figure shows the sizes of training data set fractions that received a certain number of labels per sample. Results are shown for the all versions (initial or adjusted using an inspiration set: Set 1, Set 2 or Set 3) of each monitor.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"text": "Number of data points in our data sets. For sets with ground truth, the number of points with the positive label, i.e., \"inappropriate\", is in parentheses.",
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table/>",
"html": null,
"text": "Data quality results: Label model performance (F 2 measure) on the test set.",
"type_str": "table",
"num": null
},
"TABREF5": {
"content": "<table/>",
"html": null,
"text": "This table contains the characteristics of the individual LFs for magnetic balls after they have been adjusted with the inspiration Set 1.",
"type_str": "table",
"num": null
},
"TABREF7": {
"content": "<table><tr><td>category</td><td colspan=\"2\">training validation</td></tr><tr><td/><td>set</td><td>set</td></tr><tr><td>fur</td><td>7633</td><td>400 (55)</td></tr><tr><td colspan=\"2\">illegal wildlife related 7426</td><td>318 (10)</td></tr><tr><td>magnetic balls</td><td>2316</td><td>324 (7)</td></tr><tr><td colspan=\"2\">weapon-grade knives 1266</td><td>210 (18)</td></tr><tr><td colspan=\"2\">smoking-drug-related 1071</td><td>173 (12)</td></tr><tr><td>single-use plastic</td><td>7364</td><td>445 (118)</td></tr></table>",
"html": null,
"text": "This table contains the characteristics of the individual LFs for single-use plastic after they have been adjusted with the inspiration Set 1.",
"type_str": "table",
"num": null
},
"TABREF8": {
"content": "<table/>",
"html": null,
"text": "",
"type_str": "table",
"num": null
}
}
}
}