{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:41:25.965900Z"
},
"title": "Paladin: an annotation tool based on active and proactive learning",
"authors": [
{
"first": "Minh-Quoc",
"middle": [],
"last": "Nghiem",
"suffix": "",
"affiliation": {
"laboratory": "National Centre for Text Mining",
"institution": "The University of Manchester",
"location": {
"country": "United Kingdom"
}
},
"email": "[email protected]"
},
{
"first": "Paul",
"middle": [],
"last": "Baylis",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": "",
"affiliation": {
"laboratory": "National Centre for Text Mining",
"institution": "The University of Manchester",
"location": {
"country": "United Kingdom"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present Paladin, an open-source web-based annotation tool for creating high-quality multi-label document-level datasets. By integrating active learning and proactive learning into the annotation task, Paladin makes the task less time-consuming and reduces the human effort required. Although Paladin is designed for multi-label settings, the system is flexible and can be adapted to other tasks in single-label settings.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present Paladin, an open-source web-based annotation tool for creating high-quality multi-label document-level datasets. By integrating active learning and proactive learning into the annotation task, Paladin makes the task less time-consuming and reduces the human effort required. Although Paladin is designed for multi-label settings, the system is flexible and can be adapted to other tasks in single-label settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Labelled data is essential in many NLP tasks based on Machine Learning. Manually annotating such data is time-consuming and requires a lot of human effort. Active learning has been used to ease this process by choosing the data points for annotation instead of annotating all instances of the unlabeled data (Settles, 2009) . Some recent research has also utilized proactive learning, in which the system is allowed to assign specific unlabeled instances to specific annotators (Li et al., 2019) . The annotators, in these scenarios, only have to annotate a small set of representative and informative data for which they can provide reliable labels. It helps reduce the labelling effort and at the same time makes the best use of available annotators.",
"cite_spans": [
{
"start": 308,
"end": 323,
"text": "(Settles, 2009)",
"ref_id": "BIBREF14"
},
{
"start": 478,
"end": 495,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To date, there are many tools available for active learning, such as the TexNLP (Baldridge and Palmer, 2009) , the Active-Learning-Scala (Santos and Carvalho, 2014) , the JCLAL (Reyes et al., 2016) , the LibAct (Yang et al., 2017) libraries, the Vowpal Wabbit 1 . These tools, however, focus only on the active learning algorithms and provide no user interface thus making it difficult to use for the end-users. On the other hand, several tools have been made with user-friendly interfaces such as BRAT (Stenetorp et al., 2012) , WebAnno (Yimam et al., 2013) , PubAnnotation (Kim and Wang, 2012) , doccano 2 . Some of the tools offer active/proactive learning such as APLenty (Nghiem and Ananiadou, 2018) , DUALIST (Settles and Zhu, 2012) , AlpacaTag (Lin et al., 2019) , Discrete Active Learning Coref (Li et al., 2020a) . Currently, these tools support sequence labelling/coreference resolution tasks but not document classification tasks. To the best of our knowledge, there is no such tool for document classification that supports active/proactive learning. Prodigy 3 supports active learning for both sequence labelling and document classification tasks, but it is a commercial product.",
"cite_spans": [
{
"start": 80,
"end": 108,
"text": "(Baldridge and Palmer, 2009)",
"ref_id": "BIBREF0"
},
{
"start": 137,
"end": 164,
"text": "(Santos and Carvalho, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 177,
"end": 197,
"text": "(Reyes et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 211,
"end": 230,
"text": "(Yang et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 503,
"end": 527,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 538,
"end": 558,
"text": "(Yimam et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 575,
"end": 595,
"text": "(Kim and Wang, 2012)",
"ref_id": "BIBREF5"
},
{
"start": 676,
"end": 704,
"text": "(Nghiem and Ananiadou, 2018)",
"ref_id": "BIBREF11"
},
{
"start": 715,
"end": 738,
"text": "(Settles and Zhu, 2012)",
"ref_id": "BIBREF15"
},
{
"start": 741,
"end": 769,
"text": "AlpacaTag (Lin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 803,
"end": 821,
"text": "(Li et al., 2020a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To compensate for the lack of an available document-level annotation tool, we develop Paladin (Proactive learning annotator for document instances), an open-source web-based system for creating labelled data using active/proactive learning 4 . The main innovation of Paladin is the combination of a user-friendly annotation tool with active/proactive learning. Specifically:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Active/proactive learning integration: Paladin makes annotation easy and time-efficient, and reduces human effort, by offering active and proactive learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. An easy-to-use interface for annotators: Paladin adapts the interface of doccano, making annotation intuitive and easy to use.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. Suitable for multi-label document annotation tasks: Paladin is best used for multi-label document annotation tasks, although it can be used for other single-label classification problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows. Section 2 presents details of Paladin. Section 3 presents typical use cases of Paladin. Section 4 describes case studies of using Paladin for a multi-label document annotation task. Section 5 concludes the paper and points to avenues for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Paladin is a web-based tool implemented in Python using the Django web framework and Vue.js. The main user interface consists of a project management page and an annotation page. The subsections below describe Paladin in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Descriptions",
"sec_num": "2"
},
{
"text": "In Paladin, there are two main types of user role: the project manager role and the annotator role. A project manager can create/customise annotation projects and add annotators to the projects. The annotators can annotate text assigned to them. The interface allows the project manager to: (1) create a project; (2) define the tagset; (3) upload the seeding and unlabelled data to the webserver; (4) assign annotators to a project; and (5) choose the active/proactive learning strategy. The project manager can additionally set how the batch is allocated, the sampling and proficiency thresholds, the steps before retraining and samples per session as illustrated in Figure 1 . When creating a new annotation project, the project manager needs to upload two datasets (in Tab Separated Values format) to the server. The first dataset is the seeding dataset, which will be used by the system to train the classifier and estimate the annotators' proficiency. The second dataset is the unlabelled dataset, on which the system chooses the text to assign to the annotators. If there is no seeding data, the system will select random text from the unlabelled dataset for annotation in the first batch. Figure 2 shows the documents after they have been successfully uploaded to the system.",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 676,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1196,
"end": 1204,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Project management",
"sec_num": "2.1"
},
{
"text": "For annotation and visualization of annotated documents, we adapted the doccano annotation interface. The annotation interface displays a set of documents that are assigned to the annotator, one at a time as illustrated in Figure 3 . The annotator can navigate to the next or previous document during annotation using the \"Prev\" or \"Next\" buttons. When working in Paladin, the annotator uses the mouse or keyboard shortcuts to select label(s) for the current document. After finishing the assigned documents, the annotator can click on \"Finish Annotation\". The system will validate the annotated documents, retrain the classifier, and assign new documents to the annotator. Each annotator can only see the documents assigned to him/her in the current batch.",
"cite_spans": [],
"ref_spans": [
{
"start": 223,
"end": 231,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Annotation interface",
"sec_num": "2.2"
},
{
"text": "Depending on the project manager's settings, the system chooses different document instances to send to the annotators. The project manager can choose to prioritise the most informative instances for the classifier or to maintain the balance between the number of instances in each class. With the first option, the system prioritises the most informative documents, regardless of the class. Paladin currently employs the least-confidence uncertainty-based strategy (Culotta and McCallum, 2005) based on the classification outputs from a Transformer model (Devlin et al., 2019) . A linear model is added to the embedding output to predict the score for the labels. Previous research has established that active learning can increase the performance of Transformer-based text classifiers (Grie\u00dfhaber et al., 2020) . With the second option, the system uses the same classification outputs, but unlabelled instances are taken from each class in equal amounts. The second option is the default in Paladin; it aims to mitigate class imbalance, where different classes have unequal numbers of instances.",
"cite_spans": [
{
"start": 466,
"end": 494,
"text": "(Culotta and McCallum, 2005)",
"ref_id": "BIBREF1"
},
{
"start": 556,
"end": 577,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 787,
"end": 812,
"text": "(Grie\u00dfhaber et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Active learning",
"sec_num": "2.3"
},
{
"text": "Paladin uses a pool-based sampling scenario, where data samples are chosen for labelling from the unlabelled dataset. The project manager, however, can upload additional unlabelled data to an existing annotation project at any time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Active learning",
"sec_num": "2.3"
},
{
"text": "In many annotation tasks, we assume that the annotators are experts who always provide correct annotations. But in reality, different annotators have different levels of expertise in different domains. It has been demonstrated that proactive learning is helpful for task allocation in crowdsourcing settings where the level of expertise varies from annotator to annotator (Donmez and Carbonell, 2010; Li et al., 2017, 2019, 2020b) . Proactive learning is useful for modelling annotator reliability, which can be used to assign unlabelled instances to the best possible annotators.",
"cite_spans": [
{
"start": 372,
"end": 400,
"text": "(Donmez and Carbonell, 2010;",
"ref_id": "BIBREF3"
},
{
"start": 401,
"end": 416,
"text": "Li et al., 2017",
"ref_id": "BIBREF8"
},
{
"start": 416,
"end": 422,
"text": ", 2019",
"ref_id": "BIBREF7"
},
{
"start": 422,
"end": 429,
"text": ", 2020b",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proactive learning",
"sec_num": "2.4"
},
{
"text": "Before any annotation, Paladin estimates the proficiency of the annotators for each class by assigning the documents in the seed dataset to all annotators. When the annotators finish labelling these seed documents, the system calculates the likelihood that a particular annotator provides a correct annotation for a particular label. Then, when assigning new documents to the annotators, Paladin assigns the documents to the best possible annotators by combining the predicted label(s) with the likelihood that each annotator provides a correct annotation for those labels. The system updates the estimation after every annotation batch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proactive learning",
"sec_num": "2.4"
},
{
"text": "The typical use cases of Paladin are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use cases",
"sec_num": "3"
},
{
"text": "1. A user wishes to add more data to an existing dataset to improve model performance: the user can use the existing labelled dataset as the seed to train the initial model; the labels will be automatically extracted from the labelled dataset. The model will select instances from the unlabelled dataset and then distribute them to the annotators for annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use cases",
"sec_num": "3"
},
{
"text": "A user wishes to create a labelled dataset from scratch: the user needs to provide the tag set and the unlabelled data. The first iteration will select unlabelled instances for annotation randomly. After the first iteration, the process is the same as the previous use case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "3. A user wishes to add more data to an existing unbalanced dataset: the user can choose the \"maintain class balance\" option in Settings. With this option, the model will try to select more data from the potential minority classes for annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "We used the Toxic Comment Classification Challenge dataset 5 for this experiment. The dataset contains Wikipedia comments which have been manually labelled for toxic behaviour. There are six classes: toxic, severe toxic, obscene, threat, insult, and identity hate. In the experiment, we used 60 comments as the initial training data (seed), 600 comments as test data, and 18,000 comments as unlabelled data. The instances forming the seed and test data are randomly taken from the original data, but we make sure that each class has at least 10 instances in the seed data and at least 100 instances in the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simulated Annotators",
"sec_num": "4.1"
},
{
"text": "We compare three settings in this case study. The first one is Random Sampling: the system randomly chooses the next documents for annotation. The second one is Active Learning: the system uses the output of the trained model to assign new documents to an expert (annotator who always provides correct labels). The third one is Proactive Learning: same as Active Learning, but we have two annotators, one expert, and one fallible annotator (annotator who makes mistakes with a probability of 0.1). Figure 4 shows the F1 scores on the test set. In all cases, the active/proactive learning settings outperformed the Random Sampling setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 498,
"end": 506,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Simulated Annotators",
"sec_num": "4.1"
},
{
"text": "For this experiment, we worked with a consumer law firm analysing 6,880 emails. Each email can have one or more labels from a predefined list which consists of 15 labels. Some examples are \"update query\", \"payment query\", and \"fee query\". Given an email, the annotator had to annotate all labels that are applicable to that email. A total of 2,000 emails were already annotated. This is an unbalanced dataset: nearly two-thirds of the emails belong to the 5 most common labels, while less than 7 percent of the emails come from the 5 least common labels. In the experiment, we used 1,000 emails as the initial training data, 1,000 emails as test data, and the rest (4,880) as unlabelled data. The purpose of the experiment was to investigate the performance of Paladin with an unbalanced seed dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Real-World Annotators",
"sec_num": "4.2"
},
{
"text": "Using Paladin, we created an annotation project with four annotators and in each annotation session, an annotator must annotate 20 emails. All annotators are members of the law firm with a legal background. We used \"maintain class balance\" and \"best annotators first\" as the active and proactive learning strategies, respectively. We stopped when a total of 1,000 emails were annotated. Figure 5 shows the F1 scores and the stacking percentages of label instance count. The results showed that the F1 score and the percentage of minority classes gradually increased after each annotation batch. We used an Intel Core i9 9820X Linux server with 64GB RAM and a Titan RTX GPU. When allocating a new annotation batch (retraining the model, predicting the unlabelled instances, selecting new instances for annotation), Paladin runs consistently at a rate of around 0.01 to 0.02 seconds per document, and it takes less than two minutes to get results. The average level of satisfaction of the annotators with the annotation tool (rated from 1 to 5 on three aspects: responsiveness, ease of annotation, ease of navigation) was 4.5/5.",
"cite_spans": [],
"ref_spans": [
{
"start": 387,
"end": 395,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Real-World Annotators",
"sec_num": "4.2"
},
{
"text": "We introduced Paladin, an open-source web-based environment for constructing multi-label document-level datasets using active and proactive learning. Paladin can support the quick development of high-quality labelled data needed to train and evaluate NLP tools for different applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Considerably more work will need to be done to further enhance Paladin to work with other active/proactive learning algorithms. In addition, a natural progression of this work is to evaluate Paladin in a large-scale annotation project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "1 http://hunch.net/~vw/ 2 https://github.com/doccano 3 https://prodi.gy/ 4 The source code is publicly available at https://github.com/bluenqm/Paladin",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "5 https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research has been carried out with funding from KTP11612. We would like to thank the anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How well does active learning actually work?: Time-based evaluation of cost-reduction strategies for language documentation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "296--305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Baldridge and Alexis Palmer. 2009. How well does active learning actually work?: Time-based evaluation of cost-reduction strategies for language documentation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 296-305. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Reducing labeling effort for structured prediction tasks",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2005,
"venue": "AAAI",
"volume": "5",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In AAAI, volume 5, pages 746-751.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "From active to proactive learning methods",
"authors": [
{
"first": "Pinar",
"middle": [],
"last": "Donmez",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Jaime",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Machine Learning I",
"volume": "",
"issue": "",
"pages": "97--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pinar Donmez and Jaime G Carbonell. 2010. From active to proactive learning methods. In Advances in Machine Learning I, pages 97-120. Springer.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Fine-tuning BERT for low-resource natural language understanding via active learning",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Grie\u00dfhaber",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Maucher",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1158--1171",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.100"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Grie\u00dfhaber, Johannes Maucher, and Ngoc Thang Vu. 2020. Fine-tuning BERT for low-resource natural language understanding via active learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1158-1171, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Pubannotation: a persistent and sharable corpus and annotation repository",
"authors": [
{
"first": "Jin-Dong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Workshop on Biomedical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "202--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin-Dong Kim and Yue Wang. 2012. PubAnnotation: a persistent and sharable corpus and annotation repository. In Proceedings of the 2012 Workshop on Biomedical Natural Language Processing, pages 202-205. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Active learning for coreference resolution using discrete annotation",
"authors": [
{
"first": "Belinda",
"middle": [
"Z"
],
"last": "Li",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Stanovsky",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8320--8331",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.738"
]
},
"num": null,
"urls": [],
"raw_text": "Belinda Z. Li, Gabriel Stanovsky, and Luke Zettlemoyer. 2020a. Active learning for coreference resolution using discrete annotation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8320-8331, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Modelling instance-level annotator reliability for natural language labelling tasks",
"authors": [
{
"first": "Maolin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Arvid",
"middle": [],
"last": "Fahlstr\u00f6m Myrman",
"suffix": ""
},
{
"first": "Tingting",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2873--2883",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1295"
]
},
"num": null,
"urls": [],
"raw_text": "Maolin Li, Arvid Fahlstr\u00f6m Myrman, Tingting Mu, and Sophia Ananiadou. 2019. Modelling instance-level annotator reliability for natural language labelling tasks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2873-2883, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Proactive learning for named entity recognition",
"authors": [
{
"first": "Maolin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Nhung",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "117--125",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2314"
]
},
"num": null,
"urls": [],
"raw_text": "Maolin Li, Nhung Nguyen, and Sophia Ananiadou. 2017. Proactive learning for named entity recognition. In BioNLP 2017, pages 117-125, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A neural model for aggregating coreference annotation in crowdsourcing",
"authors": [
{
"first": "Maolin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5760--5773",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.507"
]
},
"num": null,
"urls": [],
"raw_text": "Maolin Li, Hiroya Takamura, and Sophia Ananiadou. 2020b. A neural model for aggregating coreference annotation in crowdsourcing. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5760-5773, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "AlpacaTag: An active learning-based crowd annotation framework for sequence tagging",
"authors": [
{
"first": "Dong-Ho",
"middle": [],
"last": "Bill Yuchen Lin",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"F"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Ouyu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "58--63",
"other_ids": {
"DOI": [
"10.18653/v1/P19-3010"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Dong-Ho Lee, Frank F. Xu, Ouyu Lan, and Xiang Ren. 2019. AlpacaTag: An active learning-based crowd annotation framework for sequence tagging. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 58-63, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "APLenty: annotation tool for creating high-quality datasets using active and proactive learning",
"authors": [
{
"first": "Minh-Quoc",
"middle": [],
"last": "Nghiem",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "108--113",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2019"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Quoc Nghiem and Sophia Ananiadou. 2018. APLenty: annotation tool for creating high-quality datasets using active and proactive learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 108-113, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "JCLAL: a Java framework for active learning",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "Reyes",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
},
{
"first": "Mar\u00eda",
"middle": [],
"last": "Del Carmen Rodr\u00edguez-Hern\u00e1ndez",
"suffix": ""
},
{
"first": "Sebasti\u00e1n",
"middle": [],
"last": "Habib M Fardoun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ventura",
"suffix": ""
}
],
"year": 2016,
"venue": "The Journal of Machine Learning Research",
"volume": "17",
"issue": "1",
"pages": "3271--3275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar Reyes, Eduardo P\u00e9rez, Mar\u00eda Del Carmen Rodr\u00edguez-Hern\u00e1ndez, Habib M Fardoun, and Sebasti\u00e1n Ventura. 2016. JCLAL: a Java framework for active learning. The Journal of Machine Learning Research, 17(1):3271-3275.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Comparison of active learning strategies and proposal of a multiclass hypothesis space search",
"authors": [
{
"first": "Davi",
"middle": ["P"],
"last": "Santos",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": ["CPLF"],
"last": "Carvalho",
"suffix": ""
}
],
"year": 2014,
"venue": "Hybrid Artificial Intelligence Systems",
"volume": "",
"issue": "",
"pages": "618--629",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davi P Santos and Andr\u00e9 CPLF Carvalho. 2014. Comparison of active learning strategies and proposal of a multiclass hypothesis space search. In Hybrid Artificial Intelligence Systems, pages 618-629. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Active learning literature survey",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles. 2009. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Behavioral factors in interactive training of text classifiers",
"authors": [
{
"first": "Burr",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "563--567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Burr Settles and Xiaojin Zhu. 2012. Behavioral factors in interactive training of text classifiers. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 563-567. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BRAT: a web-based tool for NLP-assisted text annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsujii. 2012. BRAT: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102-107. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "libact: Pool-based active learning in Python",
"authors": [
{
"first": "Yao-Yuan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shao-Chuan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Yu-An",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Tung-En",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Si-An",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hsuan-Tien",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao-Yuan Yang, Shao-Chuan Lee, Yu-An Chung, Tung-En Wu, Si-An Chen, and Hsuan-Tien Lin. 2017. libact: Pool-based active learning in Python. Technical report, National Taiwan University. Available as arXiv preprint https://arxiv.org/abs/1710.00379.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "WebAnno: A flexible, web-based and visually supported system for distributed annotations",
"authors": [
{
"first": "Seid Muhie",
"middle": [],
"last": "Yimam",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Eckart De Castilho",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seid Muhie Yimam, Iryna Gurevych, Richard Eckart de Castilho, and Chris Biemann. 2013. WebAnno: A flexible, web-based and visually supported system for distributed annotations. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 1-6.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Figure 1: Project Settings",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Dataset/Seed Dataset",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Annotation interface. The displayed sentence was taken from the Sentiment140 dataset. All labels are shown in the blue rectangle box with the shortcut keys next to them. Annotated labels are shown above the sentence.",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Figure 4: Learning curve",
"num": null,
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"text": "F1 scores and percentages of label instance count. We grouped 5 labels together for readability.",
"num": null,
"uris": null
}
}
}
}