{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:11.352057Z"
},
"title": "We Need to Consider Disagreement in Evaluation",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Turin",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Michael",
"middle": [],
"last": "Fell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Turin",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Tommaso",
"middle": [],
"last": "Fornaciari",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University n Queen Mary University of London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University n Queen Mary University of London",
"location": {}
},
"email": ""
},
{
"first": "Silviu",
"middle": [],
"last": "Paun",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IT University of Copenhagen",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Uma",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Evaluation is of paramount importance in datadriven research fields such as Natural Language Processing (NLP) and Computer Vision (CV). But current evaluation practice in NLP, except for end-to-end tasks such as machine translation, spoken dialogue systems, or NLG, largely hinges on the existence of a single \"ground truth\" against which we can meaningfully compare the prediction of a model. However, this assumption is flawed for two reasons. 1) In many cases, more than one answer is correct. 2) Even where there is a single answer, disagreement among annotators is ubiquitous, making it difficult to decide on a gold standard. We discuss three sources of disagreement: from the annotator, the data, and the context, and show how this affects even seemingly objective tasks. Current methods of adjudication, agreement, and evaluation ought to be reconsidered at the light of this evidence. Some researchers now propose to address this issue by minimizing disagreement, creating cleaner datasets. We argue that such a simplification is likely to result in oversimplified models just as much as it would do for end-to-end tasks such as machine translation. Instead, we suggest that we need to improve today's evaluation practice to better capture such disagreement. Datasets with multiple annotations are becoming more common, as are methods to integrate disagreement into modeling. The logical next step is to extend this to evaluation.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Evaluation is of paramount importance in datadriven research fields such as Natural Language Processing (NLP) and Computer Vision (CV). But current evaluation practice in NLP, except for end-to-end tasks such as machine translation, spoken dialogue systems, or NLG, largely hinges on the existence of a single \"ground truth\" against which we can meaningfully compare the prediction of a model. However, this assumption is flawed for two reasons. 1) In many cases, more than one answer is correct. 2) Even where there is a single answer, disagreement among annotators is ubiquitous, making it difficult to decide on a gold standard. We discuss three sources of disagreement: from the annotator, the data, and the context, and show how this affects even seemingly objective tasks. Current methods of adjudication, agreement, and evaluation ought to be reconsidered at the light of this evidence. Some researchers now propose to address this issue by minimizing disagreement, creating cleaner datasets. We argue that such a simplification is likely to result in oversimplified models just as much as it would do for end-to-end tasks such as machine translation. Instead, we suggest that we need to improve today's evaluation practice to better capture such disagreement. Datasets with multiple annotations are becoming more common, as are methods to integrate disagreement into modeling. The logical next step is to extend this to evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Evaluation is of paramount importance to Natural Language Processing (NLP) and Computer Vision (CV). Automatic evaluation is the primary mechanism to drive and measure progress due to its simplicity and efficiency (Resnik and Lin, 2010; Church and Hestness, 2019) . However, Figure 1 : What is the ground truth? Examples from VQA v2 (Goyal et al., 2017) and (Gimpel et al., 2011 ). today's evaluation practice for virtually all NLP tasks concerned with a fundamental aspect of language interpretation-POS tagging, word sense disambiguation, named entity recognition, coreference, relation extraction, natural language inference, or sentiment analysis-is seriously flawed: the candidate hypotheses of a system (i.e., its predictions) are compared against an evaluation set that is assumed to encode a \"ground truth\" for the modeling task. Yet this evaluation model is outdated and needs reconsideration. The notion of a single correct answer ignores the subjectivity and complexity of many tasks, and focuses on \"easy\", low-risk evaluation, holding back progress in the field. We discuss three sources of disagreement: from the annotator, the data, and the context.",
"cite_spans": [
{
"start": 214,
"end": 236,
"text": "(Resnik and Lin, 2010;",
"ref_id": "BIBREF35"
},
{
"start": 237,
"end": 263,
"text": "Church and Hestness, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 333,
"end": 353,
"text": "(Goyal et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 358,
"end": 378,
"text": "(Gimpel et al., 2011",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The underlying assumption of the current approach is that the evaluation set represents the best possible approximation of the truth about a given phenomenon, or at least a reasonable one. This ground truth is usually obtained by developing an annotation scheme for the task aiming to achieve the highest possible agreement between human annotators (Artstein and Poesio, 2008) . Disagreements between annotators are either reconciled by hand or aggregated (particularly in the case of crowdsourced annotations) to extract the most likely or agreed-upon choices (Hovy et al., 2013; Passonneau and Carpenter, 2013; Paun et al., 2018) . This aggregated data is referred to as \"gold standard\" (see Ide and Pustejovsky (2017) for an in-depth analysis of annotation methodology).",
"cite_spans": [
{
"start": 349,
"end": 376,
"text": "(Artstein and Poesio, 2008)",
"ref_id": "BIBREF4"
},
{
"start": 561,
"end": 580,
"text": "(Hovy et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 581,
"end": 612,
"text": "Passonneau and Carpenter, 2013;",
"ref_id": "BIBREF25"
},
{
"start": 613,
"end": 631,
"text": "Paun et al., 2018)",
"ref_id": "BIBREF26"
},
{
"start": 694,
"end": 720,
"text": "Ide and Pustejovsky (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, there is plenty of evidence that gold labels are an idealization, and that unreconcilable disagreement is abundant. Figure 1 shows two examples from CV and NLP. This is particularly true for tasks involving highly subjective judgments, such as hate speech detection (Akhtar et al., 2019 (Akhtar et al., , 2020 or sentiment analysis (Kenyon-Dean et al., 2018) . However, it is not a trivial issue even in more linguistic tasks, such as part-of-speech tagging (Plank et al., 2014) , word sense disambiguation (Passonneau et al., 2012; Jurgens, 2013) , or coreference resolution (Poesio and Artstein, 2005; Recasens et al., 2011) . Systematic disagreement also exists in image classification tasks, where labels may overlap (Rodrigues and Pereira, 2018; Peterson et al., 2019) . Disagreement and task difficulty and subjectivity also challenge traditional agreement measures (Artstein and Poesio, 2008) . High agreement is typically used as a proxy for data quality. However, it obscures possible sources of disagreement (Poesio and Artstein, 2005) . We summarize some of the evidence on disagreement in Section 2.",
"cite_spans": [
{
"start": 275,
"end": 295,
"text": "(Akhtar et al., 2019",
"ref_id": "BIBREF0"
},
{
"start": 296,
"end": 318,
"text": "(Akhtar et al., , 2020",
"ref_id": "BIBREF1"
},
{
"start": 341,
"end": 367,
"text": "(Kenyon-Dean et al., 2018)",
"ref_id": "BIBREF20"
},
{
"start": 467,
"end": 487,
"text": "(Plank et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 516,
"end": 541,
"text": "(Passonneau et al., 2012;",
"ref_id": "BIBREF24"
},
{
"start": 542,
"end": 556,
"text": "Jurgens, 2013)",
"ref_id": "BIBREF19"
},
{
"start": 585,
"end": 612,
"text": "(Poesio and Artstein, 2005;",
"ref_id": "BIBREF32"
},
{
"start": 613,
"end": 635,
"text": "Recasens et al., 2011)",
"ref_id": "BIBREF34"
},
{
"start": 730,
"end": 759,
"text": "(Rodrigues and Pereira, 2018;",
"ref_id": "BIBREF36"
},
{
"start": 760,
"end": 782,
"text": "Peterson et al., 2019)",
"ref_id": "BIBREF28"
},
{
"start": 881,
"end": 908,
"text": "(Artstein and Poesio, 2008)",
"ref_id": "BIBREF4"
},
{
"start": 1027,
"end": 1054,
"text": "(Poesio and Artstein, 2005)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 125,
"end": 133,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The need for metrics not based on the assumption that a gold standard exists has long been accepted for end-to-end tasks, particularly those involving an aspect of natural language generation, such as conversational agents, machine translation, surface realisation, image captioning, or summarization. Metrics such as BLEU for machine translation/generation, ROUGE for summarization, or NDCG for ranking Web searches all support more than one gold standard reference. Shared tasks in this areas (particularly on paraphrasing), have also considered the role of disagreement in their evaluation metrics (Butnariu et al., 2009; Hendrickx et al., 2013) . Variability in the annotation is a feature of many such tasks (see, e.g., van der Lee et al. (2019) for agreement issues in generated text evaluation) even though many corpora still may come with single references due to data collection costs. High agreement is disfavored, and even bears risks of non-natural, highly homogenized system outputs for generation tasks (Amidei et al., 2018) . The main argument of this position paper is that we should recognize that the same issues, if perhaps in less extreme version, apply to the analysis tasks we discuss here.",
"cite_spans": [
{
"start": 601,
"end": 624,
"text": "(Butnariu et al., 2009;",
"ref_id": "BIBREF9"
},
{
"start": 625,
"end": 648,
"text": "Hendrickx et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 1017,
"end": 1038,
"text": "(Amidei et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent years, proposals have been put forward to consider the disagreement as informative content that can be leveraged to improve task performance (Plank et al., 2014; Aroyo and Welty, 2015; Jamison and Gurevych, 2015) . Uma et al. (2020) and Basile (2020) investigated the impact of disagreement-informed data on the quality of NLP evaluation, and found it to be beneficial and providing complementary information, as further discussed in Section 3. This led them to organize a first shared task on learning from disagreement and providing non-aggregated benchmarks for evaluation .",
"cite_spans": [
{
"start": 151,
"end": 171,
"text": "(Plank et al., 2014;",
"ref_id": "BIBREF30"
},
{
"start": 172,
"end": 194,
"text": "Aroyo and Welty, 2015;",
"ref_id": "BIBREF3"
},
{
"start": 195,
"end": 222,
"text": "Jamison and Gurevych, 2015)",
"ref_id": "BIBREF18"
},
{
"start": 225,
"end": 242,
"text": "Uma et al. (2020)",
"ref_id": "BIBREF41"
},
{
"start": 247,
"end": 260,
"text": "Basile (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast with this trend, Bowman and Dahl (2021) recently proposed to study biases and artifacts in data to eliminate them. Beigman Klebanov and Beigman (2009) adopt a slightly softer stance, proposing to only evaluating on \"easy\" (as in, highly agreed upon) instances. Based on the evidence about the prevalence of disagreement in NLP judgments, we argue against this approach. First, it leads to information loss in the attempt to reducing noise in the data. Second, it is unnecessary: while evaluation methods that include disagreement are not yet established, several methodologies already do exist. Removing the disagreement might lead to better evaluation scores, but it fundamentally hides the true nature of the task we are trying to solve.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we outline three possible sources of disagreement. Afterward, we describe how disagreement has been studied in objective and arguably more subjective tasks in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Disagreement in NLP",
"sec_num": "2"
},
{
"text": "Annotation implies an interaction between the human judge, the instance which has to be evaluated, and the moment/context in which the process takes place. For each instance, the annotation outcome depends on these three elements, assuming the task is properly defined, designed, and carried out, e.g., in terms of quality control. We summarize these potential sources of disagreement as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Disagreement",
"sec_num": "2.1"
},
{
"text": "Individual Differences. World perception is a personal and intrinsically private experience. To some extent, this experience can be traced back to a common ground, but margins of subjectivity remain. These margins are relatively limited when they concern matters of fact, but they snowball when opinions, values, and sentiments come into play. In NLP, many annotation tasks rely on personal opinions and judgment, despite uniform instructions for annotators. For example, in hate speech detection or sentiment analysis, different annotators might have very different perspectives regarding what is hateful or negative, respectively. Individual differences remarkably influence the annotation outcome and, therefore, the disagreement levels. Such individual differences can be partially explained by cultural and socio-demographic norms and variables, such as age, gender, instruction level, or cultural background. However, none of them is sufficient to capture the uniqueness of each subject and their evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Disagreement",
"sec_num": "2.1"
},
{
"text": "Stimulus Characteristics. Instance characteristics have paramount importance for the annotation as well. Language meaning is often equivocal and carries ambiguities of several kinds: lexical, syntactical, semantic, and others. Humour, for example, often relies on lexical or syntactic ambiguity (Raskin, 1985; Poesio, 2020) . Other genres using deliberate ambiguity as a rhetorical device include poetry (Su, 1994) or political discourse (Winkler, 2015) .",
"cite_spans": [
{
"start": 295,
"end": 309,
"text": "(Raskin, 1985;",
"ref_id": "BIBREF33"
},
{
"start": 310,
"end": 323,
"text": "Poesio, 2020)",
"ref_id": "BIBREF41"
},
{
"start": 404,
"end": 414,
"text": "(Su, 1994)",
"ref_id": "BIBREF39"
},
{
"start": 438,
"end": 453,
"text": "(Winkler, 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Disagreement",
"sec_num": "2.1"
},
{
"text": "For some instances, more than one label is correct, and the relative annotation task would be better framed as multi-label multi-class, rather than as multi-class tout-court. This is a common scenario in image and text tagging, where several object/features/topics can be present: this layer of complexity is a further potential source of disagreement between coders.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Disagreement",
"sec_num": "2.1"
},
{
"text": "Context. Last but not least, the context matters. The same coder could give different answers at different times to the same questions. The answers change as the subjects' state of mind does, and even factors such as attention slips play a non-negligible role (Beigman Klebanov et al., 2008) . This lack of consistency in human behavior is well known and explored in longitudinal studies, not only in psychology but also in linguistics (Lin and Chen, 2020) .",
"cite_spans": [
{
"start": 260,
"end": 291,
"text": "(Beigman Klebanov et al., 2008)",
"ref_id": "BIBREF7"
},
{
"start": 436,
"end": 456,
"text": "(Lin and Chen, 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Disagreement",
"sec_num": "2.1"
},
{
"text": "These three aspects suggest that squeezing the human experience and resulting annotation into a set of crisp variables is a gross oversimplification in most cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Disagreement",
"sec_num": "2.1"
},
{
"text": "The NLP community has long been aware that it makes no sense to evaluate natural language generation applications against a hypothetical 'gold' output. These areas have developed specialized training and evaluation methods (Papineni et al., 2002; Lin, 2004) . More surprisingly, disagreements in interpretation have been found to be frequent in annotation projects concerned with apparently more 'objective' aspects of language, such as coreference (Poesio and Artstein, 2005; Recasens et al., 2011) , part-of-speech tagging (Plank et al., 2014) , word sense disambiguation (Passonneau et al., 2012) and semantic role labelling (Dumitrache et al., 2019) , to name a few examples. Even if in these tasks individual instances can be found to be reasonably objective, these findings appear to reflect the existence of extensive and systematic disagreement on what can be concluded from a natural language statement (Pavlick and Kwiatkowski, 2019) .",
"cite_spans": [
{
"start": 223,
"end": 246,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF23"
},
{
"start": 247,
"end": 257,
"text": "Lin, 2004)",
"ref_id": "BIBREF21"
},
{
"start": 449,
"end": 476,
"text": "(Poesio and Artstein, 2005;",
"ref_id": "BIBREF32"
},
{
"start": 477,
"end": 499,
"text": "Recasens et al., 2011)",
"ref_id": "BIBREF34"
},
{
"start": 525,
"end": 545,
"text": "(Plank et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 574,
"end": 599,
"text": "(Passonneau et al., 2012)",
"ref_id": "BIBREF24"
},
{
"start": 628,
"end": 653,
"text": "(Dumitrache et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 912,
"end": 943,
"text": "(Pavlick and Kwiatkowski, 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Disagreement in 'Objective' Tasks",
"sec_num": "2.2"
},
{
"text": "Disagreement in annotation has been studied from a particular angle when occurring in highly subjective tasks such as offensive and abusive language detection or hate speech detection. Akhtar et al. (2019) introduced the polarization index, aiming at measuring a particular form of disagreement stemming from clusters of annotators whose opinions on the subjective phenomenon are polarized, e.g., because of different cultural backgrounds. Specifically, polarization measures the ratio between intragroup and inter-group agreement at the individual instance level, capturing the cases where different groups of annotators strongly agree on different labels. In this view, polarization is a somewhat complementary concept to disagreement, whereas a set of annotations could exhibit the latter but not the former, or both. Akhtar et al. (2020) employs this polarization measure to extract alternative gold standards from a dataset annotated with hate speech and train multiple models in order to encode different perspectives on this highly subjec-tive task. While it clearly appears that involving the victims of hate speech in the annotation process helps uncovering implicit manifestations of hatred, the study also shows that the plurality of perspectives is more informative than the mere sum of the annotations.",
"cite_spans": [
{
"start": 185,
"end": 205,
"text": "Akhtar et al. (2019)",
"ref_id": "BIBREF0"
},
{
"start": 821,
"end": 841,
"text": "Akhtar et al. (2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Disagreement on 'Subjective' Tasks",
"sec_num": "2.3"
},
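The polarization index above is described only informally in this text (a per-instance ratio of intra-group to inter-group agreement); the following Python sketch is one plausible, hypothetical operationalization based on simple pairwise agreement, not the exact formulation of Akhtar et al. (2019), and all function and variable names are illustrative.

from itertools import combinations

def pairwise_agreement(labels):
    # Fraction of label pairs that match; 1.0 if there are fewer than two labels.
    pairs = list(combinations(labels, 2))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

def polarization_index(groups):
    # Ratio of mean intra-group agreement to inter-group agreement for one
    # instance. `groups` maps a group id (e.g. an annotator background
    # variable) to the labels assigned by that group's annotators.
    intra = sum(pairwise_agreement(v) for v in groups.values()) / len(groups)
    inter = pairwise_agreement([label for v in groups.values() for label in v])
    return intra / inter if inter > 0 else float("inf")

# Two groups that agree internally but disagree with each other: high polarization.
print(polarization_index({"group_a": [1, 1, 1], "group_b": [0, 0, 0]}))  # 2.5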
{
"text": "While the research mentioned in the previous section questions the assumption that a single 'hard' label (a gold label) exists for every item in a dataset, the models proposed for learning from multiple interpretations are still largely evaluated under this assumption, using 'hard' measures like Accuracy or class-weighted F1 (Plank et al., 2014; Rodrigues and Pereira, 2018) .",
"cite_spans": [
{
"start": 327,
"end": 347,
"text": "(Plank et al., 2014;",
"ref_id": "BIBREF30"
},
{
"start": 348,
"end": 376,
"text": "Rodrigues and Pereira, 2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation in Light of Disagreement",
"sec_num": "3"
},
{
"text": "Abandoning the gold standard assumption requires the ability to evaluate a system's output also over instances on which annotators disagree. There is no consensus yet on this form of evaluation, but a few proposals have been used already.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation in Light of Disagreement",
"sec_num": "3"
},
{
"text": "In fact, a way of performing soft evaluation exists which is a natural extension of current practice in NLP. This is to evaluate ambiguity-aware models by treating the probability distribution of labels they produce as a soft label, and comparing that to a full distribution of labels, instead of a 'one-hot' approach. This can be done using, for example, cross-entropy, although other options also exist. This approach was adopted in, inter alia, (Peterson et al., 2019; Uma et al., 2020; . Peterson et al. (2019) tested this approach on image classification tasks, generating the soft label by transforming the item annotation distribution using standard normalization. Uma et al. (2020) employed this form of soft metric evaluation for NLP, also comparing different ways to obtain a soft label from the raw data. They use soft metrics to compare the classifiers' distribution to the human-derived label distributions, complementing traditional hard evaluation measures.",
"cite_spans": [
{
"start": 448,
"end": 471,
"text": "(Peterson et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 472,
"end": 489,
"text": "Uma et al., 2020;",
"ref_id": "BIBREF41"
},
{
"start": 492,
"end": 514,
"text": "Peterson et al. (2019)",
"ref_id": "BIBREF28"
},
{
"start": 672,
"end": 689,
"text": "Uma et al. (2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation in Light of Disagreement",
"sec_num": "3"
},
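The soft evaluation described above (normalize the per-item annotation counts into a distribution, then score the model's predicted distribution with cross-entropy) can be sketched in a few lines of Python. This is a minimal illustration under the assumptions stated in the text, not the actual code of Peterson et al. (2019) or Uma et al. (2020); the function names and toy numbers are invented.

import numpy as np

def soft_labels(counts):
    # Per-item annotation counts (items x labels) -> probability distributions
    # via standard normalization.
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def mean_cross_entropy(human_dist, model_dist, eps=1e-12):
    # Mean cross-entropy between the human label distributions and the model's
    # predicted distributions; lower is better.
    model_dist = np.clip(model_dist, eps, 1.0)
    return float(-(human_dist * np.log(model_dist)).sum(axis=1).mean())

# Toy example: 3 items, 3 labels, 5 annotators per item.
human = soft_labels([[5, 0, 0], [3, 2, 0], [1, 2, 2]])
model = np.array([[0.9, 0.05, 0.05], [0.5, 0.4, 0.1], [0.2, 0.4, 0.4]])
print(mean_cross_entropy(human, model))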
{
"text": "Basile (2020) suggested a more extreme evaluation framework, where a model is required to produce different outputs encoding the individual annotators' labels. The predictions are then individually evaluated against the single annotations, rather than against an aggregated gold standard. This proposal aims at fostering the design of 'inclusive' models with respect to diverse backgrounds in highly subjective tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation in Light of Disagreement",
"sec_num": "3"
},
{
"text": "While evaluating with disagreement is not yet widely adopted, methods for doing so exist. In the rest of this section, we discuss the two aforementioned approaches more in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation in Light of Disagreement",
"sec_num": "3"
},
{
"text": "The objective of SEMEVAL-2021 Task 12 on Learning with Disagreements (LeWiDi) was to provide a unified testing framework for learning from disagreements in NLP and CV using datasets containing information about disagreements for interpreting language and classifying images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The SEMEVAL 2021 Campaign",
"sec_num": "3.1"
},
{
"text": "Five well-known datasets for very different NLP and CV tasks were identified, all characterized by a multiplicity of labels for each instance, by having a size sufficient to train state-of-the-art models, and by evincing different characteristics in terms of the crowd annotators and data collection procedure. These include: a dataset of Twitter posts annotated with POS tags collected by Gimpel et al. (2011) , a datasets for humour identification by Simpson et al. (2019) , and two CV datasets on object identification namely the LabelMe (Russell et al., 2008) and CIFAR-10 datasets (Peterson et al., 2019) .",
"cite_spans": [
{
"start": 390,
"end": 410,
"text": "Gimpel et al. (2011)",
"ref_id": "BIBREF13"
},
{
"start": 453,
"end": 474,
"text": "Simpson et al. (2019)",
"ref_id": "BIBREF38"
},
{
"start": 541,
"end": 563,
"text": "(Russell et al., 2008)",
"ref_id": "BIBREF37"
},
{
"start": 586,
"end": 609,
"text": "(Peterson et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The SEMEVAL 2021 Campaign",
"sec_num": "3.1"
},
{
"text": "Both hard evaluation metrics (F1) and soft evaluation metrics (cross-entropy, as discussed in Section 3) were used for evaluation . The results showed that in nearly all cases, models that account for noise and disagreement have the best (lowest) cross-entropy scores. These results are consistent with the findings of Uma et al. (2020) and Peterson et al. (2019) .",
"cite_spans": [
{
"start": 319,
"end": 336,
"text": "Uma et al. (2020)",
"ref_id": "BIBREF41"
},
{
"start": 341,
"end": 363,
"text": "Peterson et al. (2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The SEMEVAL 2021 Campaign",
"sec_num": "3.1"
},
{
"text": "Basile (2020) explored the impact of disagreement caused by polarization on evaluation, focusing on NLP tasks with high levels of subjectivity. They argue that aggregated test sets lead to unfair evaluation concerning the multiple perspectives stemming from the annotator's background. Therefore, they argue for a paradigm shift in NLP evaluation, where benchmarks for highly subjective tasks should consider the diverging opinions of the annotators throughout the entire evaluation pipeline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Highly Subjective Tasks",
"sec_num": "3.2"
},
{
"text": "This proposal is tested with a simulation on synthetic data, where the annotation is conditioned on two input parameters: difficulty (as in general ambiguity of the annotation task) and subjectivity (an annotation bias linked to a predetermined background variable for the annotators). They propose a straightforward evaluation framework that accounts for multiple perspectives on highly subjective phenomena, where multiple models are trained on the annotations provided by individual annotators, and their accuracy is averaged as a final evaluation metric. The findings from the experiment show that subjectivity and ambiguity are discernible signals, as discussed in Section 2. Moreover, it is shown how a perspective-aware framework provides a more stable evaluation for classifiers of highly subjective tasks, very much in line with the results by Uma et al. (2020) .",
"cite_spans": [
{
"start": 853,
"end": 870,
"text": "Uma et al. (2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Highly Subjective Tasks",
"sec_num": "3.2"
},
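Basile (2020)'s perspective-aware framework, as summarized above, trains one model per annotator and averages the per-annotator accuracies. The sketch below is a minimal, hypothetical rendering of that evaluation loop (trained models are stubbed out as constant predictors); it is not the original implementation, and all names are illustrative.

def perspective_aware_accuracy(models, test_items, annotator_labels):
    # Each model is scored only against the labels of the annotator it was
    # trained on; the per-annotator accuracies are then averaged.
    scores = []
    for annotator, model in models.items():
        gold = annotator_labels[annotator]
        preds = [model(x) for x in test_items]
        scores.append(sum(p == g for p, g in zip(preds, gold)) / len(gold))
    return sum(scores) / len(scores)

# Toy usage: constant predictors stand in for per-annotator classifiers.
models = {"ann1": lambda x: 1, "ann2": lambda x: 0}
test_items = ["a", "b", "c"]
annotator_labels = {"ann1": [1, 1, 0], "ann2": [0, 0, 0]}
print(perspective_aware_accuracy(models, test_items, annotator_labels))  # ~0.83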
{
"text": "In this position paper, we argue against the current prevalent evaluation practice of comparing against a single truth. This method has allowed automated evaluation, sped up model selection and development, and resulted in good evaluation scores. However, those scores hide the truth about the state of our models: many tasks are complex and subjective. Assuming a single truth for the sake of evaluation amounts to a gross oversimplification of inherently complex matters. We further reject the notion that we should remove annotation noise from datasets. Instead, we propose to embrace the complex and subjective nature of task labels. We show how disagreement from the annotator, the data, and the context, affects even seemingly objective tasks. Research already shows that incorporating this disagreement leads to better training performance. We suggest that it can do the same for evaluation. The datasets already exist, all we need is to use them. It might not produce the same nice high scores we have gotten used to. But it will provide an honest assessment of how good our models are, and do justice to the complexity of the subject we are trying to model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "This research was supported in part by the DALI project, ERC Grant 695662, and the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRA-TOR) and by the Independent Research Fund Denmark (DFF) grants No. 9131-00019B and 9063-00077B. TF and DH are members of the MilaNLP group, and the Data and Marketing Insights Unit of the Bocconi Institute for Data Science and Analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A new measure of polarization in the annotation of hate speech",
"authors": [
{
"first": "Sohail",
"middle": [],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2019,
"venue": "AI*IA 2019 -Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "588--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sohail Akhtar, Valerio Basile, and Viviana Patti. 2019. A new measure of polarization in the annotation of hate speech. In AI*IA 2019 -Advances in Artificial Intelligence, pages 588-603, Cham. Springer Inter- national Publishing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling annotator perspective and polarized opinions to improve hate speech detection",
"authors": [
{
"first": "Sohail",
"middle": [],
"last": "Akhtar",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Human Computation and Crowdsourcing",
"volume": "8",
"issue": "",
"pages": "151--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sohail Akhtar, Valerio Basile, and Viviana Patti. 2020. Modeling annotator perspective and polarized opin- ions to improve hate speech detection. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8(1):151-154.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Rethinking the agreement in human evaluation tasks",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Amidei",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Piwek",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Willis",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3318--3329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2018. Rethinking the agreement in human evaluation tasks. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3318-3329, Santa Fe, New Mexico, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Truth is a lie: Crowd truth and the seven myths of human annotation",
"authors": [
{
"first": "Lora",
"middle": [],
"last": "Aroyo",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "36",
"issue": "",
"pages": "15--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lora Aroyo and Chris Welty. 2015. Truth is a lie: Crowd truth and the seven myths of human annota- tion. AI Magazine, 36(1):15-24.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Survey article: Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {
"DOI": [
"10.1162/coli.07-034-R2"
]
},
"num": null,
"urls": [],
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Survey ar- ticle: Inter-coder agreement for computational lin- guistics. Computational Linguistics, 34(4):555- 596.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "It's the end of the gold standard as we know it. on the impact of pre-aggregation on the evaluation of highly subjective tasks",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of the AIXIA Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valerio Basile. 2020. It's the end of the gold standard as we know it. on the impact of pre-aggregation on the evaluation of highly subjective tasks. In Proc. of the AIXIA Workshop. Universit\u00e1 di Torino.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Squibs: From annotator agreement to noise models",
"authors": [
{
"first": "Beata",
"middle": [],
"last": "Beigman Klebanov",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Beigman",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "495--503",
"other_ids": {
"DOI": [
"10.1162/coli.2009.35.4.35402"
]
},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov and Eyal Beigman. 2009. Squibs: From annotator agreement to noise models. Computational Linguistics, 35(4):495-503.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Analyzing disagreements",
"authors": [
{
"first": "Beata",
"middle": [],
"last": "Beigman Klebanov",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Beigman",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Diermeier",
"suffix": ""
}
],
"year": 2008,
"venue": "Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2008. Analyzing disagreements. In Col- ing 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics, pages 2- 7, Manchester, UK. Coling 2008 Organizing Com- mittee.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "What will it take to fix benchmarking in natural language understanding? arXiv preprint",
"authors": [
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "George",
"middle": [
"E"
],
"last": "Dahl",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.02145"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R Bowman and George E Dahl. 2021. What will it take to fix benchmarking in natural language understanding? arXiv preprint arXiv:2104.02145.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SemEval-2010 task 9: The interpretation of noun compounds using paraphrasing verbs and prepositions",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Butnariu",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009)",
"volume": "",
"issue": "",
"pages": "100--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristina Butnariu, Su Nam Kim, Preslav Nakov, Di- armuid \u00d3 S\u00e9aghdha, Stan Szpakowicz, and Tony Veale. 2009. SemEval-2010 task 9: The inter- pretation of noun compounds using paraphrasing verbs and prepositions. In Proceedings of the Work- shop on Semantic Evaluations: Recent Achieve- ments and Future Directions (SEW-2009), pages 100-105, Boulder, Colorado. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A survey of 25 years of evaluation",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Hestness",
"suffix": ""
}
],
"year": 2019,
"venue": "Natural Language Engineering",
"volume": "25",
"issue": "6",
"pages": "753--767",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Joel Hestness. 2019. A sur- vey of 25 years of evaluation. Natural Language Engineering, 25(6):753-767.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A crowdsourced frame disambiguation corpus with ambiguity",
"authors": [
{
"first": "Anca",
"middle": [],
"last": "Dumitrache",
"suffix": ""
},
{
"first": "Lora",
"middle": [],
"last": "Aroyo",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Welty",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2164--2170",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1224"
]
},
"num": null,
"urls": [],
"raw_text": "Anca Dumitrache, Lora Aroyo, and Chris Welty. 2019. A crowdsourced frame disambiguation corpus with ambiguity. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2164-2170, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Beyond black & white: Leveraging annotator disagreement via soft-label multi-task learning",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Fornaciari",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Uma",
"suffix": ""
},
{
"first": "Silviu",
"middle": [],
"last": "Paun",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommaso Fornaciari, Silviu Uma, Alexandra Paun, Barbara Plank, Dirk Hovy, and Massimo Poesio. 2021. Beyond black & white: Leveraging annota- tor disagreement via soft-label multi-task learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Part-of-speech tagging for Twitter: Annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 42-47, Portland, Ore- gon, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering",
"authors": [
{
"first": "Yash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Tejas",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Summers-Stay",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image under- standing in Visual Question Answering. In Confer- ence on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "SemEval-2013 task 4: Free paraphrases of noun compounds",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation (Se-mEval 2013)",
"volume": "2",
"issue": "",
"pages": "138--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Zornitsa Kozareva, Preslav Nakov, Di- armuid \u00d3 S\u00e9aghdha, Stan Szpakowicz, and Tony Veale. 2013. SemEval-2013 task 4: Free para- phrases of noun compounds. In Second Joint Con- ference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh In- ternational Workshop on Semantic Evaluation (Se- mEval 2013), pages 138-143, Atlanta, Georgia, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning whom to trust with MACE",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1120--1130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Handbook of Linguistic Annotation",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nancy Ide and James Pustejovsky, editors. 2017. The Handbook of Linguistic Annotation. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Noise or additional information? leveraging crowdsource annotation item agreement for natural language tasks",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Jamison",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "291--297",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1035"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Jamison and Iryna Gurevych. 2015. Noise or additional information? leveraging crowdsource an- notation item agreement for natural language tasks. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 291-297, Lisbon, Portugal. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Embracing ambiguity: A comparison of annotation methodologies for crowdsourcing word sense labels",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "556--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens. 2013. Embracing ambiguity: A com- parison of annotation methodologies for crowdsourc- ing word sense labels. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 556-562, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Sentiment analysis: It's complicated!",
"authors": [
{
"first": "Kian",
"middle": [],
"last": "Kenyon-Dean",
"suffix": ""
},
{
"first": "Eisha",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Fujimoto",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Georges-Filteau",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Glasz",
"suffix": ""
},
{
"first": "Barleen",
"middle": [],
"last": "Kaur",
"suffix": ""
},
{
"first": "Auguste",
"middle": [],
"last": "Lalande",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Bhanderi",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Belfer",
"suffix": ""
},
{
"first": "Nirmal",
"middle": [],
"last": "Kanagasabai",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Sarrazingendron",
"suffix": ""
},
{
"first": "Rohit",
"middle": [],
"last": "Verma",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Ruths",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1886--1895",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1171"
]
},
"num": null,
"urls": [],
"raw_text": "Kian Kenyon-Dean, Eisha Ahmed, Scott Fujimoto, Jeremy Georges-Filteau, Christopher Glasz, Barleen Kaur, Auguste Lalande, Shruti Bhanderi, Robert Belfer, Nirmal Kanagasabai, Roman Sarrazingen- dron, Rohit Verma, and Derek Ruths. 2018. Senti- ment analysis: It's complicated! In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1886-1895, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Understanding writing quality change: A longitudinal study of repeaters of a high-stakes standardized english proficiency test",
"authors": [
{
"first": "You-Min",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michelle",
"middle": [
"Y"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Language Testing",
"volume": "37",
"issue": "4",
"pages": "523--549",
"other_ids": {
"DOI": [
"10.1177/0265532220925448"
]
},
"num": null,
"urls": [],
"raw_text": "You-Min Lin and Michelle Y. Chen. 2020. Understand- ing writing quality change: A longitudinal study of repeaters of a high-stakes standardized english profi- ciency test. Language Testing, 37(4):523-549.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multiplicity and word sense: evaluating and learning from multiply labeled word sense annotations. Language Resources and Evaluation",
"authors": [
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Bhardwaj",
"suffix": ""
},
{
"first": "Ansaf",
"middle": [],
"last": "Salleb-Aouissi",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Ide",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "46",
"issue": "",
"pages": "219--252",
"other_ids": {
"DOI": [
"10.1007/s10579-012-9188-x"
]
},
"num": null,
"urls": [],
"raw_text": "Rebecca J. Passonneau, Vikas Bhardwaj, Ansaf Salleb- Aouissi, and Nancy Ide. 2012. Multiplicity and word sense: evaluating and learning from multi- ply labeled word sense annotations. Language Re- sources and Evaluation, 46(2):219-252.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The benefits of a model of annotation",
"authors": [
{
"first": "Rebecca",
"middle": [
"J"
],
"last": "Passonneau",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Carpenter",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "187--195",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca J. Passonneau and Bob Carpenter. 2013. The benefits of a model of annotation. In Proceedings of the 7th Linguistic Annotation Workshop and Interop- erability with Discourse, pages 187-195, Sofia, Bul- garia. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Comparing Bayesian models of annotation. Transactions of the Association for",
"authors": [
{
"first": "Silviu",
"middle": [],
"last": "Paun",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Carpenter",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Udo",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "571--585",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00040"
]
},
"num": null,
"urls": [],
"raw_text": "Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018. Comparing Bayesian models of annotation. Trans- actions of the Association for Computational Lin- guistics, 6:571-585.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Inherent disagreements in human textual inferences",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "677--694",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00293"
]
},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transac- tions of the Association for Computational Linguis- tics, 7:677-694.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Human uncertainty makes classification more robust",
"authors": [
{
"first": "Joshua",
"middle": [
"C"
],
"last": "Peterson",
"suffix": ""
},
{
"first": "Ruairidh",
"middle": [
"M"
],
"last": "Battleday",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths, and Olga Russakovsky. 2019. Human un- certainty makes classification more robust. 2019",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "IEEE/CVF International Conference on Computer Vision (ICCV)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "9616--9625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IEEE/CVF International Conference on Computer Vision (ICCV), pages 9616-9625.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Linguistically debatable or just plain wrong?",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "507--511",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2083"
]
},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Linguistically debatable or just plain wrong? In Pro- ceedings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 507-511, Baltimore, Maryland. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The Companion to Semantics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimo Poesio. 2020. Ambiguity. In Daniel Gutz- mann, Lisa Matthewson, and C\u00e9cile Meier and Hotze Rullmann and Thomas Ede Zimmermann, ed- itors, The Companion to Semantics. Wiley.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The reliability of anaphoric annotation, reconsidered: Taking ambiguity into account",
"authors": [
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL Workshop on Frontiers in Corpus Annotation",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimo Poesio and Ron Artstein. 2005. The relia- bility of anaphoric annotation, reconsidered: Taking ambiguity into account. In Proc. of ACL Workshop on Frontiers in Corpus Annotation, pages 76-83.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Semantic Mechanisms of Humor. D. Reidel, Dordrecht and Boston",
"authors": [
{
"first": "",
"middle": [],
"last": "Victor Raskin",
"suffix": ""
}
],
"year": 1985,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Raskin. 1985. Semantic Mechanisms of Humor. D. Reidel, Dordrecht and Boston.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Identity, non-identity, and near-identity: Addressing the complexity of coreference",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Ed",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "M",
"middle": [
"Antonia"
],
"last": "Mart\u00ed",
"suffix": ""
}
],
"year": 2011,
"venue": "Lingua",
"volume": "121",
"issue": "6",
"pages": "1138--1152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens, Ed Hovy, and M. Antonia Mart\u00ed. 2011. Identity, non-identity, and near-identity: Ad- dressing the complexity of coreference. Lingua, 121(6):1138-1152.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "11 evaluation of nlp systems. The handbook of computational linguistics and natural language processing",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik and Jimmy Lin. 2010. 11 evaluation of nlp systems. The handbook of computational lin- guistics and natural language processing, 57.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Deep learning from crowds",
"authors": [
{
"first": "Filipe",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "Francisco",
"middle": [
"C"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filipe Rodrigues and Francisco C. Pereira. 2018. Deep learning from crowds. In AAAI Conference on Arti- ficial Intelligence.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "LabelMe: A database and Web-based tool for image annotation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Bryan",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"P"
],
"last": "Torralba",
"suffix": ""
},
{
"first": "William",
"middle": [
"T"
],
"last": "Murphy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Freeman",
"suffix": ""
}
],
"year": 2008,
"venue": "International Journal of Computer Vision",
"volume": "77",
"issue": "",
"pages": "157--173",
"other_ids": {
"DOI": [
"10.1007/s11263-007-0090-8"
]
},
"num": null,
"urls": [],
"raw_text": "Bryan C. Russell, Antonio Torralba, Kevin P. Mur- phy, and William T. Freeman. 2008. LabelMe: A database and Web-based tool for image annotation. International Journal of Computer Vision, 77:157- 173.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Predicting humorousness and metaphor novelty with Gaussian process preference learning",
"authors": [
{
"first": "Edwin",
"middle": [],
"last": "Simpson",
"suffix": ""
},
{
"first": "Erik-L\u00e2n Do",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5716--5728",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1572"
]
},
"num": null,
"urls": [],
"raw_text": "Edwin Simpson, Erik-L\u00e2n Do Dinh, Tristan Miller, and Iryna Gurevych. 2019. Predicting humorousness and metaphor novelty with Gaussian process prefer- ence learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 5716-5728, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Lexical Ambiguity in Poetry. Longman",
"authors": [
{
"first": "Soon",
"middle": [
"P"
],
"last": "Su",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soon P. Su. 1994. Lexical Ambiguity in Poetry. Long- man, London.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Semeval-2021 task 12: Learning with disagreements",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Uma",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Fornaciari",
"suffix": ""
},
{
"first": "Anca",
"middle": [],
"last": "Dumitrache",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Fifteenth Workshop on Semantic Evaluation. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Uma, Tommaso Fornaciari, Anca Dumitra- che, Tristan Miller, Jon Chamberlain, Barbara Plank, and Massimo Poesio. 2021. Semeval-2021 task 12: Learning with disagreements. In Proceedings of the Fifteenth Workshop on Semantic Evaluation. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A case for soft-loss functions",
"authors": [
{
"first": "Alexandra",
"middle": [],
"last": "Uma",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Fornaciari",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Silviu",
"middle": [],
"last": "Paun",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 8th AAAI Conference on Human Computation and Crowdsourcing",
"volume": "",
"issue": "",
"pages": "173--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandra Uma, Tommaso Fornaciari, Dirk Hovy, Sil- viu Paun, Barbara Plank, and Massimo Poesio. 2020. A case for soft-loss functions. In Proceedings of the 8th AAAI Conference on Human Computation and Crowdsourcing, pages 173-177.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Best practices for the human evaluation of automatically generated text",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Van Der Lee",
"suffix": ""
},
{
"first": "Albert",
"middle": [],
"last": "Gatt",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Emiel Van Miltenburg",
"suffix": ""
},
{
"first": "Emiel",
"middle": [],
"last": "Wubben",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Krahmer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 12th International Conference on Natural Language Generation",
"volume": "",
"issue": "",
"pages": "355--368",
"other_ids": {
"DOI": [
"10.18653/v1/W19-8643"
]
},
"num": null,
"urls": [],
"raw_text": "Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 355-368, Tokyo, Japan. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Ambiguity: Language and Communication",
"authors": [],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susanne Winkler, editor. 2015. Ambiguity: Language and Communication. De Gruyter.",
"links": null
}
},
"ref_entries": {}
}
}