|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:10:53.203403Z" |
|
}, |
|
"title": "Modeling Ambiguity with Many Annotators and Self-Assessments of Annotator Certainty", |
|
"authors": [ |
|
{ |
|
"first": "Melanie", |
|
"middle": [], |
|
"last": "Andresen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Stuttgart", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Vauth", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Heike", |
|
"middle": [], |
|
"last": "Zinsmeister", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Hamburg", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Most annotation efforts assume that annotators will agree on labels, if the annotation categories are well-defined and documented in annotation guidelines. However, this is not always true. For instance, content-related questions such as 'Is this sentence about topic X?' are unlikely to elicit the same answer from all annotators. Additional specifications in the guidelines are helpful to some extent, but can soon get overspecified by rules that cannot be justified by a research question. In this study, we model the semantic category 'illness' and its use in a gradual way. For this purpose, we (i) ask many annotators (30 votes per item, 960 items) for their opinion in a crowdsourcing experiment, (ii) ask annotators to indicate their certainty with respect to their annotation, and (iii) compare this across two different text types. We show that results of multiple annotations and average annotator certainty correlate, but many ambiguities can only be captured if several people contribute. The annotated data allow us to filter for sentences with high or low agreement and analyze causes of disagreement, thus getting a better understanding of people's perception of illness-as an example of a semantic category-as well as of the content of our annotated texts.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Most annotation efforts assume that annotators will agree on labels, if the annotation categories are well-defined and documented in annotation guidelines. However, this is not always true. For instance, content-related questions such as 'Is this sentence about topic X?' are unlikely to elicit the same answer from all annotators. Additional specifications in the guidelines are helpful to some extent, but can soon get overspecified by rules that cannot be justified by a research question. In this study, we model the semantic category 'illness' and its use in a gradual way. For this purpose, we (i) ask many annotators (30 votes per item, 960 items) for their opinion in a crowdsourcing experiment, (ii) ask annotators to indicate their certainty with respect to their annotation, and (iii) compare this across two different text types. We show that results of multiple annotations and average annotator certainty correlate, but many ambiguities can only be captured if several people contribute. The annotated data allow us to filter for sentences with high or low agreement and analyze causes of disagreement, thus getting a better understanding of people's perception of illness-as an example of a semantic category-as well as of the content of our annotated texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Natural language is full of phenomena of ambiguity and uncertainty. However, we do not yet have a standard procedure for integrating ambiguities in formal models-or even for identifying ambiguities in the first place. Most supervised machine learning tasks assume that there is a ground truth-an intersubjectively correct annotation. Algorithms rely on this as unambiguous training data. For the most part, annotation efforts assume that multiple annotators will agree on labels, if annotation categories are welldefined and the annotation guidelines are clear and comprehensive. Hence, low agreement scores are considered to indicate poor data quality. However, there is also an increasing awareness that this is not always the case (Poesio and Artstein, 2005; Beigman Klebanov et al., 2008; Morris, 2010; Rohde et al., 2016; Amidei et al., 2018; Pavlick and Kwiatkowski, 2019) . This is backed up by findings in cognitive science and linguistics that suggest that language phenomena are predominantly gradual in nature instead of being discrete categories, for instance in prototype theory (Lakoff, 1987) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 734, |
|
"end": 761, |
|
"text": "(Poesio and Artstein, 2005;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 762, |
|
"end": 792, |
|
"text": "Beigman Klebanov et al., 2008;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 793, |
|
"end": 806, |
|
"text": "Morris, 2010;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 807, |
|
"end": 826, |
|
"text": "Rohde et al., 2016;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 827, |
|
"end": 847, |
|
"text": "Amidei et al., 2018;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 878, |
|
"text": "Pavlick and Kwiatkowski, 2019)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1092, |
|
"end": 1106, |
|
"text": "(Lakoff, 1987)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Two common possible ways of capturing ambiguity in text are asking more than one annotator to annotate a category and/or asking annotators to explicitly mark ambiguities or uncertainties. In the study conducted for this paper, we combine both strategies to model the semantic category 'illness' and its use in a gradual way. Illness lends itself to this type of analysis, because it is a highly socially constructed concept that makes ambiguities and disagreements likely. Furthermore, this topic is of interest to linguists, humanists and social scientists alike and thus facilitated cooperation in our project hermA (Gaidys et al., 2017) . We use crowdsourcing in order to get the input of many annotators and use their overall vote as a measure of how clearly a sentence belongs to the topic 'illness'. In addition, we ask annotators to indicate their certainty with respect to their annotation. Our aim is to investigate to what extent multiple annotations and self-assessments yield similar results, deriving recommendations for future research. Furthermore, we identify reasons for disagreement in a qualitative analysis contrasting controversial and non-controversial sentences. Our data comprise sentences from German literary texts and transcripts of political debates, thus allowing for a comparison of text types.", |
|
"cite_spans": [ |
|
{ |
|
"start": 618, |
|
"end": 639, |
|
"text": "(Gaidys et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this section we present previous research dealing with ambiguity in annotation settings with multiple annotators. With regard to agreement and reliability, Potter and Levine-Donnerstein (1999) (in the context of content analysis) differentiate three types of content to be annotated: 'manifest content (directly observable events), pattern latent content (events that need to be inferred indirectly from the observations), and projective latent content (loosely said, events that require a subjective interpretation from the annotator)' (Reidsma and op den Akker, 2008, 8, summarizing Potter and Levine-Donnerstein, 1999, 261) . The type of question asked in this paper ('Is this sentence about topic X?') involves projective latent content and some disagreement is to be expected. When annotating something of which people have an everyday understanding, it can be more helpful to rely on the 'coders' existing schema' instead of defining more and more detailed annotation rules (Potter and Levine-Donnerstein, 1999, 260) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 540, |
|
"end": 552, |
|
"text": "(Reidsma and", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 553, |
|
"end": 598, |
|
"text": "op den Akker, 2008, 8, summarizing Potter and", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 599, |
|
"end": 629, |
|
"text": "Levine-Donnerstein, 1999, 261)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 983, |
|
"end": 1025, |
|
"text": "(Potter and Levine-Donnerstein, 1999, 260)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our study, we combine multiple annotations and self-assessment of the annotators' certainty, as has been done before by Poesio and Artstein (2005) . They study the annotation of anaphora in dialogue data, where many references are vague without leading to misunderstandings. (See also Versley (2006) for a detailed analysis of causes of ambiguities in coreference annotation.) The study attempts to capture ambiguities by, on the one hand, letting 18 students annotate the same text and, on the other hand, giving annotators the option of marking more than one antecedent if they perceive the reference as ambiguous. They conclude that it is important to consider cases of 'implicit ambiguity' which only emerge in the disagreements of multiple annotators and the individual annotator is not aware of. Nedoluzhko and M\u00edrovsk\u00fd (2013) report on the annotation of coreference and bridging relations in the Prague Dependency Treebank. They let annotators explicitly mark how certain they were about each annotated item. The analysis shows a correlation between annotator certainty and agreement, but also reveals many cases of disagreement despite high certainty. They agree with Poesio and Artstein (2005) that ambiguity can be captured more fully by using multiple annotators instead of only letting annotators mark it explicitly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 149, |
|
"text": "Poesio and Artstein (2005)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 302, |
|
"text": "Versley (2006)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 805, |
|
"end": 835, |
|
"text": "Nedoluzhko and M\u00edrovsk\u00fd (2013)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1179, |
|
"end": 1205, |
|
"text": "Poesio and Artstein (2005)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Further studies consider disagreements between annotators as valuable information. Rohde et al. (2016) conduct a crowdsourcing experiment to study which discourse adverbials licence which conjunctions (e. g. a clause with instead can start with the conjunctions but, so, or and). 28 annotators were presented with sentences with one of 20 adverbials and a gap for a possible conjunction. The results show that all adverbials have one to three conjunctions that participants considered acceptable, depending on context as well as individual preference. This variability could only be captured by a high number of annotators per item (similar: Scholman and Demberg, 2017). Morris and Hirst (2004) investigate subjectivity in text interpretation. They let five annotators identify semantically related word groups in a text and specify the semantic relations between the words. While the annotators agree on some core words, individual differences are large. Morris (2010) extends this method and concludes that '40% of the lexical cohesion perceived in text is subjectively interpreted' (Morris, 2010, 141) and therefore argues for a more reader-oriented modeling of text in computational linguistics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 102, |
|
"text": "Rohde et al. (2016)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 671, |
|
"end": 694, |
|
"text": "Morris and Hirst (2004)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 956, |
|
"end": 969, |
|
"text": "Morris (2010)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1085, |
|
"end": 1104, |
|
"text": "(Morris, 2010, 141)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Annotation with multiple annotators has been explored in literary studies, as polyvalence is an important textual characteristic for the definition of literariness. Gius and Jacke (2017) pose the question of how the falsifiability of interpretations can be guaranteed. Their proposal is to use computer-aided narratological annotation to document interpretative decisions, which in turn can make conflicting interpretations visible. While some textual ambiguities do not have consequences for the overall interpretation, some ambiguities result in more than one possible interpretation of a literary text. Hammond et al. (2013) target ambiguity in To the Lighthouse by Virginia Woolf. The novel makes extensive use of free indirect speech that cannot be attributed unambiguously to one character or the narrator. To capture possible attributions, they let three to four student annotators analyze the same text span. They reach a raw agreement of slightly less than 70%, however, for many text spans more than one analysis is valid.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 186, |
|
"text": "Gius and Jacke (2017)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 606, |
|
"end": 627, |
|
"text": "Hammond et al. (2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Multiple annotations have also been exploited for machine learning. Plank et al. (2014) use the information of annotator agreement to improve POS-tagging. In training their classification model, they sanction mistakes on tags with high inter-annotator agreement more heavily than on tags with low agreement. Their experiments result in annotation improvements in several evaluation settings. Reidsma and op den Akker (2008) optimize their classifiers for high precision by allowing the classifier to make no decision on low-agreement parts of the data. Pavlick and Kwiatkowski (2019) work on entailment, i. e. the question whether the proposition of a sentence B can be inferred from the proposition of a sentence A. They ask 50 annotators for their judgment on several hundred sentence pairs and show that the disagreement in the annotations cannot be attributed to noise only, but indicates that different interpretations are possible for many sentence pairs. They argue that computational models for textual entailment should therefore produce a full distribution of possible human answers instead of just one aggregated score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 87, |
|
"text": "Plank et al. (2014)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 392, |
|
"end": 423, |
|
"text": "Reidsma and op den Akker (2008)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 553, |
|
"end": 583, |
|
"text": "Pavlick and Kwiatkowski (2019)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In a previous study of our own project, Adelmann et al. (2019) annotated illness and compared an approach based on semantic fields with manual annotations. The agreement between the two annotators was rather low. The attempt to rectify this problem by improving the annotation guidelines led to the inclusion of very detailed rules that for the most part could not be justified by the demands of the research questions. In truth, illness is simply not a discrete concept, but can be present in a sentence to varying degrees. For this reason, we decided to approach the annotation of illness in a different way. We conducted a crowdsourcing study on the decision task whether a sentence is about illness or not. Similar to Poesio and Artstein (2005) and Nedoluzhko and M\u00edrovsk\u00fd (2013) , we combine multiple annotators with a self-assessment of annotator certainty. Closer to Pavlick and Kwiatkowski (2019), we harvested a statistically relevant number of 30 judgements per sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 722, |
|
"end": 748, |
|
"text": "Poesio and Artstein (2005)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 753, |
|
"end": 783, |
|
"text": "Nedoluzhko and M\u00edrovsk\u00fd (2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Data. The sentences that were presented to the participants were extracted from two corpora that were compiled in the digital humanities project hermA (Gaidys et al., 2017) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 172, |
|
"text": "(Gaidys et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Fiction Corpus: 40 novels from the dystopian genre (2000-2019), 135,000 sentences \u2022 Transcript Corpus: written versions of speeches from the German federal parliament ('Bundestag'), filtered for texts that cover an aspect of the topic of telemedicine, 990,000 sentences", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We chose these two corpora because we wanted to cover different text types that we assume to differ with respect to ambiguity: Literary texts are said to be especially ambiguous and authors play with ambiguity to create artistic value. Political speeches aim more at a common understanding and should be as clear as possible, but can also be deliberately ambiguous. Both corpora were split into sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Sentence Selection. Sentences were sampled randomly. To ensure that the topic illness was present in a relatively large proportion of the sample, we filtered for sentences that contain a lexical item from the 'semantic field' (Lehrer, 1974; Vassilyev, 1974) of illness, which we realized as hyponyms of the term illness in GermaNet (Hamp and Feldweg, 1997; Henrich and Hinrichs, 2010 ). 2 For each corpus, we included 380 sentences with a lexical item related to illness and 100 without such a word, resulting in 960 items in total.", |
|
"cite_spans": [ |
|
{ |
|
"start": 226, |
|
"end": 240, |
|
"text": "(Lehrer, 1974;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 257, |
|
"text": "Vassilyev, 1974)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 332, |
|
"end": 356, |
|
"text": "(Hamp and Feldweg, 1997;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 383, |
|
"text": "Henrich and Hinrichs, 2010", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
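A minimal sketch of this selection step, assuming the GermaNet hyponym list has already been expanded to a flat list of word forms and that the corpus sentences are available as plain strings; variable and function names here are illustrative, not the project's actual code.

```python
import random

def sample_items(sentences, field_wordforms, n_with_field=380, n_without=100, seed=0):
    """Split sentences by whether they contain a semantic field word, then sample both groups."""
    field = {w.lower() for w in field_wordforms}
    with_field, without_field = [], []
    for sent in sentences:
        # naive whitespace tokenization; a real pipeline would lemmatize and strip punctuation properly
        tokens = {t.lower().strip(".,;:!?") for t in sent.split()}
        (with_field if tokens & field else without_field).append(sent)
    rng = random.Random(seed)
    return rng.sample(with_field, n_with_field), rng.sample(without_field, n_without)

# Per corpus: 380 + 100 = 480 items; both corpora together give the 960 items of the study.
# illness_items, control_items = sample_items(corpus_sentences, illness_wordforms)
```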
|
{ |
|
"text": "Question Design. The selected sentences were presented to the annotators with a context of five sentences before and after the target sentence. Less context was included if the text began or ended in this span. The target sentence was printed in bold. The annotators were presented with two or. three questions: The first one asked for a binary judgment in response to the question: Is the topic of illness discussed in the sentence printed in bold? ('Wird im fett gedruckten Satz das Thema Krankheit thematisiert?'). Given the condition that the answer to this question was 'yes', there was a follow-up question about topic centrality: Annotators were asked whether illness is the central or rather a marginal topic of the sentence. The final question asked for the annotators' certainty: How certain are you about the answer to question 1? ('Wie sicher bist Du Dir bei der Antwort zu Frage 1?'). Possible answers were very certain ('sehr sicher'), rather certain ('eher sicher'), rather uncertain ('eher unsicher'), and very uncertain ('sehr unsicher').", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Guidelines. As already mentioned above, the goal of this study was not to review strictly formalized annotation guidelines nor the creation of a consensual gold standard. Since we want to depict ambiguities through annotations, we decided to use extremely minimalist guidelines. In the task description, we even pointed out that we are interested in the subjective assessments of the annotators. However, we gave three example sentences: In the first, illness is the central topic of the sentence (a), in the second, illness is a marginal topic (b), and in the third, illness is not discussed (c). (a) Frank liegt schon seit einer Woche mit Fieber im Bett. 'Frank has been in bed with a fever for a week.' (b) Ich freue mich sehr darauf, [. . . ] meine Cousine zu treffen, die lange erk\u00e4ltet war.", |
|
"cite_spans": [ |
|
{ |
|
"start": 738, |
|
"end": 746, |
|
"text": "[. . . ]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "'I am very much looking forward to meeting my cousin [. . . ] , who has had a cold for a long time-.' (c) Zum Fr\u00fchst\u00fcck esse ich am liebsten M\u00fcsli.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 61, |
|
"text": "[. . . ]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "'For breakfast I prefer to eat cereal.' The question whether a sentence deals with illness is an individual, conceptual decision. By using minimalist guidelines, we hope to cover disagreements caused by conceptual differences as to what the annotators consider to be illness as well as disagreements caused by grammar or style.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Annotation Procedure. We collected 30 judgments for each of the 960 items. For the annotation procedure, we used the crowdsourcing platform Appen The crowdworkers were paid $0.70 for each annotated page with ten sentences each. Based on a pretest, we assumed that this would result in more than a minimum wage of $10 for annotators working at average speed. During the annotation process it became apparent that the crowdsourcing platform could not supply us with a sufficient number of German speaking annotators. For this reason we additionally asked the students at our university to participate in the study. The students were paid e0.80 per page.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "For purposes of quality control, a number of test questions were used to exclude annotators who either did not understand the task or did not intend to work seriously on the task. As test questions we selected sentences that we considered unambiguous with respect to the question whether they are about illness. However, some of the test sentences turned out to be more ambiguous than we expected. If the annotators gave arguments for their deviant answer, and thus showed that they were actively engaging with the task, we accepted their answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In total, 77 annotators participated in our study, 34 crowdworkers and 43 students. The extent of their work varies widely, ranging between 9 and 828 items. On average, participants annotated 374 items, with a huge standard deviation of 305 items. (See Figure 1 for the full distribution.) About half of our data set was annotated by crowdworkers, the other half by students. In order to identify a possible effect of annotator types, we compared the proportion of votes for illness per annotator from the two groups. There is a significant difference in the annotations of the sentences from the Fiction Corpus with semantic field words (n = 380, Mann-Whitney U test, U = 259.0, p< 0.001, students mean 0.64\u00b10.14, crowdworker mean 0.57\u00b10.07, rank-biserial correlation: \u22120.55). One possible reason for this difference is the fact that most of the students are studying language and literature and thus might approach the task differently than the crowdworkers. We do not consider this effect problematic and do not account for it in the analysis. There is no significant difference in the proportion of 'very sure' votes to the question on the annotators' certainty. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 261, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation Experiment 1", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Distribution of Annotation Categories. Figure 2a shows the distribution of answers to the first question: 'Is this sentence about illness?'. A value of 0 on the x axis means that all 30 annotators said that this sentence is not about illness, a value of 1 means that all 30 annotators said that the sentence is about illness. We can see that for most of the sentences without a semantic field word (orange bars), all annotators agreed that this sentence is not about illness. Only 21 of 200 sentences got at least some votes for illness. On the other hand, responses for sentences with a semantic field word are distributed much more widely (blue bars). About one quarter of the sentences is unanimously considered to be about illness (181), another quarter is unanimously considered not to be about illness (192) . About half of all sentences with a semantic field word caused some degree of disagreement between the annotators. We conclude that the absence of a semantic field word is a good indicator that the sentence is not about illness. However, the presence of a semantic field word still leaves us with about a 50:50 chance for the sentence to be about illness or not. This decision appears to be non-trivial for human annotators.", |
|
"cite_spans": [ |
|
{ |
|
"start": 808, |
|
"end": 813, |
|
"text": "(192)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 48, |
|
"text": "Figure 2a", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The second question ('How certain are you about your answer to question 1?') reveals that the annotators were, overall, rather confident about their answers: When looking at all 28,800 answers to this question, 81.9% of the time the annotators said they were 'very certain' about their assessment of the sentence, in 16.5% they were 'rather certain'. Only in very rare cases did the annotators indicate that they were 'rather uncertain' (1.5%) or even 'very uncertain' (0.2%). Aggregated to sentences this results in the distribution in Figure 2b . Despite the high number of 'very certain' votes overall, only 49 of 760 sentences with a semantic field word and 69 of 200 sentences without a semantic field word get 'very certain' votes only. This is because the annotators had very individual certainty profiles: The proportion of 'very sure' votes per annotator ranges between 0 (one annotator) and 1 (eight annotators), with a mean of 0.78 \u00b1 0.20. In 362 cases an annotator (55 different annotators) asserted to be very sure about the topic annotation, but annotated against the majority vote. 17 different annotators declared at least once to be very sure while all other annotators were of the opposite opinion.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 537, |
|
"end": 546, |
|
"text": "Figure 2b", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Agreement. For the calculation of inter-annotator agreement, we use the coefficient Krippendorff's \u03b1 (Krippendorff, 1980; Krippendorff, 2013) 3 . This coefficient does not require all items to be annotated by the same annotators (see Artstein and Poesio, 2008 , for an overview). Following the recommendations of Artstein (2017, 304), we additionally calculate the agreement scores for our two subcorpora and two conditions (semantic field word vs. non semantic field word) individually.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 121, |
|
"text": "(Krippendorff, 1980;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 141, |
|
"text": "Krippendorff, 2013)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 259, |
|
"text": "Artstein and Poesio, 2008", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
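The paper computes \u03b1 with the R package 'irr' (see footnote 3); the following is a minimal, self-contained sketch of nominal Krippendorff's \u03b1 for data in which each item may be judged by a different number of annotators, as is the case here. Only the labels collected per item are needed; all names are illustrative, not the project's actual code.

```python
from collections import Counter

def krippendorff_alpha_nominal(items):
    """Krippendorff's alpha for nominal labels; `items` is a list of per-item label lists."""
    coincidence = Counter()  # (label_c, label_k) -> coincidence count
    for labels in items:
        m = len(labels)
        if m < 2:
            continue  # items with fewer than two judgments carry no agreement information
        counts = Counter(labels)
        for c in counts:
            for k in counts:
                pairs = counts[c] * (counts[k] - (1 if c == k else 0))
                coincidence[(c, k)] += pairs / (m - 1)
    marginals = Counter()
    for (c, _k), v in coincidence.items():
        marginals[c] += v
    n = sum(marginals.values())
    # nominal distance: 0 for identical labels, 1 otherwise
    d_observed = sum(v for (c, k), v in coincidence.items() if c != k) / n
    d_expected = sum(marginals[c] * marginals[k]
                     for c in marginals for k in marginals if c != k) / (n * (n - 1))
    return 1.0 - d_observed / d_expected

# e.g. krippendorff_alpha_nominal([["ill", "ill", "not"], ["not", "not", "not"]])
```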
|
{ |
|
"text": "Regarding the first question ('Is this sentence about illness?'), the agreement for the full data set is 0.690. This value indicates substantial agreement (Landis and Koch, 1977) . As Figure 2a suggests, the agreement varies depending on the presence of a semantic field word: The agreement on all sentences without a semantic field word is very high with 0.896. Sentences with a semantic field word achieve a much lower agreement of 0.658. This can be explained by the fact that most sentences without a semantic field word are totally unrelated to illness, making the question a trivial one. There is also a moderate difference in agreement between the two corpora: The agreement for the sentences of the Transcript Corpus is 0.756 and the agreement for the sentences of the Fiction Corpus is 0.637.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 178, |
|
"text": "Koch, 1977)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 193, |
|
"text": "As Figure 2a", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We did not calculate the agreement for the second question ('How certain are you about your answer to question 1?'), because in answering this question, the annotators do not judge the text shared by all annotators, but their individual annotation experience.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Overall, the agreement scores show a high annotation consistency, given the fact that the annotators did not get further instructions by annotation guidelines. There is a core idea of illness that is shared by most annotators. At the same time we see a considerable number of sentences where the annotators disagree. These sentences can be said to cover the peripheral understanding of illness that is only shared by subgroups of annotators. In the following section we will look more closely at how this relates to annotator certainty and possible causes of disagreement in the sentences themselves.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Topic Centrality. Annotators who said the sentence was about illness had to additionally specify whether illness is the central topic or only a marginal topic of the sentence. For this question, our annotators reach an agreement of 0.246, which is hardly above chance level. For this reason we will not analyze the data for this question in detail. The answers are correlated with the second question about annotator certainty: If the annotators considered illness the central topic, 83% were 'very certain' about their answer. If illness was only a marginal topic, only 58% were 'very certain'. We therefore assume that the (lacking) centrality of the topic is one of many possible reasons for uncertainty. The comprehensive assessment of causes of uncertainty would require a more complex question design.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Disagreement and Uncertainty. Our hypothesis was that if many annotators indicate a high uncertainty, the agreement for these items would be low. For the measurement of agreement per item we chose the highest proportion of participants that agree on one answer. As we have two categories, this is a value between 0.5 and 1 with 0.5 indicating that half of our annotators chose one answer and the other half the other answer, 1 indicating that all annotators agree (on either category). 4 Figure 3 shows the relationship between the agreement proportion on question 1 ('Is this sentence about illness?') on the y axis and the proportion of participants that were very certain about their answer on the x axis. Every data point is one possible value combination and the size corresponds to the number of sentences that match this value combination (between 1 and 129). We can see a clear correlation that is confirmed by a Pearson's correlation coefficient of 0.68 (p < 0.001): 5 If many participants were unsure about their answer, there is also much variation in their answers.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 488, |
|
"end": 496, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
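A minimal sketch of the two per-item quantities used here and of their correlation, assuming per-sentence vote and certainty lists are available under hypothetical variable names; pearsonr from SciPy returns the coefficient and p-value of the kind reported above.

```python
from scipy.stats import pearsonr

def agreement_proportion(votes):
    """Highest proportion of annotators agreeing on one answer (0.5..1 for two categories)."""
    return max(votes.count(v) for v in set(votes)) / len(votes)

def very_certain_proportion(certainties):
    """Proportion of annotators who chose 'very certain' for this item."""
    return certainties.count("very certain") / len(certainties)

# Hypothetical per-item data: 30 topic votes and 30 certainty ratings per sentence.
# agreement = [agreement_proportion(v) for v in votes_per_item]
# certainty = [very_certain_proportion(c) for c in certainty_per_item]
# r, p = pearsonr(certainty, agreement)  # the study reports r = 0.68, p < 0.001
```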
|
{ |
|
"text": "The correlation indicates that a considerable amount of variation can be captured by self-assessment of the annotators. However, there still remains a substantial amount of variation that is only captured Figure 3 : Relationship between the proportion of participants that agree on an answer and the proportion of participants indicating to be 'very certain' (n=960) by the combination of multiple annotators. We also have to keep in mind that, while the proportion of annotators that are very certain correlates with the answers of the group, this must not be true for the individual annotators. If the annotation targets a phenomenon where ambiguity is expected and the research question makes it desirable to capture it, multiple annotators are highly beneficial.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 205, |
|
"end": 213, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Reasons for Disagreement. To explore the reasons for disagreement, we evaluate the semantic field words and the sentences on which the annotators disagreed most. We calculate the average agreement of all sentences in which the semantic field words occur, see Table 1 . Among the semantic field words that occur in sentences with low average agreement is a group of words which refer to psychological states: Paranoia ('paranoia'), Wahn ('madness'), Traumata ('trauma'), Sucht ('addiction'), Schock ('shock') and Anfall ('seizure'). We assume that the low agreement values in sentences with these terms are caused by different opinions about whether these states have the status of a disease. Additionally, some of the terms describe only short-lived states that therefore have a debatable status: Anfall ('seizure'), Schock ('shock') and Husten ('cough'). We additionally inspected the ten words with the highest average agreement. For five of these words the annotators (almost) agreed that the sentences deal with illness. These words are specific names of diseases: Tuberkulose ('tuberculosis'), Diabetes ('diabetes'), Malaria ('malaria'), Krebs ('cancer'), Leuk\u00e4mie ('leukemia'). For the other five terms, the annotators agreed that they do not address any disease: Flechten ('lichens'), Attacke(n) ('attack(s)'), Verdr\u00e4ngung ('repression'), and Abh\u00e4ngigkeiten ('dependencies') are highly ambiguous, because they can describe a pathological state, but also have a completely separate semantic dimension.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 259, |
|
"end": 266, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to determine what causes disagreement above the lexical level, we manually inspected the 50 sentences with the lowest agreement scores and 50 random sentences with complete agreement. 38 of the sentences with low agreement belong to the Fiction Corpus. Some disagreements can be explained by the lexical phenomena described before: mental phenomena and short-lived states. In addition, some grammatical and stylistic phenomena prove to be important. One example are negations, which are either explicitly marked by a negator as in example (1), but can also be realized in a syntactically more complex way, as example (2) shows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(1) Kein Husten, kein Lebenszeichen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "'No cough, no sign of life.' (2) Das hei\u00dft jedoch auch, dass beispielsweise eine Frau, die vor vierunddrei\u00dfig Jahren geboren wurde, keine pers\u00f6nlichen Erinnerungen an k\u00f6rperliches Leiden besitzt. Table 1 : Semantic field words with the lowest and highest agreement scores 'However, this also means that, for instance, a woman born forty-three years ago has no personal memories of physical suffering.'", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 203, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Among the stylistic phenomena, metaphorical uses of disease symptoms in a non-medical context are the most frequent:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(3) Diese Schizophrenie findet sich auch in der\u00d6ffentlichkeit. 'This schizophrenia is also found in public.'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Finally, there are cases where the narrative representation of events may have led to low agreement values. In some sentences, the narrative instance makes speculative statements (example (4)) or presents the perspective of a narrated character (example (5)):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(4) Das Ergebnis mochte dann am Ende ein Wahn sein, wie er Branagorn befallen hatte. 'In the end, the result might have been a delusion, as it had infested Branagorn.' (5) Man erkl\u00e4rte es sich dann teils mit dem Schockzustand des Kindes und teils mit der erst heraufziehenden D\u00e4mmerung [...] . 'This was explained partly by the child's state of shock and partly by the approaching dawn [...] .'", |
|
"cite_spans": [ |
|
{ |
|
"start": 286, |
|
"end": 291, |
|
"text": "[...]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 386, |
|
"end": 391, |
|
"text": "[...]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The examples also show that several features which potentially cause disagreement among annotators can co-occur in a single sentence. Thus, in example (5) the behaviour of a character is attributed to a temporary state of shock and at the same time the narrator distances himself from this explanation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Influence of Text Type. We have seen above that the inter-annotator agreement for the sentences from the Transcript Corpus (0.756) was higher than for those from the Fiction Corpus (0.637). In Figure 4 we compare the two corpora from two additional perspectives: Figure 4a shows one box plot per corpus for the agreement per sentence, measured as the highest proportion of participants that agree. Visual inspection gives an indication that the annotators disagreed more in the literary texts than in the debate transcripts on the question of whether illness was addressed in the sentences. The mean agreement is 0.91 (\u00b10.13) for the Fiction Corpus sentences and 0.96 (\u00b10.09) for the Transcript Corpus sentences. The Mann-Whitney rank test confirms that this is a significant difference (U = 89024.0, p < 0.001).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 193, |
|
"end": 201, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 272, |
|
"text": "Figure 4a", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
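A minimal sketch of this corpus comparison, assuming per-sentence agreement scores for the two corpora are available as lists under hypothetical names; the rank-biserial effect size is derived from U with the common formula 1 - 2U/(n1*n2), whose sign depends on the order of the two samples.

```python
from scipy.stats import mannwhitneyu

def compare_corpora(scores_a, scores_b):
    """Mann-Whitney U test plus rank-biserial effect size for two independent samples."""
    u, p = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
    rank_biserial = 1 - 2 * u / (len(scores_a) * len(scores_b))
    return u, p, rank_biserial

# e.g. u, p, r_rb = compare_corpora(agreement_fiction, agreement_transcript)
```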
|
{ |
|
"text": "With a rank-biserial correlation of 0.23, the effect size is rather small. As Figure 4b shows, there are also differences between the corpora in how confident the annotators are. The mean proportion of 'very certain' annotators is 0.78(\u00b10.17) for the Fiction Corpus sentences and 0.86 (\u00b10.13) for the Transcript Corpus sentences. This is a statistically significant difference (U = 83232.5, p > 0.001) and the effect size is slightly larger (rank-biserial correlation: 0.28).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 87, |
|
"text": "Figure 4b", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The differences between the corpora can be explained by the vocabulary associated with the two text types. While the sentences for both corpora were selected by the same semantic field, the corpora largely cover different parts of the semantic field: Of 183 semantic field word types in our sentences, only 50 occur in sentences of both corpora. Table 2 presents the most common semantic field words for the two corpora. Only word forms of the general term Krankheit ('illness') are frequent in both corpora. Among the most frequent semantic field words in the transcripts are many abstract terms that have very general meanings: Missbrauch ('abuse/misuse'), Abh\u00e4ngigkeit ('addiction/dependence'), Vermeidung ('avoidance'), Komplex ('complex'). These are part of the more technical language that characterizes political discussions compared to literary texts. These words often occur in contexts that are not related to illness. This is also reflected in the overall annotation patterns: In the Transcript Corpus data, 59% of all sentences with full agreement were annotated as not being about illness while this is only true for 40% of sentences from the Fiction Corpus. Beyond that, the top ten include two very specific disease terms, Aids ('aids') and Krebs ('cancer') that will hardly cause disagreement.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 346, |
|
"end": 353, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The sentences from the Fiction Corpus, on the other hand, are more concrete. Novels tell the story of a character or a small group of characters, depicting the inner life of these characters. To this end, these texts tend to describe mental states that cannot be clearly classified as symptoms of illness. Furthermore, novels also include words that refer to (mostly) minor symptoms like Husten ('cough'), whose status as an illness is debatable and which would usually not be discussed in parliament.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In our study, we tested two ways of capturing and modeling ambiguity in texts: By asking many annotators for their judgment and by asking the annotators for a meta annotation about their annotation certainty. We found that low annotator certainty and high variation in judgments are highly correlated. However, many data points that individual annotators were certain about did display variation. In addition, the correlation need not be given for every annotator or even any annotator individually. We conclude that multiple annotations are a useful means to identify and document ambiguity in texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Disagreement between our annotators was caused by 1) different concepts of what qualifies as illness (mental phenomena and very short-lived states are controversial), 2) grammatical phenomena like negation, and 3) stylistic properties of the text such as metaphors. Especially in the Fiction Corpus, the information conveyed by the texts can also be ambiguous because the narrative is imprecise, vague or character-driven, as is typical of literary narratives. In order to further differentiate the causes of disagreement and annotator uncertainty, a more comprehensive study would be necessary that gives the annotators more space to give reasons for why they think a sentence is (not) about illness. Some of the causes of disagreement could easily be avoided by annotation guidelines. We decided against the use of guidelines because the aim of our study was to explore the whole range of possible views on illness in our data. This range could be used for inductively specifying categories for specific guidelines. In addition, if the research objective allowed for a clear position on whether mental phenomena are supposed to be annotated as illness or whether negated mentions of illness are supposed to be annotated, guidelines clarifying these points are highly recommended. However, beyond the definitions we can derive inductively or from a research question there will most likely be space for individual interpretation. We would like to encourage researchers to regard this variation not as a problem to be fixed but something that can be incorporated into our modeling of the world and in our analyses.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The semantic field as a tool to search for specific topics is ambivalent: While the absence of a semantic field word was a good indicator that the sentence is not about illness, the presence of a semantic field word only resulted in a 50% chance of the sentence being about illness. Based on our annotations we can derive a weighted semantic field that can, for instance, be filtered to get a core semantic field. This allows for a reduction of hits to only those sentences that most people consider to be about illness. However, this would systematically exclude phenomena from the data set, as especially psychological phenomena led to disagreements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "5" |
|
}, |
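A minimal sketch of deriving such a weighted field, assuming a mapping from each semantic field word to the 'about illness' vote proportions of the sentences containing it; the data structure, names, and threshold are illustrative assumptions, not values from the study.

```python
def weighted_semantic_field(word_to_vote_proportions, core_threshold=0.8):
    """Weight each field word by the mean illness-vote proportion of its sentences; filter a core field."""
    weights = {word: sum(props) / len(props)
               for word, props in word_to_vote_proportions.items() if props}
    core = {word for word, weight in weights.items() if weight >= core_threshold}
    return weights, core

# Illustrative input, not figures from the study:
# weights, core = weighted_semantic_field({"Tuberkulose": [1.0, 0.97], "Husten": [0.43, 0.6]})
```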
|
{ |
|
"text": "With respect to text types, our study revealed more ambiguous instances in the Fiction Corpus than in the Transcript Corpus. While one might consider metaphors to be the cause of ambiguity in literary text, this is not what the inspection of sentences with low agreement indicates. Instead, the transcripts are characterized by many abstract terms like Abh\u00e4ngigkeiten ('dependencies, addictions') which are mostly used in a way that is clearly unrelated to illness and do not cause any disagreement. The Fiction Corpus, on the other hand, names many minor symptoms (Husten, 'cough') as everyday situations are described and can also report the inner life of characters.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In the future, we plan to explore possibilities for training machine learning models on the data presented here. By showing how ambiguity levels can be represented by multiple annotations, we hope to prepare for the creation of a complex gold standard that incorporates conflicting evidence (Reidsma and op den Akker, 2008; Passonneau and Carpenter, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 323, |
|
"text": "(Reidsma and op den Akker, 2008;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 355, |
|
"text": "Passonneau and Carpenter, 2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Future Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We published the annotations as a Zenodo dataset. See https://doi.org/10.5281/zenodo.4088446.2 We extended the original list of 586 hyponyms with inflectional variants using the SMOR(Schmid et al., 2004) derivate Zmorge(Sennrich and Kunz, 2014), resulting in a list of 2,026 wordforms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As implemented in the R package 'irr' (https://cran.r-project.org/web/packages/irr/irr.pdf). 4 This is similarly captured by the very common measure of entropy, however, we consider a linear measure more appropriate for our interpretation.5 This effect is robust even if all items with an agreement of 1 are excluded (r = 0.57).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work on this paper was funded by the Landesforschungsf\u00f6rderung Hamburg (LFF-FV 35) in the context of the project hermA (Gaidys et al., 2017) at Universit\u00e4t Hamburg and Hamburg University of Technology. We thank Piklu Gupta and Carla S\u00f6kefeld for proofreading. All remaining errors are our own.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 144, |
|
"text": "(Gaidys et al., 2017)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Evaluation of a Semantic Field-Based Approach to Identifying Text Sections about Specific Topics", |
|
"authors": [ |
|
{
"first": "Benedikt",
"middle": [],
"last": "Adelmann",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Andresen",
"suffix": ""
},
{
"first": "Anke",
"middle": [],
"last": "Begerow",
"suffix": ""
},
{
"first": "Lina",
"middle": [],
"last": "Franken",
"suffix": ""
},
{
"first": "Evelyn",
"middle": [],
"last": "Gius",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Vauth",
"suffix": ""
}
|
], |
|
"year": 2019, |
|
"venue": "DH 2019. Book of Abstracts", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benedikt Adelmann, Melanie Andresen, Anke Begerow, Lina Franken, Evelyn Gius, and Michael Vauth. 2019. Evaluation of a Semantic Field-Based Approach to Identifying Text Sections about Specific Topics. In DH 2019. Book of Abstracts.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Rethinking the agreement in human evaluation tasks", |
|
"authors": [ |
|
{ |
|
"first": "Jacopo", |
|
"middle": [], |
|
"last": "Amidei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Piwek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alistair", |
|
"middle": [], |
|
"last": "Willis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3318--3329", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2018. Rethinking the agreement in human evaluation tasks. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3318-3329, Santa Fe, New Mexico, USA, August. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Inter-Coder Agreement for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Artstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "4", |
|
"pages": "555--596", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-Coder Agreement for Computational Linguistics. Computational Linguistics, 34(4):555-596, September.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Inter-annotator agreement", |
|
"authors": [ |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Artstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Handbook of Linguistic Annotation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "297--313", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ron Artstein. 2017. Inter-annotator agreement. In Nancy Ide and James Pustejovsky, editors, Handbook of Linguistic Annotation, pages 297-313. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Analyzing Disagreements", |
|
"authors": [ |
|
{
"first": "Beata",
"middle": [],
"last": "Beigman Klebanov",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Beigman",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Diermeier",
"suffix": ""
}
|
], |
|
"year": 2008, |
|
"venue": "Coling 2008: Proceedings of the Workshop on Human Judgements in Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2008. Analyzing Disagreements. In Coling 2008: Proceedings of the Workshop on Human Judgements in Computational Linguistics, pages 2-7, Manch- ester, UK, August. Coling 2008 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "hermA: Automated modelling of hermeneutic processes", |
|
"authors": [ |
|
{ |
|
"first": "Uta", |
|
"middle": [], |
|
"last": "Gaidys", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Evelyn", |
|
"middle": [], |
|
"last": "Gius", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Margarete", |
|
"middle": [], |
|
"last": "Jarchow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gertraud", |
|
"middle": [], |
|
"last": "Koch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Menzel", |
|
"suffix": "" |
|
},
{
"first": "Dominik",
"middle": [],
"last": "Orth",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Zinsmeister",
"suffix": ""
}
|
], |
|
"year": 2017, |
|
"venue": "Hamburger Journal f\u00fcr Kulturanthropologie", |
|
"volume": "", |
|
"issue": "7", |
|
"pages": "119--123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Uta Gaidys, Evelyn Gius, Margarete Jarchow, Gertraud Koch, Wolfgang Menzel, Dominik Orth, and Heike Zins- meister. 2017. hermA: Automated modelling of hermeneutic processes. Hamburger Journal f\u00fcr Kulturanthro- pologie, (7):119-123.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "The Hermeneutic Profit of Annotation. On preventing and fostering disagreement in literary text analysis", |
|
"authors": [ |
|
{ |
|
"first": "Evelyn", |
|
"middle": [], |
|
"last": "Gius", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janina", |
|
"middle": [], |
|
"last": "Jacke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "International Journal of Humanities and Arts Computing", |
|
"volume": "11", |
|
"issue": "2", |
|
"pages": "233--254", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evelyn Gius and Janina Jacke. 2017. The Hermeneutic Profit of Annotation. On preventing and fostering dis- agreement in literary text analysis. International Journal of Humanities and Arts Computing, 11(2):233-254.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "A Tale of Two Cultures: Bringing Literary Analysis and Computational Linguistics Together", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Hammond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Brooke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Workshop on Computational Linguistics for Literature", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Hammond, Julian Brooke, and Graeme Hirst. 2013. A Tale of Two Cultures: Bringing Literary Analysis and Computational Linguistics Together. In Proceedings of the Workshop on Computational Linguistics for Literature, pages 1-8. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "GermaNet -a Lexical-Semantic Net for German", |
|
"authors": [ |
|
{ |
|
"first": "Birgit", |
|
"middle": [], |
|
"last": "Hamp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Feldweg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Birgit Hamp and Helmut Feldweg. 1997. GermaNet -a Lexical-Semantic Net for German. In Automatic Infor- mation Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 9-15.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "GernEdiT -The GermaNet Editing Tool", |
|
"authors": [ |
|
{ |
|
"first": "Verena", |
|
"middle": [], |
|
"last": "Henrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erhard", |
|
"middle": [], |
|
"last": "Hinrichs", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC 2010)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2228--2235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Verena Henrich and Erhard Hinrichs. 2010. GernEdiT -The GermaNet Editing Tool. In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC 2010), pages 2228-2235, Valletta, Malta.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Content Analysis: An Introduction to Its Methodology. Number 5 in The Sage Commtext Series", |
|
"authors": [ |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Krippendorff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus Krippendorff. 1980. Content Analysis: An Introduction to Its Methodology. Number 5 in The Sage Commtext Series. Sage, Beverly Hills, California.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Content Analysis: An Introduction to Its Methodology", |
|
"authors": [ |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Krippendorff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Klaus Krippendorff. 2013. Content Analysis: An Introduction to Its Methodology. Sage, Los Angeles, third edition.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "What Categories Reveal about the Mind", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Lakoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George Lakoff. 1987. Women, Fire, and Dangerous Things. What Categories Reveal about the Mind. The University of Chicago Press, Chicago, London.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The measurement of observer agreement for categorical data", |
|
"authors": [ |
|
{ |

"first": "J", |

"middle": [ |

"Richard" |

], |

"last": "Landis", |

"suffix": "" |

}, |
|
{ |
|
"first": "Gary", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Koch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Biometrics", |
|
"volume": "33", |
|
"issue": "1", |
|
"pages": "159--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Bio- metrics, 33(1):159-174.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semantic fields and lexical structure", |
|
"authors": [ |
|
{ |
|
"first": "Adrienne", |
|
"middle": [], |
|
"last": "Lehrer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adrienne Lehrer. 1974. Semantic fields and lexical structure. North-Holland Publishing Company, Amsterdam.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "The Subjectivity of Lexical Cohesion in Text", |
|
"authors": [ |
|
{ |
|
"first": "Jane", |
|
"middle": [], |
|
"last": "Morris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Graeme", |
|
"middle": [], |
|
"last": "Hirst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "AAAI Spring Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jane Morris and Graeme Hirst. 2004. The Subjectivity of Lexical Cohesion in Text. AAAI Spring Symposium - Technical Report, 20.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Individual differences in the interpretation of text: Implications for information science", |
|
"authors": [ |
|
{ |
|
"first": "Jane", |
|
"middle": [], |
|
"last": "Morris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of the American Society for Information Science and Technology", |
|
"volume": "61", |
|
"issue": "1", |
|
"pages": "141--149", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jane Morris. 2010. Individual differences in the interpretation of text: Implications for information science. Journal of the American Society for Information Science and Technology, 61(1):141-149.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Annotators' Certainty and Disagreements in Coreference and Bridging Annotation in Prague Dependency Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Nedoluzhko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u00ed", |
|
"middle": [], |
|
"last": "M\u00edrovsk\u00fd", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Second International Conference on Dependency Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "236--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anna Nedoluzhko and Ji\u0159\u00ed M\u00edrovsk\u00fd. 2013. Annotators' Certainty and Disagreements in Coreference and Bridg- ing Annotation in Prague Dependency Treebank. In Proceedings of the Second International Conference on Dependency Linguistics (DepLing 2013), pages 236-243.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The Benefits of a Model of Annotation", |
|
"authors": [ |
|
{ |
|
"first": "Rebecca", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bob", |
|
"middle": [], |
|
"last": "Carpenter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "311--326", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rebecca J. Passonneau and Bob Carpenter. 2014. The Benefits of a Model of Annotation. Transactions of the Association for Computational Linguistics, 2:311-326, December.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Inherent disagreements in human textual inferences", |
|
"authors": [ |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "677--694", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677-694.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Learning part-of-speech taggers with inter-annotator agreement loss", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dirk", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "742--751", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Learning part-of-speech taggers with inter-annotator agree- ment loss. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 742-751.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The Reliability of Anaphoric Annotation, Reconsidered: Taking Ambiguity into Account", |
|
"authors": [ |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Artstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Massimo Poesio and Ron Artstein. 2005. The Reliability of Anaphoric Annotation, Reconsidered: Taking Am- biguity into Account. In Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky, pages 76-83, Ann Arbor, Michigan, June.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Rethinking validity and reliability in content analysis", |
|
"authors": [ |
|
{ |

"first": "W", |

"middle": [ |

"James" |

], |

"last": "Potter", |

"suffix": "" |

}, |
|
{ |
|
"first": "Deborah", |
|
"middle": [], |
|
"last": "Levine-Donnerstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Journal of Applied Communication Research", |
|
"volume": "27", |
|
"issue": "3", |
|
"pages": "258--284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. James Potter and Deborah Levine-Donnerstein. 1999. Rethinking validity and reliability in content analysis. Journal of Applied Communication Research, 27(3):258-284, August.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Exploiting 'subjective' annotations", |
|
"authors": [ |
|
{ |
|
"first": "Dennis", |
|
"middle": [], |
|
"last": "Reidsma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rieks", |
|
"middle": [], |
|
"last": "Op Den Akker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dennis Reidsma and Rieks op den Akker. 2008. Exploiting 'subjective' annotations. In Coling 2008: Proceedings of the workshop on Human Judgements in Computational Linguistics, pages 8-16, Manchester, UK, August. Coling 2008 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Filling in the Blanks in Understanding Discourse Adverbials: Consistency, Conflict, and Context-Dependence in a Crowdsourced Elicitation Task", |
|
"authors": [ |
|
{ |
|
"first": "Hannah", |
|
"middle": [], |
|
"last": "Rohde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Dickinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "Christopher", |

"middle": [ |

"N", |

"L" |

], |

"last": "Clark", |

"suffix": "" |

}, |

{ |

"first": "Annie", |

"middle": [], |

"last": "Louis", |

"suffix": "" |

}, |

{ |

"first": "Bonnie", |

"middle": [], |

"last": "Webber", |

"suffix": "" |

} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th Linguistic Annotation Workshop Held in Conjunction with ACL 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hannah Rohde, Anna Dickinson, Nathan Schneider, Christopher N. L. Clark, Annie Louis, and Bonnie Web- ber. 2016. Filling in the Blanks in Understanding Discourse Adverbials: Consistency, Conflict, and Context- Dependence in a Crowdsourced Elicitation Task. In Proceedings of the 10th Linguistic Annotation Workshop Held in Conjunction with ACL 2016 (LAW-X 2016), pages 49-58, Berlin, Germany, August.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "SMOR: A German computational morphology covering derivation, composition and inflection", |
|
"authors": [ |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arne", |
|
"middle": [], |
|
"last": "Fitschen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulrich", |
|
"middle": [], |
|
"last": "Heid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Helmut Schmid, Arne Fitschen, and Ulrich Heid. 2004. SMOR: A German computational morphology cov- ering derivation, composition and inflection. In Proceedings of the Fourth International Conference on Lan- guage Resources and Evaluation (LREC'04), Lisbon, Portugal, May. European Language Resources Associa- tion (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task", |
|
"authors": [ |
|
{ |
|
"first": "Merel", |
|
"middle": [], |
|
"last": "Scholman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 11th Linguistic Annotation Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Merel Scholman and Vera Demberg. 2017. Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task. In Proceedings of the 11th Linguistic Annotation Workshop, pages 24-33, Valencia, Spain, April. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Zmorge: A German morphological lexicon extracted from Wiktionary", |
|
"authors": [ |
|
{ |
|
"first": "Rico", |
|
"middle": [], |
|
"last": "Sennrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beat", |
|
"middle": [], |
|
"last": "Kunz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1063--1067", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rico Sennrich and Beat Kunz. 2014. Zmorge: A German morphological lexicon extracted from Wiktionary. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1063-1067, Reykjavik, Iceland, May. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "The theory of semantic fields: a survey", |
|
"authors": [ |
|
{ |
|
"first": "Leonid", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Vassilyev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1974, |
|
"venue": "Linguistics", |
|
"volume": "12", |
|
"issue": "137", |
|
"pages": "79--94", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leonid M. Vassilyev. 1974. The theory of semantic fields: a survey. Linguistics, 12(137):79-94.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Disagreement Dissected: Vagueness as a Source of Ambiguity in Nominal (Co-)Reference", |
|
"authors": [ |
|
{ |
|
"first": "Yannick", |
|
"middle": [], |
|
"last": "Versley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Ambiguity in Anaphora Workshop (ESSLLI 2006)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "83--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yannick Versley. 2006. Disagreement Dissected: Vagueness as a Source of Ambiguity in Nominal (Co-)Reference. In Proceedings of the Ambiguity in Anaphora Workshop (ESSLLI 2006), pages 83-89.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Our 77 annotators, sorted by the number of annotated items (crowdworkers in orange, students in blue) (a) Distribution of sentences by proportion of annotators that voted for 'illness' (n=980) (b) Distribution of sentences by proportion of annotators that indicated to be 'very certain' (n=980) Response distributions (blue bars: semantic field word, orange bars: not a semantic field word).", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Distribution of agreement and annotation certainty for sentences from the Fiction Corpus (n=480) and the Transcript Corpus (n=480) in contrast", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"text": "Most common semantic field terms per corpus (ambiguous terms are marked in the translation)", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |