{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:50:56.546427Z"
},
"title": "RiQuA: A Corpus of Rich Quotation Annotation for English Literary Text",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Papay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Institute for Natural Language Processing",
"location": {
"addrLine": "Pfaffenwaldring 5b",
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Institute for Natural Language Processing",
"location": {
"addrLine": "Pfaffenwaldring 5b",
"postCode": "70569",
"settlement": "Stuttgart",
"country": "Germany"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce RiQuA (RIch QUotation Annotations), a corpus that provides quotations, including their interpersonal structure (speakers and addressees) for English literary text. The corpus comprises 11 works of 19 th-century literature that were manually doubly annotated for direct and indirect quotations. For each quotation, its span, speaker, addressee, and cue are identified (if present). This provides a rich view of dialogue structures not available from other available corpora. We detail the process of creating this dataset, discuss the annotation guidelines, and analyze the resulting corpus in terms of inter-annotator agreement and its properties. RiQuA, along with its annotations guidelines and associated scripts, are publicly available for use, modification, and experimentation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce RiQuA (RIch QUotation Annotations), a corpus that provides quotations, including their interpersonal structure (speakers and addressees) for English literary text. The corpus comprises 11 works of 19 th-century literature that were manually doubly annotated for direct and indirect quotations. For each quotation, its span, speaker, addressee, and cue are identified (if present). This provides a rich view of dialogue structures not available from other available corpora. We detail the process of creating this dataset, discuss the annotation guidelines, and analyze the resulting corpus in terms of inter-annotator agreement and its properties. RiQuA, along with its annotations guidelines and associated scripts, are publicly available for use, modification, and experimentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In literature, spoken interactions between characters are often of central importance to the narrative. Reported speech, or quotations, provide readers a direct window into the world of the narrative, where the author's own presence within the work becomes minimally intrusive (Page, 1988) . Authors can use quoted speech for storytelling in order to describe complex verbal interactions between characters without explicit exposition. In dialogue-heavy works, quoted speech can often exceed 50% of a novel's text . Therefore, quotations are an important structural component of literary texts. When analyzing the information contained in quotations, both content and context are of fundamental importance. A quotation conveys information about what was said, how it was said, who said it, and to whom. Of these data, only the first one is generally provided by a quotation's content, while broader context is needed for the other three. As a concrete example, take the following sentence from Charles Dickens' A Christmas Carol:",
"cite_spans": [
{
"start": 277,
"end": 289,
"text": "(Page, 1988)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "[\"What's to-day?\"] direct quotation [cried] Cue [Scrooge] Speaker , calling downward to [a boy in Sunday clothes] Addressee , who perhaps had loitered in to look about him.",
"cite_spans": [
{
"start": 36,
"end": 43,
"text": "[cried]",
"ref_id": null
},
{
"start": 48,
"end": 57,
"text": "[Scrooge]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The quote itself ([\"What's to-day?\"] direct quotation ) provides the content of the utterance, and the type of the quotation (direct speech, as opposed to indirect speech) tells us that this is a verbatim rendering of the utterance. However, the quotation tells us relatively little about the interpersonal structure. The reader requires knowledge of the full sentence to understand which character is speaking ([Scrooge] Speaker ), whom he is addressing ([a boy in Sunday clothes] Addressee ), and the manner of speech (he [cried] Cue ), indicating anxiety. Often, even a full sentence is not enough, and the interpersonal structure must be reconstructed from the larger context. There exist a substantial number of studies which have shown that knowledge about quotations can be useful in tasks such as extracting social networks , modeling discourse structure (Redeker and Egg, 2006) , formality (Faruqui and Pad\u00f3, 2012) , affect (Nalisnick and Baird, 2013; Iosif and Mishra, 2014) , or even plain co-reference resolution (Almeida et al., 2014) . 1 In many of these tasks, the interpersonal structure is as important as the content level, or arguably more so. This mirrors a general tendency in other areas of semantics towards structured (relational) annotations that link states and events with the relevant actors and entities. For example, in sentiment annotation, additional structure can be annotated by linking subjective phrases with relevant aspects (Liu, 2012) and in emotion annotation, emotion phrases in text can be linked with experiencers and causes (Kim and Klinger, 2018) . At the same time, existing corpora with quotation annotation vary greatly in whether, how, and to what extent they cover the interpersonal structure (i.e., who communicates what to whom -see Section 2. for details). In particular, we are not aware of any large-scale, publicly available corpora which mark both speakers and addressees for quotations in literary text. This resource gap makes investigation of conversation structures in text difficult. For example, , lacking full addressee information, must assume that speakers are addressing one another when their quotations occur within a window of 300 words. Such assumptions, as we will show, have many exceptions, and introduce noise into analyses of quotation structures. To address this deficiency, and provide the research community with a more complete dataset of quotations and their structure, we introduce RiQuA. This corpus consists of 11 English-language works of 19 th -century literature, manually annotated with rich quotation structure. Within each work, all quotations, direct and indirect, and identified. Furthermore, for each quotation, we identify speakers, addressees, and cue words, when these are present. This corpus is freely available to download 2 .",
"cite_spans": [
{
"start": 863,
"end": 886,
"text": "(Redeker and Egg, 2006)",
"ref_id": "BIBREF15"
},
{
"start": 899,
"end": 923,
"text": "(Faruqui and Pad\u00f3, 2012)",
"ref_id": "BIBREF5"
},
{
"start": 933,
"end": 960,
"text": "(Nalisnick and Baird, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 961,
"end": 984,
"text": "Iosif and Mishra, 2014)",
"ref_id": "BIBREF8"
},
{
"start": 1025,
"end": 1047,
"text": "(Almeida et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 1462,
"end": 1473,
"text": "(Liu, 2012)",
"ref_id": "BIBREF11"
},
{
"start": 1568,
"end": 1591,
"text": "(Kim and Klinger, 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Compared to other phenomena, resources with quotation annotation are still relatively scarce. Those resources that do",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Language Genre # Quotations Indirect Addressees Literature 3,176 PARC 3 English Newspaper 19,712 STOP English Fiction, Biography, & Newspaper 13,237 ACDS English Bible 1,245 RWG German Narrative, Magazine, & Newspaper 9,451 Table 1 : A summary of some prominent existing corpora for quotations exist exhibit large amounts of variance in their scope, conceptual assumptions, and structural assumptions (Papay and Pad\u00f3, 2019) . In this section, we describe existing datasets which fulfill a similar purpose to RiQuA and contrast their attributes. Table 1 provides a summary of existing similar corpora.",
"cite_spans": [
{
"start": 430,
"end": 452,
"text": "(Papay and Pad\u00f3, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 48,
"end": 260,
"text": "Literature 3,176 PARC 3 English Newspaper 19,712 STOP English Fiction, Biography, & Newspaper 13,237 ACDS English Bible 1,245 RWG German Narrative, Magazine, & Newspaper 9,451 Table 1",
"ref_id": "TABREF1"
},
{
"start": 574,
"end": 581,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "CQSAC. The Columbia Quoted Speech Attribution Corpus (CQSAC) was one of the first corpora to provide annotation of quotations . The corpus identifies quotation spans in literary text, and for each quotation identifies a noun phrase in the vicinity as a speaker for that quotation, if one exists. Unfortunately, this corpus only considers direct quotations, and not indirect ones. Additionally, quotation spans are detected automatically and not manually annotated, which makes them unreliable. Finally, no addressee information exists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CQSAC English",
"sec_num": null
},
{
"text": "PARC 3. PARC 3 (Pareti, 2016) provides quotation annotation for the more than 2000 news articles in the Wall Street Journal portion of the Penn Treebank. In each article, both direct and indirect quotations are annotated, together with cues and speakers. The annotation assumes that each quotation is introduced by a cue, which is reasonable for newswire, but not for literary text. Again, no addressees are provided.",
"cite_spans": [
{
"start": 15,
"end": 29,
"text": "(Pareti, 2016)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CQSAC English",
"sec_num": null
},
{
"text": "STOP. STOP is a corpus of \"speech, thought, and writing presentation\" (Semino and Short, 2004) , primarily developed to empirically illustrate the authors' taxonomy of types of language reporting. The corpus comprises English texts across three genres: fiction, biography, and newswire. The corpus marks spans which correspond to speech, thought, and writing separately, and further categorizes each such span as either direct, indirect, free-indirect, or reported. This categorization yields a product of twelve distinct span types which are annotated. Each such span is also marked with a speaker (or medium) where possible, but not at the textual level, but relative to a (manually compiled) list of relevant entities. Addressees are not marked.",
"cite_spans": [
{
"start": 70,
"end": 94,
"text": "(Semino and Short, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CQSAC English",
"sec_num": null
},
{
"text": "The annotated corpus of directed speech (Lee and Yeung, 2016) consists of the four gospels of the New Testament annotated for direct speech. For each quotation, the corpus provides speech verb (cue), speaker, and listener (addressee), when applicable. Speakers and addressees are identified as spans of text, and coreferent speakers and addressees are also marked. The main limitations of this corpus are the use of a single work, and a single type of quotation. Also, as far as we know, this corpus has not been made publicly available.",
"cite_spans": [
{
"start": 40,
"end": 61,
"text": "(Lee and Yeung, 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ACDS.",
"sec_num": null
},
{
"text": "RWG. The Redewiedergabe ('reported speech') corpus (Brunner, 2013) , henceforth RWG, is a Germanlanguage corpus for quotations in narrative text. The corpus consists of a short snippets of samples of German fiction and non-fiction text, published between 1840 and 1920. Within each text, spans are classified in the same 3 \u00d7 4 categories as in STOP. Additionally, for each identified quotation, a span of text is also identified as a speaker. No addressees are given, and the use of snippets precludes full-document analyses.",
"cite_spans": [
{
"start": 51,
"end": 66,
"text": "(Brunner, 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ACDS.",
"sec_num": null
},
{
"text": "We decided to annotate the same set of texts covered by the Columbia speech attribution corpus . These texts cover a range of 11 works of fiction by various authors, all written in the 19 th century. Our main motivation for this choice was to stay close to an established standard in quotation detection, while extending the coverage of the annotation by (a) providing fully manual annotation and (b) extending the scope to include indirect quotations as well as including addressees. In cases where used excerpts from the full texts (specifically, for the three novels), we use the same excerpts. While all annotated works are in English, one novel is a contemporary translation from the original French, and two short stories and a novella are contemporary translations from Russian. Table 2 lists the properties of the texts. The texts all contain large amounts of quoted speech, and the quotations contained are highly varied in properties such as length, directness, and the presence or absence of speakers and addressees. Section 4. presents a detailed statistical analysis of the texts and the quotations they contain.",
"cite_spans": [],
"ref_spans": [
{
"start": 786,
"end": 793,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Source Texts",
"sec_num": "3.1."
},
{
"text": "As sketched above, our goal was to create a corpus which exemplifies a wide range of types of reported speech events (direct quotations, indirect quotations), the speech events (cues) themeselves where realized linguistically, as well as the speakers and addressees (again, where appropriate). Figure 1 illustrates the annotation of a simple speech event. Each of these concepts needs to be defined for annotation to proceed reliably, as the experiences from the creation of previous corpora show (cf. Section 2.). To this end, we developed detailed annotation guidelines that combined definitions with examples for each concept and address problems and borderline cases explicitly. Below, we outline the most important definitions and side conditions that we ended up with after several revision cycles. The full guidelines will be made available together with the corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 302,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Schema and Guidelines",
"sec_num": "3.2."
},
{
"text": "Speech event. Speech events are descriptions of verbal communication within the text. Our guidelines' formal definition of a speech event is inspired by the Communication and Statement frames in frame semantics as realized in Berkeley FrameNet (Fillmore et al., 2003) . We avoid structural constraints on speech events as far as possible to capture as many of them as possible. The default case of a speech event is triggered by a verbal communication predicate, but we also include nouns ('Jacob's interruption') and multi-word expressions ('Jacob hit the nail on the head'). Further afield, we also cover types of events that imply communication with a high degree of certainty ('The butler was afraid he could be of little help').",
"cite_spans": [
{
"start": 244,
"end": 267,
"text": "(Fillmore et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Schema and Guidelines",
"sec_num": "3.2."
},
{
"text": "This liberal definition of speech event includes many instances of communication that do not fall into the purview of existing quotation corpora. However, structural constraints (Is there a cue? Is the speech event a single communication verb or noun?) should make it possible to identify the \"canonical\" cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Schema and Guidelines",
"sec_num": "3.2."
},
{
"text": "Semantic and Discourse Structure. We broadly disre-gard considerations regarding the surrounding semantic and discourse structure: We annotate speech events irrespective of factuality or modality, and at all narrative levels (e.g., within other quotations). This corresponds to standard practice in semantic role annotation (Fillmore et al., 2003) .",
"cite_spans": [
{
"start": 324,
"end": 347,
"text": "(Fillmore et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Schema and Guidelines",
"sec_num": "3.2."
},
{
"text": "Cues. The words and phrases showed in italics in the previous paragraph are examples of cues, that is, the events that express or imply speech. Note that cues are not necessary for speech events. In particular, literary renderings of dialogue can omit cues and just let quotations stand: \"How are you?\" -\"Good, and yourself?\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Schema and Guidelines",
"sec_num": "3.2."
},
{
"text": "Quotation spans. The quotation span is defined as the linguistic unit (word, phrase, sentence) which provides the content of a speech event. In the case of direct quotations, this span will start and end with qutoation marks, and will consist of a verbatim transcription of the contents of the speech event. For indirect quotations, this span will not be surrounded by quotation marks, and will generally correspond to an indirect description of what was spoken. With multi-word cues, there may be grey areas between cue and quotation: In 'Peter rejected the claim', is 'reject the claim' an MWE cue, or is it a cue ('reject') combined with a quotation span ('claim')?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Schema and Guidelines",
"sec_num": "3.2."
},
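The surface regularity noted above (direct quotations start and end with quotation marks) lends itself to simple heuristic pre-annotation. The following is a minimal sketch, not part of RiQuA's actual annotation process: it only proposes candidate direct-quotation spans from straight double quotes, and ignores single and typographic quotes, nesting, and indirect quotations entirely.

```python
import re

# Candidate direct-quotation spans: maximal runs of text enclosed in paired
# straight double quotes. Indirect quotations carry no such surface marker
# and therefore cannot be found this way.
DIRECT_QUOTE = re.compile(r'"[^"]+"')

def candidate_direct_spans(text):
    """Return (start, end) character offsets of candidate direct quotations."""
    return [(m.start(), m.end()) for m in DIRECT_QUOTE.finditer(text)]

sentence = '"What\'s to-day?" cried Scrooge, calling downward to a boy in Sunday clothes.'
print(candidate_direct_spans(sentence))  # [(0, 16)]
```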
{
"text": "Speaker. Speakers are characters that utter quotations. Documents or communication channels ('letter','TV') do not qualify. The main problem with speakers is that they are not always realized locally. When they are not, their identification is often defeasible, as is the case with unrealized semantic roles (Fillmore, 1986) . Even when the speaker is identifiable at the level of entities, there may be multiple phrases in the text that refer to this entity, which forms a problem for annotation. Our guidelines provide heuristics to resolve such ambiguities. Importantly, we decided that speakers should always be outside the quotation, and more specifically, on the same narrative level as the speech event.",
"cite_spans": [
{
"start": 308,
"end": 324,
"text": "(Fillmore, 1986)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Schema and Guidelines",
"sec_num": "3.2."
},
{
"text": "Addressee. Addressees are the recipients of a speech event. They are typically even more pragmatically determined than speakers: They may be realized locally, but often are not (see Section 4.). If they are not, they can often be recovered from context. We include such cases in the annotation, but only annotate intended, not accidental, addresses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Schema and Guidelines",
"sec_num": "3.2."
},
{
"text": "As user interface for the annotation process, we used brat (Stenetorp et al., 2012), a relatively light-weight web-based Figure 2 : Complex interpersonal structure of quotation platform which supports the annotation of spans and relations between spans on running text. Crucially, it is able to sensibly visualize very long spans and many relations, a requirement where almost all other tools we tested fail (see Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 2",
"ref_id": null
},
{
"start": 413,
"end": 421,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "3.3."
},
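brat keeps its annotations in plain-text standoff ".ann" files alongside the source text, so annotations of this kind can be read with a few lines of code. Below is a minimal reading sketch; the type labels in the comments (Quotation, Speaker) are illustrative assumptions rather than the corpus's confirmed label inventory, and discontinuous spans (offsets joined by ';') are not handled.

```python
def read_brat_ann(path):
    """Parse a brat standoff .ann file into span and relation dicts.

    Span lines look like:      T1<TAB>Quotation 0 16<TAB>"What's to-day?"
    Relation lines look like:  R1<TAB>Speaker Arg1:T2 Arg2:T1
    """
    spans, relations = {}, {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if fields[0].startswith("T"):        # text-bound annotation
                label, start, end = fields[1].split(" ")
                spans[fields[0]] = (label, int(start), int(end), fields[2])
            elif fields[0].startswith("R"):      # binary relation
                label, arg1, arg2 = fields[1].split(" ")
                relations[fields[0]] = (label,
                                        arg1.split(":", 1)[1],
                                        arg2.split(":", 1)[1])
    return spans, relations
```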
{
"text": "The annotation took place over the course of approximately one year. All texts received two independent annotations by two native speakers (one with a linguistics background, one without) and then merged by a third, more senior annotator. Before the 10-month \"production\" phase started, a 1-month \"training phase\" served to train the annotators and refine the guidelines. During the production phase, each annotator worked approximately 400 hours, corresponding to a speed of roughly 54 sentences or 15 speech events per hour (with substantial variance across texts).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "3.3."
},
{
"text": "The final merging was carried out using a semi-automated approach: an automatic script 3 was used to resolve easilyresolvable disagreements (such as off-by-one errors and disagreements regarding punctuation), and manual annotation was used to resolve the remaining differences. 2On the other hand, there are many split quotations sharing cues, so that there is a significant discrepancy between the number of spans identified as cues, and the number of quotations with cues. On average, each cue span is associated with about 1.4 quotations. A total of 2884 text spans are identified as cues, but these correspond to only 277 distinct strings. The most common, by far, is the word \"said,\" which occurred as a cue 1199 times. The next most common cues are \"asked,\" \"cried,\" \"replied,\" \"answered,\" and \"went on.\" 170 cues occur only once. Quotation Length. We observed significant variance in quotation length. While the average quotation was about 26 tokens long, quotations of a single token were identified, and the longest quotation was nearly 1500 tokens long. Figure 3 shows the distribution of quotation lengths found in the corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 1064,
"end": 1073,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "3.3."
},
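The automatic stage of this merge can be sketched as follows. This is a hypothetical reconstruction rather than the authors' actual script: it pairs up spans from the two annotators whose boundaries differ by at most one token and leaves everything else for manual adjudication; the one-token tolerance and the tie-break in favor of annotator A are illustrative assumptions.

```python
def auto_merge(spans_a, spans_b, tolerance=1):
    """Automatically reconcile near-identical (start, end) token spans
    from two annotators; return (merged, unresolved)."""
    merged, unresolved = [], []
    remaining = list(spans_b)
    for a in spans_a:
        match = next((b for b in remaining
                      if abs(a[0] - b[0]) <= tolerance
                      and abs(a[1] - b[1]) <= tolerance), None)
        if match is not None:
            remaining.remove(match)
            merged.append(a)                  # off-by-one case: keep A's span
        else:
            unresolved.append(("A only", a))  # needs manual adjudication
    unresolved.extend(("B only", b) for b in remaining)
    return merged, unresolved

print(auto_merge([(0, 16), (40, 55)], [(0, 17), (80, 90)]))
# ([(0, 16)], [('A only', (40, 55)), ('B only', (80, 90))])
```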
{
"text": "Speakers and Addressees. RiQuA contains 3274 spans labeled as speakers, and 2881 labeled as addressees. These include 1846 spans which act as both speakers and as addressees. 40% of all speakers and 34.5% of all addressees were pronouns, while the remainder were noun phrases or proper nouns. Figure 4 illustrates where cues, speakers, and addressees are found relative to their corresponding quotations. We find a strong bias across all relation types to favor selections which occur before the quotation in the text. This is consistent with a linear reading order of the texts, and with the instructions in our annotation guidelines. Comparing the distributions of the relation types, we find that cues are highly concentrated near their quotations (as expected). Speakers also occur mostly directly before or after the quotation, but addressees tend to occur at a larger distance to their corresponding quotations than speakers. We believe this is due to the fact that speakers are generally realized more prominently in syntactic terms (e.g., they are often subjects of communication verbs) while addressees are often realized only optionally (as prepositional objects) or must be recovered from the discourse context. Example 1 illustrates this case, where the addressee is an argument of a verb (call) different from the primary speech event (cry).",
"cite_spans": [],
"ref_spans": [
{
"start": 293,
"end": 301,
"text": "Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "4."
},
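The positional analysis underlying Figure 4 needs nothing beyond the span offsets. A minimal sketch, under the assumption that all spans are given as token offsets, of the signed distance between a quotation and a related span (cue, speaker, or addressee):

```python
def signed_distance(quote, related):
    """Signed token distance between a quotation span and a related span,
    each given as (start, end) token offsets. Negative means the related
    span precedes the quotation, positive means it follows, and 0 means
    the spans overlap."""
    q_start, q_end = quote
    r_start, r_end = related
    if r_end <= q_start:         # related span ends before the quote starts
        return r_end - q_start
    if r_start >= q_end:         # related span starts after the quote ends
        return r_start - q_end
    return 0

print(signed_distance((10, 40), (5, 8)))    # -2: speaker shortly before quote
print(signed_distance((10, 40), (42, 45)))  #  2: addressee after the quote
```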
{
"text": "Here, call must be understood by the reader as a further specification of cry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "4."
},
{
"text": "Embedded Quotations. 150 of the quotations identified (2.52%) were nested quotations (quotations within quotations). An example of an embedded quotation occurs in the following passage passage from Anton Chekhov's The Steppe:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "4."
},
{
"text": "[\"And I said to him, ['God bless your compressed air!'] Quotation \"] Quotation he brought out through his laughter, waving both hands.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "4."
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "4."
},
{
"text": "While not disallowed by our annotation guidelines, we observed no deeper quotation embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Analysis",
"sec_num": "4."
},
{
"text": "We now assess the quality of the resulting corpus. We quantify agreement with the family of F 1 measures (precision, recall, F 1 ), computed between sets of spans in an 'exact match' setup that is often applied to sequence classification tasks like quotation detection -i.e., there is no partial credit for partial match. Note that this evaluation choice does not easily support chance correlation as is usual in classification Table 4 : Inter-annotator agreement, and agreements between each annotator and the final, merged annotations, measured as precision, recall, and F 1 . For cues, speakers, and addressees, only those cases are considered where there was agreement for the corresponding quotation span. (Artstein and Poesio, 2008) , since it is not obvious what chance agreement on spans is supposed to be. More specifically, we measure agreement between the two initial annotations, and between each initial annotation and the final (merged) annotation. In the first case, we treat one set of initial annotations as a prediction and another as a ground truth. As the F 1 measure is symmetric, the choice of which annotator is treated as the ground truth is inconsequential. In the second case, we report precision, recall, and F 1 measure, treating the merged annotation as the ground truth. Table 4 shows the results of all levels of our annotation: quotation spans, and those quotations' cues, speakers, and addressees. We only evaluate the interaction-structure levels of cues, speakers, and addressees for those speech events where there is agreement on the quotation span in the first place. We observe that quotation spans and cues are relatively easy to annotate, speakers are more challenging, and addressees are hard. This trend is not surprising -quotation spans and cues are defined mainly by surface-level, syntactic properties of the text, while speakers and addressees require more text understanding to identify correctly. These results also correlate well with the distances discussed above (cf. Figure 4 ).",
"cite_spans": [
{
"start": 711,
"end": 738,
"text": "(Artstein and Poesio, 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 428,
"end": 435,
"text": "Table 4",
"ref_id": null
},
{
"start": 1301,
"end": 1308,
"text": "Table 4",
"ref_id": null
},
{
"start": 2021,
"end": 2029,
"text": "Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Inter-Annotator Agreement",
"sec_num": "4.2."
},
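A minimal sketch of the exact-match agreement computation described here: spans only count as matching when both boundaries are identical, so there is no partial credit, and swapping the two annotators swaps precision and recall while leaving F1 unchanged.

```python
def exact_match_prf(predicted, gold):
    """Precision, recall, and F1 between two sets of (start, end) spans,
    crediting only exact boundary matches."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)               # exact-match true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

annotator_a = {(0, 16), (40, 55), (80, 90)}
annotator_b = {(0, 16), (41, 55)}            # (41, 55) is only a partial match
print(exact_match_prf(annotator_a, annotator_b))  # (0.333..., 0.5, 0.4)
```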
{
"text": "To better understand the causes of annotator disagreement, we manually examined a sample of cases where individual annotators disagreed with the merged gold standard, and classified these samples. We examined 100 such cases each for disagreements in quotation spans, speakers, cues, and addressees -50 for each annotator. Our findings are summarized in Table 5 . For quotation spans and cues, we observe that false negatives are much more common than false positives. This indicates that, as is often the case in manual linguistic analysis, it is challenging for annotators for stay alert and catch every instance of a given phenomenon. While this trend is already somewhat apparent in our precision and recall results, it is brought out much more clearly in the manual analysis. This is likely due to the fact that, when manually classifying mismatches, we can distinguish partial matches from \"real\" mismatches, while the automatic evaluation would treat such a partial match as a combination of a false positive and a false negative. Partial matches also account for about one third of all disagreements, indicating that the guidelines regarding the choice of boundaries when annotating quotation spans still leave too much room for interpretation. Finally, we found a small number of cases where the original annotations matched the guidelines better than the merged ones, i.e., merging errors. For speakers and addressees, a new class of potential errors arise: not only do annotators have to identify their spans correctly, but they are also supposed to pick the same men-tion from the document. To assess the difficulty of this task, we added a new error category, CoRef, which indicates that the that the annotator picked a different span of text which nevertheless referred to the same referent as the gold standard span. The following example, with the quotation and gold-standard speaker span marked, illustrates the difference between a CoRef error and a plain error (Err) for speaker annotation:",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 360,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Annotator Errors",
"sec_num": "4.3."
},
{
"text": "[He] Err felt [the Spirit] CoRef 's glance, and stopped.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotator Errors",
"sec_num": "4.3."
},
{
"text": "[\"What is the matter?\"] Quotation asked [the Ghost] Speaker .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotator Errors",
"sec_num": "4.3."
},
{
"text": "Indeed, we find that CoRef cases constitute a majority of the errors for addressees, and a significant minority of the errors for speakers. This means that annotators were actually rather successful in identifying the proper character in the narrative, but that the guidelines were not specific enough to enforce agreement on the same span. These observations also provide a new angle on the agreement results in Table 4 : it is not necessarily the case that addressees are conceptually more difficult to annotate than speakers: In fact, the speaker annotation shows more plain errors (46 vs. 33). Rather, for addressees it is specifically harder to pick the right mention of the reference -which is not surprising given the observations about the distances between quotation span and addressee from above (Section 4.1.). In a scenario in which a more comprehensive computational analysis of literary texts, coreference resolution for characters is available, the choice of exact mention becomes irrelevant, because all mentions are mapped onto the entity. In this case, the category of CoRef errors vanishes, and surprisingly, addressee annotation might become even simpler than speaker annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotator Errors",
"sec_num": "4.3."
},
{
"text": "In this paper, we introduced the RiQuA corpus which provides around 6,000 quotations, including rich interpersonal structure, for 11 literary 19 th century texts. The corpus provides gold annotations, resulting from full double annotation, for quotation spans (both direct and indirect) with cues, speakers and addressees, filling an important resource gap for quotation analysis. In the future, we plan to augment the structure of our corpus by adding coreference information for speakers and addressees, keeping in mind the challenges of the genre (Roesiger et al., 2018) . We hope that our corpus will prove to be a valuable resource for future work in quotation analysis, social network extraction, and discourse structure modeling.",
"cite_spans": [
{
"start": 550,
"end": 573,
"text": "(Roesiger et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5."
},
{
"text": "We focus here on literary texts, leaving aside the role of reported speech in opinion mining on factual texts(Liu, 2012).2 https://www.ims.uni-stuttgart.de/en/ research/resources/corpora/riqua",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Helen Vernon and Geoff Rahe, our annotators. We acknowledge funding from Deutsche Forschungsgemeinschaft (project PA 1956/4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "6."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A joint model for quotation attribution and coreference resolution",
"authors": [
{
"first": "M",
"middle": [
"S C"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "M",
"middle": [
"B"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Almeida, M. S. C., Almeida, M. B., and Martins, A. F. T. (2014). A joint model for quotation attribution and coref- erence resolution. In Proceedings of EACL, pages 39-48, Gothenburg, Sweden.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "R",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artstein, R. and Poesio, M. (2008). Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic recognition of speech, thought, and writing representation in German narrative texts",
"authors": [
{
"first": "A",
"middle": [],
"last": "Brunner",
"suffix": ""
}
],
"year": 2013,
"venue": "Literary and Linguistic Computing",
"volume": "28",
"issue": "4",
"pages": "563--575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brunner, A. (2013). Automatic recognition of speech, thought, and writing representation in German narrative texts. Literary and Linguistic Computing, 28(4):563-575.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic attribution of quoted speech in literary narrative",
"authors": [
{
"first": "D",
"middle": [
"K"
],
"last": "Elson",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "1013--1019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elson, D. K. and McKeown, K. R. (2010). Automatic attri- bution of quoted speech in literary narrative. In Proceed- ings of AAAI, pages 1013-1019, Atlanta, GA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Extracting social networks from literary fiction",
"authors": [
{
"first": "D",
"middle": [
"K"
],
"last": "Elson",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Dames",
"suffix": ""
},
{
"first": "K",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "138--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elson, D. K., Dames, N., and McKeown, K. R. (2010). Extracting social networks from literary fiction. In Pro- ceedings of ACL, pages 138-147, Uppsala, Sweden.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Towards a model of formal and informal address in English",
"authors": [
{
"first": "M",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "623--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faruqui, M. and Pad\u00f3, S. (2012). Towards a model of formal and informal address in English. In Proceedings of EACL, pages 623-633, Avignon, France.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Background to FrameNet",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "C",
"middle": [
"R"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Petruck",
"suffix": ""
}
],
"year": 2003,
"venue": "International Journal of Lexicography",
"volume": "16",
"issue": "3",
"pages": "235--250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fillmore, C. J., Johnson, C. R., and Petruck, M. R. (2003). Background to FrameNet. International Journal of Lexi- cography, 16(3):235-250.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Pragmatically controlled zero anaphora",
"authors": [
{
"first": "C",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the Berkeley Linguistics Society",
"volume": "12",
"issue": "",
"pages": "95--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fillmore, C. J. (1986). Pragmatically controlled zero anaphora. In V Nikiforidou, et al., editors, Proceedings of the Berkeley Linguistics Society, volume 12, pages 95-107.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From speaker identification to affective analysis: a multi-step system for analyzing children's stories",
"authors": [
{
"first": "E",
"middle": [],
"last": "Iosif",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mishra",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 3rd Workshop on Computational Linguistics for Literature",
"volume": "",
"issue": "",
"pages": "40--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iosif, E. and Mishra, T. (2014). From speaker identification to affective analysis: a multi-step system for analyzing children's stories. In Proceedings of the 3rd Workshop on Computational Linguistics for Literature, pages 40-49, Gothenburg, Sweden.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Who feels what and why? annotation of a literature corpus with semantic roles of emotions",
"authors": [
{
"first": "E",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "1345--1359",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, E. and Klinger, R. (2018). Who feels what and why? annotation of a literature corpus with semantic roles of emotions. In Proceedings of COLING, pages 1345-1359, Santa Fe, NM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An annotated corpus of direct speech",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Yeung",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1059--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, J. and Yeung, C. Y. (2016). An annotated corpus of direct speech. In Proceedings of the Tenth Interna- tional Conference on Language Resources and Evalua- tion, pages 1059-1063, Portoro\u017e, Slovenia.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, B. (2012). Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Character-tocharacter sentiment analysis in Shakespeare's plays",
"authors": [
{
"first": "E",
"middle": [
"T"
],
"last": "Nalisnick",
"suffix": ""
},
{
"first": "H",
"middle": [
"S"
],
"last": "Baird",
"suffix": ""
}
],
"year": 1988,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "479--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nalisnick, E. T. and Baird, H. S. (2013). Character-to- character sentiment analysis in Shakespeare's plays. In Proceedings of ACL, pages 479-483, Sofia, Bulgaria. Page, N. (1988). Speech in the English novel. Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Quotation detection and classification with a corpus-agnostic model",
"authors": [
{
"first": "S",
"middle": [],
"last": "Papay",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of RANLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papay, S. and Pad\u00f3, S. (2019). Quotation detection and clas- sification with a corpus-agnostic model. In Proceedings of RANLP, Varna, Bulgaria.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Parc 3.0: A corpus of attribution relations",
"authors": [
{
"first": "S",
"middle": [],
"last": "Pareti",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "3914--3920",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pareti, S. (2016). Parc 3.0: A corpus of attribution relations. In Proceedings of LREC, pages 3914-3920, Portoro\u017e, Slovenia.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Says who? On the treatment of speech attributions in discourse structure",
"authors": [
{
"first": "G",
"middle": [],
"last": "Redeker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Egg",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Constraints in Discourse",
"volume": "",
"issue": "",
"pages": "141--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Redeker, G. and Egg, M. (2006). Says who? On the treat- ment of speech attributions in discourse structure. In Pro- ceedings of the Workshop on Constraints in Discourse, pages 141-147, Maynooth, Ireland.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards coreference for literary text: Analyzing domain-specific phenomena",
"authors": [
{
"first": "I",
"middle": [],
"last": "Roesiger",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature",
"volume": "",
"issue": "",
"pages": "129--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roesiger, I., Schulz, S., and Reiter, N. (2018). Towards coreference for literary text: Analyzing domain-specific phenomena. In Proceedings of the Workshop on Com- putational Linguistics for Cultural Heritage, Social Sci- ences, Humanities and Literature, pages 129-138, Santa Fe, NM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Corpus Stylistics: Speech, Writing And Thought Presentation In A Corpus Of English Writing. Routledge Advances In Corpus Linguistics. Routledge",
"authors": [
{
"first": "E",
"middle": [],
"last": "Semino",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Short",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semino, E. and Short, M. (2004). Corpus Stylistics: Speech, Writing And Thought Presentation In A Corpus Of En- glish Writing. Routledge Advances In Corpus Linguistics. Routledge, London.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "brat: a web-based tool for NLPassisted text annotation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of EACL System Demonstrations",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stenetorp, P., Pyysalo, S., Topi\u0107, G., Ohta, T., Ananiadou, S., and Tsujii, J. (2012). brat: a web-based tool for NLP- assisted text annotation. In Proceedings of EACL System Demonstrations, pages 102-107, Avignon, France.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Distances between quotations and their corresponding cues, speakers, and addressees.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Work</td><td colspan=\"3\">Published # tokens</td></tr><tr><td>Jane Austen</td><td/><td/></tr><tr><td>Emma \u2020</td><td/><td>1815</td><td>69k</td></tr><tr><td>Charles Dickens</td><td/><td/></tr><tr><td>A Christmas Carol</td><td/><td>1843</td><td>36k</td></tr><tr><td>Gustave Flaubert</td><td/><td/></tr><tr><td>Madame Bovary* \u2020</td><td/><td>1856</td><td>139k</td></tr><tr><td>Mark Twain</td><td/><td/></tr><tr><td>The Adventures of Tom Sawyer \u2020</td><td/><td>1876</td><td>52k</td></tr><tr><td>Sir Arthur Conan Doyle</td><td/><td/></tr><tr><td>\"A Scandal in Bohemia\"</td><td/><td>1891</td><td>10k</td></tr><tr><td>\"The Red-Headed League\"</td><td/><td>1891</td><td>11k</td></tr><tr><td>\"A Case of Identity\"</td><td/><td>1891</td><td>8k</td></tr><tr><td colspan=\"2\">\"The Boscombe Valley Mystery\"</td><td>1891</td><td>11k</td></tr><tr><td>Anton Chekhov</td><td/><td/></tr><tr><td>The Steppe*</td><td/><td>1888</td><td>47k</td></tr><tr><td>\"The Black Monk\"*</td><td/><td>1894</td><td>15k</td></tr><tr><td>\"The Lady with the Dog\"*</td><td/><td>1899</td><td>8k</td></tr></table>",
"num": null,
"text": "Figure 1: An annotated speech event with a direct quotation, speaker, cue, and addressee",
"type_str": "table",
"html": null
},
"TABREF1": {
"content": "<table/>",
"num": null,
"text": "The source texts used for RiQuA. * indicates that the English text is a translation, and \u2020 indicates that excerpts were used.",
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table/>",
"num": null,
"text": "",
"type_str": "table",
"html": null
},
"TABREF7": {
"content": "<table><tr><td>: A sample of annotator errors for different annota-</td></tr><tr><td>tion levels, classified manually. Quotation spans and cues</td></tr><tr><td>are classified as either false positives (FP), false negatives</td></tr><tr><td>(FN), partial matches (PM), or correct (Cor) (for when the</td></tr><tr><td>merged corpus was in error). Speakers and addressees are</td></tr><tr><td>classified as either Coreferent (when the annotator's selected</td></tr><tr><td>span is distinct from, but coreferent to, the final corpus),</td></tr><tr><td>plain errors (Err), partial matches (PM), or correct (Cor).</td></tr></table>",
"num": null,
"text": "",
"type_str": "table",
"html": null
}
}
}
}