|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:13:22.442199Z" |
|
}, |
|
"title": "qxoRef 1.0: A coreference corpus and mention-pair baseline for coreference resolution in Conchucos Quechua", |
|
"authors": [ |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Pankratz", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Universit\u00e4t Potsdam", |
|
"location": { |
|
"postCode": "14476", |
|
"settlement": "Potsdam", |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper introduces qxoRef 1.0, the first coreference corpus to be developed for a Quechuan language, and describes a baseline mention-pair coreference resolution system developed for this corpus. The evaluation of this system will illustrate that earlier steps in the NLP pipeline, in particular syntactic parsing, should be in place before a complex task like coreference resolution can truly succeed. qxo-Ref 1.0 is freely available under a CC-BY-NC-SA 4.0 license.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper introduces qxoRef 1.0, the first coreference corpus to be developed for a Quechuan language, and describes a baseline mention-pair coreference resolution system developed for this corpus. The evaluation of this system will illustrate that earlier steps in the NLP pipeline, in particular syntactic parsing, should be in place before a complex task like coreference resolution can truly succeed. qxo-Ref 1.0 is freely available under a CC-BY-NC-SA 4.0 license.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Coreference resolution is the task of identifying and grouping the phrases in a text that refer to the same real-life object, or in other words, grouping the mentions in a text-the phrases that refer to real-life objects-together into entities: clusters which represent those real-life objects (Ng, 2010; Jurafsky and Martin, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 304, |
|
"text": "(Ng, 2010;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 331, |
|
"text": "Jurafsky and Martin, 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
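
To make the notions of mention and entity concrete, here is a minimal illustration in Python (the mention strings are hypothetical, loosely based on the stories summarised in Appendix A):

```python
# Mentions: the phrases in a text that refer to real-life objects.
mentions = ["a healer", "he", "a blue-eyed man", "the man", "the healer"]

# Entities: clusters of mentions that refer to the same object.
# Coreference resolution groups the mentions into such clusters.
entities = [
    {"a healer", "he", "the healer"},   # one real-life object: the healer
    {"a blue-eyed man", "the man"},     # another: the man he meets
]
```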
|
{ |
|
"text": "Coreference resolution has been an important area of focus in NLP for the last thirty years. It is often used as one component of an NLP pipeline: it builds on information gained through tools like syntactic parsers and semantic word embeddings, yielding clusters of mentions that can be useful for further NLP tasks like question answering and sentiment analysis (Pradhan et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 364, |
|
"end": 386, |
|
"text": "(Pradhan et al., 2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To succeed at coreference resolution requires the synthesis of both linguistic and contextual (world) knowledge. Current state-of-the-art coreference systems accomplish this using deep learning (Lee et al., 2018) and are trained on large coreference corpora in majority languages like English, Chinese, and Arabic (Weischedel et al., 2011) . Although the aims of the present paper are more modest, it still makes two important contributions to the field of coreference resolution for low-resource languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 212, |
|
"text": "(Lee et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 339, |
|
"text": "(Weischedel et al., 2011)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The first contribution is qxoRef 1.0, the first coreference corpus to be developed for a Quechuan language. The name reflects the variety of Quechua that appears in the corpus, namely (Southern) Conchucos Quechua (ISO 639-3 code qxo). qxo-Ref 1.0 is freely available under a Creative Commons CC-BY-NC-SA 4.0 license. 1 The second contribution is a baseline coreference resolution system trained on this corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 318, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The term \"Quechua\" is generally used to refer to the Quechuan language family, a large group of related local varieties spoken widely in South America (Adelaar and Muysken, 2004; S\u00e1nchez, 2010) . The number of speakers of Quechuan languages around the turn of the millennium was estimated at about eight million (Adelaar and Muysken, 2004) , so it is not a small language family. However, it contains two branches of different sizes. According to the classification of Torero (1964) , the smaller \"Quechua I\" is spoken in the Peruvian Highlands, while the much larger \"Quechua II\" is spoken throughout central and southern Peru as well as in parts of Ecuador (Adelaar and Muysken, 2004) . The two branches differ lexically, morphologically, and orthographically.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 178, |
|
"text": "(Adelaar and Muysken, 2004;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 193, |
|
"text": "S\u00e1nchez, 2010)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 312, |
|
"end": 339, |
|
"text": "(Adelaar and Muysken, 2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 469, |
|
"end": 482, |
|
"text": "Torero (1964)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 686, |
|
"text": "(Adelaar and Muysken, 2004)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The variety of Quechua appearing in qxoRef is spoken in Conchucos, a district within the department of Ancash in the Peruvian Highlands, and it belongs to Quechua I. (An alternative division of the language family is offered by Parker 1963, who labels Quechuan varieties with A or B. In that schema, Conchucos Quechua belongs to Quechua B.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One challenge of having chosen a Quechua I variety to work with is the limited number of resources for that branch of the family tree. Quechua II, being much larger, has a handful of NLP tools already, including a toolknit developed by Rios (2015) . This paper thus presents an exploratory illustration of how to develop a coreference corpus and baseline coreference system for a morphologically complex language in a low-resource situation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 247, |
|
"text": "Rios (2015)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most coreference corpora are created for morphologically simple languages like English, but this project shows that the standard format for modern coreference corpora (the CoNLL-2012 shared task tabular format; Pradhan et al., 2012) can also easily accommodate a morphologically complex language like Quechua.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper will first discuss the creation of qxo-Ref in Section 2, and then move on to the baseline mention-pair system developed for it in Section 3. In the evaluation of this system in Section 4, we will see the consequences of not having earlier steps of the NLP pipeline in place before constructing a coreference resolution system. While surface features may passably substitute for some parts of a deeper linguistic analysis (Durrett and Klein, 2013) and are often the only type of feature that is available in a low-resource language, we will see that the data in qxoRef would still benefit significantly from linguistic analysis before the coreference resolution step takes place.", |
|
"cite_spans": [ |
|
{ |
|
"start": 431, |
|
"end": 456, |
|
"text": "(Durrett and Klein, 2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "However, before turning to these details, a few words on Quechuan grammar are in order.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Quechuan languages can be described as agglutinative (S\u00e1nchez, 2010, 10) : words are morphologically complex, and one morpheme generally encodes a single meaning, although a handful of syncretic morphemes also exist (e.g., -shayki in (1) below).", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 72, |
|
"text": "(S\u00e1nchez, 2010, 10)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quechua Grammar", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "A relevant feature of Quechua for the coreference resolution task is the use of null arguments (S\u00e1nchez, 2010, 12) ; in other words, Quechua is a pro-drop language. Consider the sentence in (1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 114, |
|
"text": "(S\u00e1nchez, 2010, 12)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quechua Grammar", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "(1) cuenta-ri-shayki tell-ITER-1.SUB>2.OBJ.FUT huk one cuento-ta story-ACC 'I will tell you a story.' (KP04, 2-7) 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quechua Grammar", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "Nothing explicitly fills the role of subject (I) or indirect object (you) in this sentence. The suffix -shayki, like all personal reference markers on Quechua verbs, only indicates agreement and has no pronominal function (S\u00e1nchez, 2010, 21) . Ideally, we would want to include null arguments in the mention annotation, as other coreference corpora of pro-drop languages do. However, as we will see in the next section, no resources for Conchucos Quechua exist that would make this possible.", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 241, |
|
"text": "(S\u00e1nchez, 2010, 21)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Quechua Grammar", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "This section presents qxoRef 1.0, a coreference corpus for Conchucos Quechua and, to the author's knowledge, the first such resource developed for a Quechuan language. The section first explores how earlier coreference corpora in other pro-drop languages are structured (Section 2.1). It then moves on to the data that qxoRef is based on (Section 2.2), how the mentions in this data were annotated (Section 2.3), and some remaining limitations of the present version of the corpus (Section 2.4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "qxoRef 1.0", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Three pro-drop languages for which coreference corpora have been developed are Czech, Spanish, and Catalan. Corpora in these languages-PCEDT 2.0 (Nedoluzhko et al., 2016) for Czech, AnCora (Recasens and Mart\u00ed, 2010) for Spanish and Catalan-incorporate null subjects by way of syntactic annotation. All sentences in the corpora receive syntactic parses, and crucially, the parser introduces nodes that correspond to the null arguments, so that those nodes can then be annotated for coreference (Recasens and Mart\u00ed, 2010, 319; Nedoluzhko et al., 2016, 173) . Unlike many other Indigenous languages, Quechua does have an NLP toolkit that includes a dependency parser (Rios, 2015) . Unfortunately, two features of this toolkit make it inapplicable to the current project. For one, it was developed for Cuzco Quechua, a Quechua II variety, and Cuzco Quechua differs enough from Conchucos Quechua (Quechua I) that significant intervention would be needed in order to apply the parser to the present data. For another, while the parser does insert dummy elements for phenomena like omitted copulas, verb ellipsis in coordinations, and internally headed relative clauses, it does not insert anything for null arguments (Rios, 2015, 62) . Thus, even if the parser were adapted for Conchucos Quechua, it would not supply the null argument nodes that would be needed for coreference annotation. We are therefore forced to rely on the information already provided in the data. We turn to this next.", |
|
"cite_spans": [ |
|
{ |
|
"start": 145, |
|
"end": 170, |
|
"text": "(Nedoluzhko et al., 2016)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 215, |
|
"text": "(Recasens and Mart\u00ed, 2010)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 493, |
|
"end": 524, |
|
"text": "(Recasens and Mart\u00ed, 2010, 319;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 554, |
|
"text": "Nedoluzhko et al., 2016, 173)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 664, |
|
"end": 676, |
|
"text": "(Rios, 2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1211, |
|
"end": 1227, |
|
"text": "(Rios, 2015, 62)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Coreference corpora for pro-drop languages", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The data in qxoRef consists of transcribed recordings of stories told by native Quechua speakers in Huari, Peru in 2015 (Bendez\u00fa Araujo et al., 2019) . The recordings are a subset of a larger audio cor-Orthography huk runa oshqu \u00f1awiwan tinkuskiyaan Segmentation huk runa oshqu \u00f1awi-wan tinku-ski-yaa-n Glosses (Sp.) uno persona azul ojo-INST encontrar-ITER-PL-3 Glosses (En.) one person blue eye-INST find-ITER-PL-3 Translation (Sp.) se encuentra con una persona de ojos azules Translation (En.) he meets a person with blue eyes Table 1 : A representation of the data's original multi-tier annotation format pus of Quechua speakers participating in various experimental tasks. 3 The chosen subset consists of the \"cuento\" task, which mimics the children's game \"telephone\": the experimenters first told the Quechua speakers an invented story, and the speakers were recorded while recounting this story to one another in pairs. The \"cuento\" task was chosen because the format of a story, with repeated references to recurring entities, provides the most suitable data for coreference resolution. qxoRef contains the stories told by twelve participants, resulting in twelve documents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 149, |
|
"text": "(Bendez\u00fa Araujo et al., 2019)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 678, |
|
"end": 679, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 530, |
|
"end": 537, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The data", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The contents of the stories are somewhat surreal: one focuses on a healer's journey to search for medicinal plants, and the other is about a corpse's encounter with two woodpeckers. The unusual content is due to the goals of the original research project. The project studied Quechua prosody and phonology, so the stories were built around words chosen for their metrical properties in Quechua. English translations of each of these stories are given in Appendix A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The data", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "As Table 1 illustrates, the documents in their original forms consist of a transcription of the audio data, morphological segmentation and glossing, and translations into English and Spanish. The transcriptions, morphological segmentation and glossing, and translations into Spanish were done by hand by Quechua speakers in Huaraz and Lima, Peru. Further postprocessing, including normalising the orthography, unifying the morphological analyses and glosses, and translating into English, was done by the original researchers. The documents in this corpus are provided as .eaf files that can be processed using the annotation software ELAN (Sloetjes and Wittenburg, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 640, |
|
"end": 671, |
|
"text": "(Sloetjes and Wittenburg, 2008)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The data", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Before converting these files to the standard CoNLL-2012 shared task format (Pradhan et al., 2012), problematic artefacts of speech data (filled pauses within noun phrases, false starts, and utterances marked as unintelligible) were removed. The stems were also POS-tagged, the sentences divided, and the (non-null) mentions manually annotated by the author. The mention annotation will be the focus of the next section. Table 2 gives the number of words, morphemes, and mentions in each of the documents in qxoRef 1.0, as well as the story that each document contains, and Table 3 shows the same phrase from Table 1 in the CoNLL format. The CoNLL-U guidelines 4 define how morphologically complex units can be split into smaller sub-word elements. The indexing of these elements is done by sub-word unit, with morphologically complex elements indexed with the integer range of the elements they contain. And as Table 3 illustrates, the gloss of each morpheme is always attached to that morpheme, rather than to the stem, for clarity and for easier access to individual tags.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 421, |
|
"end": 428, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 574, |
|
"end": 581, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 919, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The data", |
|
"sec_num": "2.2" |
|
}, |
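
As an illustration of the sub-word indexing scheme described above, the word tinkuskiyaan from Table 1 could be laid out roughly as follows; the whole word receives the integer range of the elements it contains, and each morpheme carries its own gloss (the indices and column inventory here are only illustrative, not the exact layout of the qxoRef files):

```
6-9   tinkuskiyaan   _
6     tinku          find
7     ski            ITER
8     yaa            PL
9     n              3
```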
|
{ |
|
"text": "The mentions in qxoRef 1.0 belong to two classes: nouns and pronouns. The nominal mentions involve nouns that may or may not host case endings, that stand alone or next to other nouns, that are preceded by numerals or demonstratives, or that belong to complex phrases with modifying elements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mentions in qxoRef", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Two types of pronouns appear in qxoRef: personal pronouns and demonstrative pronouns. Personal pronouns are rare, since they are generally dropped; in fact, in all of qxoRef, there is only one instance each of the first and third person pronouns, nuqa and pay respectively, and a handful more of the second person, qam.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mentions in qxoRef", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "There are two types of demonstrative pronouns: proximal kay and distal tsay. Tsay is a multifunctional element: it may be used as a determiner, and it can also act as a deictic element in space and time Doc. ID Story Wd. Morph. Ment. as in tsay-chaw 'there' (lit. DEM.DIST-LOC(ative); AZ23, 55-56) and tsay-shi 'then' (lit. DEM.DIST-REP(ortative); XU31, 8-9). Occasionally it is also used as a filler in speech. Only the demonstrative pronouns that are clearly referential (identifiable by the case marking) are annotated as mentions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mentions in qxoRef", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "In addition to the unambiguously referential pronouns, all nominal phrases were annotated as mentions. The mentions spanned all morphemes contained in those phrases so that the classifiers could potentially use the case and number information to establish coreference.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mentions in qxoRef", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The annotation process was straightforward. It was possible to annotate mentions at the lexical level because Quechua has no referential sub-word elements. (The agreement marking on verbs would be the closest candidate, but as mentioned above, they are only markers and not incorporated pronouns, so they should not be considered mentions.) In any cases where a pronoun could refer to multiple available entities, the English and Spanish translations were used as a guideline for selecting the correct antecedent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mentions in qxoRef", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "One limitation of the present version of the corpus has already been discussed: since the data has not been syntactically parsed to produce slots in the sentences where the null arguments would be, those arguments are not annotated as mentions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations of qxoRef 1.0", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The second limitation also concerns the mention annotation. Since the project was fairly limited in scope, the annotation was done only by the author. Annotating only nouns and pronouns does not involve as many degrees of freedom as the annotation of a larger corpus like OntoNotes, which contains many classes of coreference (cf. Pradhan et al., 2012) , but the mention annotation in qxoRef 1.0 is still potentially idiosyncratic. And because reliable annotation is crucial for creating robust coreference systems that can be depended on in downstream applications (Pradhan et al., 2012, 1-2), in future iterations of this corpus, multiple annotators should be involved.", |
|
"cite_spans": [ |
|
{ |
|
"start": 326, |
|
"end": 352, |
|
"text": "(cf. Pradhan et al., 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations of qxoRef 1.0", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The data in qxoRef 1.0 was used to train a baseline coreference resolution system for Conchucos Quechua. How that system was implemented will be the focus of the present section; afterward, Section 4 will discuss its performance with an illustrative error analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A mention-pair baseline for Conchucos Quechua", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The idea behind the mention-pair approach is simple: given a pair of mentions-a candidate anaphor and a candidate antecedent-a binary classifier is trained to predict whether that pair is coreferential (Ng, 2010; Jurafsky and Martin, 2020) . This method has been influential in the field of coreference resolution since the earliest days, and the motivation to apply it again here, despite the availability of modern deep-learning-based methods, is twofold. For one, binary classification is a simple task, and much less data is needed to train a binary classifier than would be required for state-ofthe-art deep learning methods. For another, training a classifier using an interpretable algorithm like a random forest (Breiman, 2001 ) can tell us which features are important for establishing coreference in the available data: helpful information for conducting an error analysis and determining how to improve the system. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 212, |
|
"text": "(Ng, 2010;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 239, |
|
"text": "Jurafsky and Martin, 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 720, |
|
"end": 734, |
|
"text": "(Breiman, 2001", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The mention-pair approach to coreference resolution", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The coreference classifier was trained using 28 features generated for every mention pair in the training data (see Section 3.3). These features included information about each mention in the pair as well as the relationship between them. The features can be divided into three classes: string-based features, grammatical features, and discourse features. The string-based features include the Levenshtein edit distance between the two mention strings, the length of the longest common substring, whether the anaphor string contains the antecedent string and vice versa, and whether or not the anaphor is longer than the antecedent. Next, the grammatical features have to do with characteristics like the plurality of individual mentions; the type of individual mentions (whether they are nouns or pronouns); and how many stems, grammatical morphemes, and morphemes overall they share.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Finally, the discourse features include the number of sentences between the two mentions in the pair, the number of other mentions between the mention pair, and whether or not the mentions were produced by the same speaker.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "3.2" |
|
}, |
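
A minimal sketch of how the string-based and discourse features described in this section could be computed for a single mention pair (the mention representation and function names here are illustrative, not those of the actual implementation):

```python
from difflib import SequenceMatcher

def levenshtein(a, b):
    """Dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def pair_features(antecedent, anaphor):
    """String-based and discourse features for one candidate mention pair.
    Each mention is assumed to be a dict with 'string', 'sent_id',
    'mention_id', and 'speaker' fields (an illustrative representation)."""
    a, b = antecedent["string"], anaphor["string"]
    match = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return {
        "edit_distance": levenshtein(a, b),
        "longest_common_substring": match.size,
        "anaphor_contains_antecedent": a in b,
        "antecedent_contains_anaphor": b in a,
        "anaphor_longer": len(b) > len(a),
        "sentence_distance": anaphor["sent_id"] - antecedent["sent_id"],
        "mentions_between": anaphor["mention_id"] - antecedent["mention_id"] - 1,
        "same_speaker": anaphor["speaker"] == antecedent["speaker"],
    }
```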
|
{ |
|
"text": "Further classes of features are known to be important for establishing coreference (Ng, 2010) , such as syntactic features (e.g., what role the mention plays in the sentence) and semantic features (e.g., cosine similarity between embedding representations of the head word). Here again, we feel the effects of the lack of resources. If we had a syntactic parser, we could to include syntactic features, and if we had embeddings, we could include semantic ones. 5 Nevertheless, surface features have been shown to pick up on some linguistically relevant information (Durrett and Klein, 2013) , and we will see below that the present selection does an adequate job.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 93, |
|
"text": "(Ng, 2010)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 565, |
|
"end": 590, |
|
"text": "(Durrett and Klein, 2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In order to learn whether two mentions are coreferential, the classifier was trained on a dataset in which a pair of mentions is represented as an instance. In general, creating training data by simply taking all ordered pairs of mentions in a document is not recommended, because then the data will contain far more negative instances than positive instances (i.e., many more non-coreferential pairs than coreferential ones), and a skewed class distribution in the training data will lead to poorer performance on the test data (Soon et al., 2001) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 529, |
|
"end": 548, |
|
"text": "(Soon et al., 2001)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating training data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Therefore, the literature proposes several different heuristics for creating training datasets for mention-pair systems. For the sake of exploration, this project used three of these heuristics to create three different training sets, train one classifier on each of these, and compare the performance of the three classifiers. Will a larger training set lead to better performance because there is simply more data, or will a more selectively-chosen set lead to better performance?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating training data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The first heuristic is the most common one in the literature, proposed by Soon et al. (2001) . This method creates training instances by pairing each mention with every preceding mention up to and including the closest coreferential one, that is, up to and including the closest true antecedent of the given anaphor. Thus, for each mention, there is one positive instance and some number of negative instances (possibly zero).", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 92, |
|
"text": "Soon et al. (2001)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating training data", |
|
"sec_num": "3.3" |
|
}, |
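
A sketch of this heuristic, assuming mentions are given in document order and each carries an entity_id identifying its gold cluster (an illustrative representation, not the actual code):

```python
def soon_pairs(mentions):
    """Soon et al. (2001): pair each anaphoric mention with every preceding
    mention up to and including its closest true antecedent. Mentions with no
    preceding coreferent mention generate no training instances."""
    instances = []  # (antecedent, anaphor, label) triples
    for j, anaphor in enumerate(mentions):
        if not any(m["entity_id"] == anaphor["entity_id"] for m in mentions[:j]):
            continue  # first mention of its entity: no antecedent to learn from
        for i in range(j - 1, -1, -1):  # walk backwards from the anaphor
            antecedent = mentions[i]
            label = antecedent["entity_id"] == anaphor["entity_id"]
            instances.append((antecedent, anaphor, label))
            if label:
                break  # stop at the closest true antecedent
    return instances
```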
|
{ |
|
"text": "oped by Rios (2015) The next heuristic is an adaptation to Soon et al.'s method by Ng and Cardie (2002) . They refine this algorithm by excluding any mention pairs in which the candidate anaphor is a noun and the candidate antecedent a pronoun, because \"it is not easy for a human, much less a machine learner, to learn from a positive instance where the antecedent of a non-pronominal NP is a pronoun\" (Ng, 2010 (Ng, , 1398 . Like the method of Soon et al., this heuristic yields one positive instance and zero or more negative instances for each mention.", |
|
"cite_spans": [ |
|
|
{ |
|
"start": 83, |
|
"end": 103, |
|
"text": "Ng and Cardie (2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 412, |
|
"text": "(Ng, 2010", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 424, |
|
"text": "(Ng, , 1398", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating training data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The final heuristic was proposed by Bengtson and Roth (2008) and is more liberal than the previous two. This method simply uses all ordered pairs of mentions going back to the beginning of the document, but maintaining Ng and Cardie's stipulation that nouns not refer back to pronouns. This heuristic yields multiple negative instances and potentially multiple positive instances for each mention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 60, |
|
"text": "Bengtson and Roth (2008)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating training data", |
|
"sec_num": "3.3" |
|
}, |
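
The two other heuristics can be sketched as variations on the same idea, reusing soon_pairs from the sketch above and additionally assuming a 'type' field that distinguishes nouns from pronouns (again illustrative, and glossing over details of how the positive instance is selected):

```python
def ng_cardie_pairs(mentions):
    """Ng and Cardie (2002): as Soon et al., but drop pairs in which a nominal
    anaphor would refer back to a pronominal antecedent."""
    return [(ante, ana, label) for ante, ana, label in soon_pairs(mentions)
            if not (ana["type"] == "noun" and ante["type"] == "pronoun")]

def bengtson_roth_pairs(mentions):
    """Bengtson and Roth (2008): all ordered pairs back to the start of the
    document, keeping the noun-to-pronoun restriction."""
    instances = []
    for j, ana in enumerate(mentions):
        for ante in mentions[:j]:
            if ana["type"] == "noun" and ante["type"] == "pronoun":
                continue
            instances.append((ante, ana, ante["entity_id"] == ana["entity_id"]))
    return instances
```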
|
{ |
|
"text": "The train/test split, shown in Table 2 above, is approximately 70/30 in the number of words, morphemes, and mentions. Table 4 shows some properties of the three training datasets created from the eight training documents using the heuristics from Soon et al., Ng and Cardie, and Bengtson and Roth. The proportion of negative instances to positive ones is comparable in all three cases, but the size of the datasets ranges widely.", |
|
"cite_spans": [ |
|
{ |
|
"start": 260, |
|
"end": 297, |
|
"text": "Ng and Cardie, and Bengtson and Roth.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 38, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 118, |
|
"end": 125, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Creating training data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Finally, it should be noted that for all documents, singleton mentions-those referring to entities that are only mentioned once-were removed before generating both training and test sets (in line with the OntoNotes corpus, which does not annotate singletons at all).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating training data", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The mentions used in the test data are the original gold mentions (rather than, say, those proposed by a mention detection algorithm). Using gold mentions is more appropriate for a baseline, since it keeps the focus on the performance of the system, and comparing mentions that have the same boundaries also makes the evaluation more straightforward (Ng, 2010 (Ng, , 1403 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 350, |
|
"end": 359, |
|
"text": "(Ng, 2010", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 371, |
|
"text": "(Ng, , 1403", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating test data", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Each of the four test documents was converted into a test dataset following the method outlined by Soon et al. (2001, 528) : each mention serves as a candidate anaphor, and each candidate anaphor is paired with every mention that precedes it in the given document.", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 122, |
|
"text": "Soon et al. (2001, 528)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creating test data", |
|
"sec_num": "3.4" |
|
}, |
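
The corresponding test-instance generation is then a simple enumeration over the gold mentions of each test document (same illustrative mention representation as in the earlier sketches):

```python
def test_pairs(mentions):
    """Each gold mention acts as a candidate anaphor and is paired with every
    mention that precedes it in the document (Soon et al., 2001, 528)."""
    return [(ante, ana) for j, ana in enumerate(mentions) for ante in mentions[:j]]
```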
|
{ |
|
"text": "As mentioned above, the coreference classifier used in the present system was a random forest, continuing the tradition of the widespread use of decisiontree-based systems in coreference resolution (Ng, 2010) . Random forests are ensemble learning methods that reduce error rates by taking the majority vote from many individual decision trees trained on random subsets of the data. A great strength of random forests is their interpretability: we can ascertain how important individual features are for the classification decision based, roughly speaking, on how high they appear in the decision trees used in the ensemble (cf. Breiman, 2001 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 208, |
|
"text": "(Ng, 2010)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 629, |
|
"end": 642, |
|
"text": "Breiman, 2001", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The coreference classifier", |
|
"sec_num": "3.5" |
|
}, |
|
{ |
|
"text": "The random forest was implemented in Python using the machine learning library scikit-learn (Pedregosa et al., 2011) . After training, the top-ranking features for all three classifiers were both indicators of string similarity: the Levenshtein edit distance and the length of the longest common substring. This result is unsurprising, considering the kinds of mentions that were included in qxoRef 1.0: mostly nouns (88% of all mentions), a handful of pronouns (12%), and no null arguments. Thus, coreferential mentions are generally similar to one another at the level of the string. Mentions that would require grammatical or discourse-based information (pronouns and null arguments) are rare or non-existent.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 116, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The coreference classifier", |
|
"sec_num": "3.5" |
|
}, |
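
A sketch of how such a classifier can be set up with scikit-learn; the feature dictionaries, hyperparameters, and toy values below are illustrative and not the ones used for the reported results:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# One feature dict per mention pair (cf. the sketch in Section 3.2) and one
# binary label per pair; the toy values below stand in for the real data.
feature_dicts = [
    {"edit_distance": 2, "longest_common_substring": 9, "same_speaker": True},
    {"edit_distance": 14, "longest_common_substring": 1, "same_speaker": False},
]
labels = [True, False]

vec = DictVectorizer(sparse=False)   # expands the dicts into a numeric matrix
X = vec.fit_transform(feature_dicts)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)

# Interpretability: rank features by their impurity-based importance scores.
ranking = sorted(zip(vec.feature_names_, clf.feature_importances_),
                 key=lambda item: item[1], reverse=True)
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```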
|
{ |
|
"text": "The final step of the coreference resolution procedure was to apply the trained classifiers to the test data to predict which mention pairs contained in those documents are coreferential. This was done using the method used in Soon et al. (2001) that was later called \"closest-first clustering\" (Ng, 2010; Jurafsky and Martin, 2020) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 245, |
|
"text": "Soon et al. (2001)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 305, |
|
"text": "(Ng, 2010;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 332, |
|
"text": "Jurafsky and Martin, 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering", |
|
"sec_num": "3.6" |
|
}, |
|
{ |
|
"text": "This algorithm iterates through the test data one anaphor at a time, looking at the pair that anaphor makes with every mention that precedes it in the document. The classifier is applied to each of these mention pairs until a positive classification occurs. Then, the algorithm skips the rest of the pairs containing the current anaphor and moves on to the next one. Importantly, if there is never a positive classification decision, then the anaphor is not classified as coreferential with anything and is ignored. This clustering algorithm was applied to predict all the mention pairs in the test documents. Then, to arrive at the representations of the entities in each document, the transitive closure of all of the predicted mention pairs was computed. The next section compares the performances of the three classifiers and analyses the errors that they made.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Clustering", |
|
"sec_num": "3.6" |
|
}, |
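
A sketch of closest-first clustering followed by the transitive-closure step, assuming a trained scikit-learn classifier clf, a fitted vectoriser vec, and the pair_features sketch from Section 3.2 (illustrative only, not the exact code used here):

```python
def closest_first_clusters(mentions, clf, vec):
    """Closest-first clustering (Soon et al., 2001): for each anaphor, test
    candidate antecedents from nearest to farthest and keep the first pair the
    classifier accepts, then take the transitive closure of all accepted links."""
    links = []
    for j, anaphor in enumerate(mentions):
        for i in range(j - 1, -1, -1):  # nearest candidate antecedent first
            feats = vec.transform([pair_features(mentions[i], anaphor)])
            if clf.predict(feats)[0]:
                links.append((i, j))
                break  # skip the remaining candidates for this anaphor

    # Transitive closure via union-find: linked mentions end up in one entity.
    parent = list(range(len(mentions)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in links:
        parent[find(i)] = find(j)

    clusters = {}
    for idx in range(len(mentions)):
        clusters.setdefault(find(idx), []).append(mentions[idx])
    # Anaphors never linked to anything remain singletons and are dropped.
    return [cluster for cluster in clusters.values() if len(cluster) > 1]
```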
|
{ |
|
"text": "The evaluation of each classifier's performance used the standard three coreference metrics-MUC, B 3 , and CEAF e -as implemented in the scoring scripts from the CoNLL-2012 shared task (Pradhan et al., 2012) . The results are given in Table 5 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 207, |
|
"text": "(Pradhan et al., 2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 242, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and error analysis", |
|
"sec_num": "4" |
|
}, |
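
As a reminder of what the link-based MUC metric measures, here is a compact implementation of its standard definition; this is only an illustration, not the CoNLL-2012 scorer used to produce the reported numbers:

```python
def muc(key, response):
    """Link-based MUC precision, recall, and F1 between two clusterings, each
    given as a list of sets of mention ids (standard definition)."""
    def score(gold, system):
        covered = set().union(*system) if system else set()
        num = den = 0
        for entity in gold:
            # Partitions of this gold entity induced by the system clusters;
            # mentions missing from the system count as singleton partitions.
            parts = sum(1 for cluster in system if cluster & entity)
            missing = len(entity - covered)
            num += len(entity) - (parts + missing)
            den += len(entity) - 1
        return num / den if den else 0.0
    recall, precision = score(key, response), score(response, key)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```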
|
{ |
|
"text": "Strikingly, although the proportion of positive to negative instances in the training data is nearly identical (see Table 4 ), the resulting classifiers performed quite differently. Even though the heuristic from Ng and Cardie (2002) produced the smallest amount of training data, it performed best-far better, in fact, than the heuristic that produces the largest amount of training data, Bengtson and Roth (2008) . By removing pronouns as antecedents, Ng and Cardie's algorithm was likely more faithful to the actual imbalanced proportion of nouns to pronouns in the data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 233, |
|
"text": "Ng and Cardie (2002)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 390, |
|
"end": 414, |
|
"text": "Bengtson and Roth (2008)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 123, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and error analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The general pattern, at least in the MUC and B 3 metrics, is high precision and low recall. In other words, when the mentions were classified as coreferential, this was generally done correctly. However, the clustering procedure often failed to identify coreference links between anaphors and their true antecedents, leading to that anaphor's omission from the final entity representations. The error analysis in the next section will explore why this might have been the case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and error analysis", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The interpretability of random forests serves us well in trying to understand the results of the evaluation. For example, we can see that, because the classifiers favoured string and morpheme similarity, they fell short when dealing with coreferential mentions whose surface forms diverge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For instance, hampi ashiq runaqa 'person searching for medicine' (TP03, 316-320) is the same person as tsay hampikuq runa 'that healer person' (TP03, 213-215), and although the strings do contain some overlap (runa 'person' and hampi 'medicine' appear in both), they are dissimilar enough that none of the classifiers recognised these two mentions as coreferential.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For the same reason, the classifiers also frequently failed to identify an antecedent for demonstrative pronouns, since often, the only commonality between the string of a demonstrative pronoun and the antecedent was the case marking (and sometimes not even that). For example, tsayqa 'that one' (ZZ24, 25-26) was not recognised by any of the classifiers as coreferential with hampikuq runa 'healer person' (ZZ24, 3-4) because the strings have very little in common.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Further, the corpus contains cataphoric constructions like tsayqa, tsay, huk runaqa 'that one, that, a person' (OA32, 147-152) in which tsayqa and huk runaqa are coreferential (and the middle tsay acts as a filled pause). None of the classifiers successfully identified the coreference there-not even the Soon et al. classifier, which was the only one to have seen pronouns as antecedents in its training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "These examples show that the classifiers all failed on certain kinds of mention pairs. But were there any systematic differences between the classifiers?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The feature importance scores of the classifiers indicated that the importance of grammatical features was, on average, higher for the Bengtson and Roth classifier than for the other two. One might therefore expect this classifier to be better at identifying coreference involving pronouns. However, this prediction is not borne out; all classifiers seemed to deal with pronouns equally poorly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In sum, the low recall is probably due to the nature of the mentions in qxoRef 1.0. The dominance of explicit nominal mentions rewarded string matching over grammatical knowledge, meaning that connections between superficially dissimilar mentions were often overlooked. If null arguments were also included, however, the classifiers would have to base their decisions on more broadly applicable grammatical features. This would be a more accurate representation of what is really involved in the coreference resolution task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Error analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "This paper introduced qxoRef 1.0, a new coreference corpus for Conchucos Quechua, and presented a mention-pair baseline for coreference resolution with this corpus that obtains an average F1 score of 68.51.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and outlook", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Several directions for future work are clear. First, the coreference corpus should be improved. A more reliable dataset should be created by having mentions annotated by multiple annotators and computing the inter-annotator agreement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and outlook", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Further, the sentences should be syntactically parsed. Not only would this allow a more sophisticated feature representation for use in the classifier, it would also allow null arguments to be annotated as mentions. This should lead to higher recall, since fewer mentions will be discarded because the coreference connections are missed. (And until a parser for Conchucos Quechua becomes available, an interim measure of introducing empty slots where the null arguments would be would already likely lead to a more robust system, even without the underlying syntactic structure.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and outlook", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Additionally, other avenues for improving the feature representations should be explored. For example, embeddings for a compatible variety of Quechua are not out of reach. Ancash Quechua is a variety that subsumes Conchucos Quechua, and a collection of texts in this variety is available on the Ancash Quechua wikimedia page. This material could be used to create sub-word embeddings, for example following the procedure laid out in Heinzerling and Strube (2018), that could then be used to encode semantic information about the mentions for use in the classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and outlook", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Overall, this project has highlighted some of the issues involved in NLP for low-resource languages. To succeed at complex NLP tasks like coreference resolution, certain steps in the text processing pipeline should already have been achieved, syntactic parsing being a prominent example. Improving the basic NLP toolkits for low-resource languages will lead to greater success on tasks like coreference resolution, which is in turn important for even more complex downstream tasks. Our focus should therefore first be on developing basic tools and extending existing ones, and then we can work upward from there.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and outlook", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "https://github.com/epankratz/qxoRef", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Examples from qxoRef will be referred to using the document identifier, here KP04, and the range of indices in that document that the example spans, here 2 to 7 (inclusive).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This corpus is provided under a CC-BY-NC-SA 4.0 licence at https://refubium.fu-berlin.de/ handle/fub188/25747, and its documentation, including details about the \"cuento\" task used in qxoRef, can be found at https://www.geisteswissenschaften. fu-berlin.de/en/we05/forschung/ drittmittelprojekte/Einzelprojekte/ DFG-projekt-zweisprachige-Prosodie/ index.html.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://universaldependencies.org/ format.html#words-tokens-and-empty-nodes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Sub-word embeddings for a Quechua II variety do exist(Heinzerling and Strube, 2018), but as with the toolkit devel-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "An encounter with woodpeckers (adapted from ZR29): \"They say a corpse met some woodpeckers. When they met, the woodpeckers were below an alder. Those woodpeckers were the children of a healer. They were eating some lice. When they met the corpse, the corpse asked the woodpeckers, 'Is there a healer here? You are the children of the healer. I believe I am sick, I want to be healed.' When he said this, the woodpeckers laughed and said, 'How will we do that for you? You want to be healed. But you are already dead.'\" The healer's journey (adapted from TP03): \"It's said that once upon a time, a healer went looking for medicine. It was already afternoon when he left, and while he was going, night came. He finished his meal: only corn and a little meat. While he walked and it got dark, he got very cold, and having nothing more to eat, he ate six flies that had come to him. When it got dark, he stayed where he was. Early the next day, he left and met a squinty-eyed [or sometimes blue-eyed -EP] man. This man was sitting on top of a chuchura plant. The healer asked the man, 'Where could I find medicinal plants?' The one sitting on the chuchura said, 'If you give me your soul, I will tell you.' The healer was clever, so he gave him the souls of the six flies instead. When he gave them to him, the other man was suspicious that he was being cheated, but he told him where to go anyway to find the medicinal plants. The healer got there quickly and laughed a lot.\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Story translations", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The Languages of the Andes. Cambridge Language Surveys", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Willem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pieter", |
|
"middle": [], |
|
"last": "Adelaar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Muysken", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Willem F. H. Adelaar and Pieter Muysken. 2004. The Languages of the Andes. Cambridge Language Sur- veys. Cambridge University Press, Cambridge/New York.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Corpora Amerikanischer Sprachen: Interaktive Sprachspiele Aus Dem Mehrsprachigen Lateinamerika (Quechua 1). Refubium, Freie Universit\u00e4t Berlin", |
|
"authors": [ |
|
{ |
|
"first": "Timo", |
|
"middle": [], |
|
"last": "Ra\u00fal Bendez\u00fa Araujo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uli", |
|
"middle": [], |
|
"last": "Buchholz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Reich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ra\u00fal Bendez\u00fa Araujo, Timo Buchholz, and Uli Re- ich. 2019. Corpora Amerikanischer Sprachen: In- teraktive Sprachspiele Aus Dem Mehrsprachigen Lateinamerika (Quechua 1). Refubium, Freie Uni- versit\u00e4t Berlin, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Understanding the Value of Features for Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Bengtson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "294--303", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Bengtson and Dan Roth. 2008. Understanding the Value of Features for Coreference Resolution. In Proceedings of the 2008 Conference on Empiri- cal Methods in Natural Language Processing, pages 294-303, Honolulu, Hawaii. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Random forests. Machine learning", |
|
"authors": [ |
|
{ |
|
"first": "Leo", |
|
"middle": [], |
|
"last": "Breiman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "45", |
|
"issue": "", |
|
"pages": "5--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leo Breiman. 2001. Random forests. Machine learn- ing, 45(1):5-32.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Easy Victories and Uphill Battles in Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Durrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1971--1982", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Greg Durrett and Dan Klein. 2013. Easy Victories and Uphill Battles in Coreference Resolution. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1971- 1982, Seattle, Washington, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages", |
|
"authors": [ |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Heinzerling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2989--2993", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free Pre-trained Subword Embeddings in 275 Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), pages 2989-2993.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Speech and Language Processing: An Introduction to Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Computational Linguistics, and Speech Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Jurafsky and James H. Martin. 2020. Speech and Language Processing: An Introduction to Nat- ural Language Processing, Computational Linguis- tics, and Speech Recognition, 3rd ed. draft edition.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Higher-Order Coreference Resolution with Coarseto-Fine Inference", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "687--692", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-2108" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-Order Coreference Resolution with Coarse- to-Fine Inference. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Coreference in Prague Czech-English Dependency Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Nedoluzhko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michal", |
|
"middle": [], |
|
"last": "Novak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Silvie", |
|
"middle": [], |
|
"last": "Cinkova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Mikulova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji\u0159\u0131", |
|
"middle": [], |
|
"last": "M\u0131rovsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of LREC 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "169--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anna Nedoluzhko, Michal Novak, Silvie Cinkova, Marie Mikulova, and Ji\u0159\u0131 M\u0131rovsky. 2016. Coref- erence in Prague Czech-English Dependency Tree- bank. In Proceedings of LREC 2016, pages 169- 176.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Supervised Noun Phrase Coreference Research: The First Fifteen Years", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1396--1411", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng. 2010. Supervised Noun Phrase Corefer- ence Research: The First Fifteen Years. In Pro- ceedings of the 48th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1396- 1411, Uppsala, Sweden. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Improving Machine Learning Approaches to Coreference Resolution", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "104--111", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1073083.1073102" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Ng and Claire Cardie. 2002. Improving Ma- chine Learning Approaches to Coreference Resolu- tion. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 104-111, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Clasificacion genetica de los dialectos quechuas", |
|
"authors": [ |
|
{ |
|
"first": "Gary", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Parker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1963, |
|
"venue": "Revista del Museo Nacional", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "241--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gary J. Parker. 1963. Clasificacion genetica de los dialectos quechuas. Revista del Museo Nacional, 32:241-252.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Scikit-learn: Machine learning in Python", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Vanderplas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Passos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cournapeau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Brucher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Perrot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Duchesnay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "CoNLL-2012 Shared Task: Modeling multilingual unrestricted coreference in OntoNotes", |
|
"authors": [ |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sameer Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuchen", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Joint Conference on EMNLP and CoNLL -Shared Task", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 Shared Task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "AnCora-CO: Coreferentially annotated corpora for Spanish and Catalan", |
|
"authors": [ |
|
{ |
|
"first": "Marta", |
|
"middle": [], |
|
"last": "Recasens", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ant\u00f2nia Mart\u00ed", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Language Resources and Evaluation", |
|
"volume": "44", |
|
"issue": "4", |
|
"pages": "315--345", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1007/s10579-009-9108-x" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marta Recasens and M. Ant\u00f2nia Mart\u00ed. 2010. AnCora- CO: Coreferentially annotated corpora for Spanish and Catalan. Language Resources and Evaluation, 44(4):315-345.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A Basic Language Technology Toolkit for Quechua", |
|
"authors": [ |
|
{ |
|
"first": "Annette", |
|
"middle": [], |
|
"last": "Rios", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annette Rios. 2015. A Basic Language Technology Toolkit for Quechua. PhD thesis, University of Zurich.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The Morphology and Syntax of Topic and Focus: Minimalist Inquiries in the Quechua Periphery. Number v. 169 in Linguistik Aktuell/Linguistics Today (LA)", |
|
"authors": [ |
|
{ |
|
"first": "Liliana", |
|
"middle": [], |
|
"last": "S\u00e1nchez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liliana S\u00e1nchez. 2010. The Morphology and Syntax of Topic and Focus: Minimalist Inquiries in the Quechua Periphery. Number v. 169 in Linguis- tik Aktuell/Linguistics Today (LA). John Benjamins Pub. Co, Amsterdam/Philadelphia.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Annotation by category -ELAN and ISO DCR", |
|
"authors": [ |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Sloetjes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Wittenburg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 6th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Han Sloetjes and Peter Wittenburg. 2008. Annotation by category -ELAN and ISO DCR. In Proceedings of the 6th International Conference on Language Re- sources and Evaluation (LREC 2008).", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A Machine Learning Approach to Coreference Resolution of Noun Phrases", |
|
"authors": [], |
|
"year": 2001, |
|
"venue": "Computational Linguistics", |
|
"volume": "27", |
|
"issue": "4", |
|
"pages": "521--544", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/089120101753342653" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A Machine Learning Ap- proach to Coreference Resolution of Noun Phrases. Computational Linguistics, 27(4):521-544.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Los dialectos quechuas. Anales Cient\u00edficos de la Universidad Agraria", |
|
"authors": [ |
|
{ |
|
"first": "Alfredo", |
|
"middle": [], |
|
"last": "Torero", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1964, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "446--478", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alfredo Torero. 1964. Los dialectos quechuas. Anales Cient\u00edficos de la Universidad Agraria, 2(4):446- 478.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "OntoNotes: A large training corpus for enhanced processing", |
|
"authors": [ |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Belvin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Handbook of Natural Language Processing and Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralph Weischedel, Eduard Hovy, Mitchell Mar- cus, Martha Palmer, Robert Belvin, Sameer Prad- han, Lance Ramshaw, and Nianwen Xue. 2011. OntoNotes: A large training corpus for enhanced processing. In Joseph Olive, Caitlin Christian- son, and John McCary, editors, Handbook of Natu- ral Language Processing and Machine Translation, pages 54-63. Springer.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "The number of words, morphemes, and mentions in each document in qxoRef, along with the train/test split and which story each document contains (H: the healer's journey; W: an encounter with woodpeckers)", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "A sample sentence from qxoRef (AZ23, 138-146; 'He meets a person with blue eyes') in the CoNLL format. Note that the null arguments are not annotated; there is no mention corresponding to the third-person subject of tinkuskiyaan. (Columns: morpheme index, Quechua text, speaker ID, English translations of the stems, POS tags of stems/glosses for each morpheme, coreference annotation)", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": ", the differences between Quechua I and Quechua II make those embeddings inapplicable here.HeuristicInst. Neg. inst. Prop.", |
|
"content": "<table><tr><td>Soon et al.</td><td>1358</td><td>1214 89.4%</td></tr><tr><td>Ng & Cardie</td><td>1194</td><td>1060 88.8%</td></tr><tr><td colspan=\"2\">Bengtson & Roth 3922</td><td>3463 88.3%</td></tr></table>", |
|
"html": null |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Properties of the three training sets: the number of instances, the number of negative instances, and the proportion of negative instances", |
|
"content": "<table/>", |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td>: Evaluation results for the three training data creation heuristics (SO: Soon et al.; NC: Ng & Cardie; BR:</td></tr><tr><td>Bengtson & Roth)</td></tr></table>", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |