|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:27:11.320227Z" |
|
}, |
|
"title": "Computational Interpretations of Recency for the Choice of Referring Expressions in Discourse", |
|
"authors": [ |
|
{ |
|
"first": "Fahime", |
|
"middle": [], |
|
"last": "Same", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Cologne", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "First, we discuss the most common linguistic perspectives on the concept of recency and propose a taxonomy of recency metrics employed in Machine Learning studies for choosing the form of referring expressions in discourse context. We then report on a Multi-Layer Perceptron study and a Sequential Forward Search experiment, followed by Bayes Factor analysis of the outcomes. The results suggest that recency metrics counting paragraphs and sentences contribute to referential choice prediction more than other recency-related metrics. Based on the results of our analysis, we argue that, sensitivity to discourse structure is important for recency metrics used in determining referring expression forms.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "First, we discuss the most common linguistic perspectives on the concept of recency and propose a taxonomy of recency metrics employed in Machine Learning studies for choosing the form of referring expressions in discourse context. We then report on a Multi-Layer Perceptron study and a Sequential Forward Search experiment, followed by Bayes Factor analysis of the outcomes. The results suggest that recency metrics counting paragraphs and sentences contribute to referential choice prediction more than other recency-related metrics. Based on the results of our analysis, we argue that, sensitivity to discourse structure is important for recency metrics used in determining referring expression forms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Speakers use various linguistic forms such as pronouns, proper names, and common nouns, to refer to entities in discourse. A great number of studies have addressed the issue of referring, and the factors that play a role in speakers' choice of the form of referring expressions. These factors include grammatical function (Brennan, 1995) , animacy (Fukumura and van Gompel, 2011) , competition (Arnold and Griffin, 2007) , frequency (Ariel, 1990) and recency (McCoy and Strube, 1999; Ariel, 2001 ), among others. The focus of this article is on recency.", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 337, |
|
"text": "(Brennan, 1995)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 379, |
|
"text": "(Fukumura and van Gompel, 2011)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 420, |
|
"text": "(Arnold and Griffin, 2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 433, |
|
"end": 446, |
|
"text": "(Ariel, 1990)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 483, |
|
"text": "(McCoy and Strube, 1999;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 484, |
|
"end": 495, |
|
"text": "Ariel, 2001", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Broadly speaking, we understand recency to be the distance between the current mention of a referent and its antecedent. Therefore, in this work, we employ recency metrics to predict the form of subsequent mentions, and are not interested in the choice of \"first-mention\" expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recency has received much attention in both linguistic and computational studies, but in many cases, the notion of recency itself has been left largely undefined even though, as we shall see, recency can be understood in different ways. This paper has three objectives. The first is to survey different computational \"interpretations\" of the notion of recency. The second goal is to determine which of these computational interpretations is most effective for predicting the form of a referring expression in discourse context. In other words, we will ask, \"what is the best way to operationalize the notion of recency in computational and data-oriented studies?\" And the final objective is to see to which extent the choice of recency metrics should depend on the corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The structure of this paper is as follows: in section 2, we summarize how recency has been used in linguistic studies. In section 3, we provide a brief overview of the notion of recency in Machine Learning (ML) studies, with the purpose of creating a taxonomy of recency metrics discussed in section 4. Sections 5 and 6 report two new studies. The former analyzes single recency metrics, the latter takes their combination into account. Finally, section 7 gives a brief summary and review of the findings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "There is a long tradition of work in linguistics considering recency as a factor influencing the salience of a referent. The general idea is that the greater the distance between the two mentions, the greater the chance of using a full noun phrase anaphor (Vonk et al., 1992; Giv\u00f3n, 1992; Arnold, 2010) ; conversely, the shorter the distance between the two mentions, the greater the chance of pronominalization. Some studies have kept the notion of recency or \"distance to the previous mention\" opaque by not defining what long and short distance mean; while others have presented different interpretations of the notion of distance. In this paper, we focus on the three most frequent interpretations that are found in the literature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 256, |
|
"end": 275, |
|
"text": "(Vonk et al., 1992;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 288, |
|
"text": "Giv\u00f3n, 1992;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 289, |
|
"end": 302, |
|
"text": "Arnold, 2010)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Different interpretations of the notion of recency/distance", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the studies where the main focus is on the pronominalization problem, the notion of distance is often concerned with whether or not the antecedent is present in the same or previous utterance (or clause) . In a corpus study, Hobbs (1978) noticed that in 98% of the cases, the antecedent of a pronoun anaphor is in the previous or in the same sentence. Ariel (1990) used the same sentence metrics in her corpus study, where she focused on the distribution of pronouns, demonstratives and full NPs. She demonstrated that with respect to distance from the antecedent, in more than 80% of cases, pronouns favor short distances, where the antecedent is in the same sentence or only one sentence away. In centering-based studies such as Hitzeman and Poesio (1998) , Poesio et al. (2004) and Henschel et al. (2000) too, long distance antecedents are those which are more than one utterance or one clause away.", |
|
"cite_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 206, |
|
"text": "(or clause)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 240, |
|
"text": "(1978)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 355, |
|
"end": 367, |
|
"text": "Ariel (1990)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 734, |
|
"end": 760, |
|
"text": "Hitzeman and Poesio (1998)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 763, |
|
"end": 783, |
|
"text": "Poesio et al. (2004)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 788, |
|
"end": 810, |
|
"text": "Henschel et al. (2000)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Immediate context", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In some other corpus-based studies, a larger span of text was taken into account. In a comprehensive work on topic continuity in discourse, Giv\u00f3n (1983) measured the distance to the previous mention up to 20 clauses back. The work by Giv\u00f3n is one of the first attempts in quantifying the role of distance in discourse. In a computational pronominalization study, McCoy and Strube (1999) hypothesized that \"when the last mention of an item is several sentences back in the text, a definite description is preferred\". For this study which was conducted on a corpus of The New York Times articles, they found out that in long-distance situations (where the antecedent is more than two sentences away), a definite description is almost always used. In a psycholinguistics experiment, Arnold et al. (2009) examined the choice of referring expressions made by high-functioning children and adolescents with autism. Arnold et al. grouped the distance to the antecedent into 4 categories and demonstrated that the participants in their experiment had sensitivity to the discourse context.", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 152, |
|
"text": "Giv\u00f3n (1983)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 386, |
|
"text": "McCoy and Strube (1999)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 800, |
|
"text": "Arnold et al. (2009)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-local context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "While the distance patterns explained in the previous paragraphs account for a large number of pronominalization cases, according to Fox (1987) , they cannot handle all various types of anaphoric patterns. She showed that pronouns can be used to refer to a referent over long stretches of distance until the goal of the narrative changes (cited in Smith (2003) ). In line with this idea, Ariel (1990) proposed the notion of unity, meaning, the antecedent being in the same frame, segment or paragraph. Vonk et al. (1992) and Tomlin (1987) also emphasized the importance of episode or unit boundaries, mostly realized as paragraph boundaries in written text, as factors contributing to the recency of mention.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 143, |
|
"text": "Fox (1987)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 360, |
|
"text": "Smith (2003)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 502, |
|
"end": 520, |
|
"text": "Vonk et al. (1992)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 538, |
|
"text": "Tomlin (1987)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit boundary", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "As explained, there are three different interpretations of recency in the literature. The first two interpretations are concerned with measuring the distance in sentences (or clauses), while the third one goes beyond the sentential level, and focuses on paragraphs. Which of these interpretations does best in algorithms to predict referential choice in discourse contexts?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit boundary", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "3 Recency in ML studies Within Natural Language Generation (Gatt and Krahmer, 2018) , reference production is computationally modelled in an area known as Referring Expression Generation (REG) (Krahmer and van Deemter, 2019; van Deemter, 2016) . REG models have various shapes and forms, with feature-based ML models playing a substantial role. GREC ) was a series of Shared Task Evaluation tasks that is still regarded as a natural starting point when it comes to the generation of referring expressions in context. Different ML algorithms were submitted to these shared tasks, a number of which have exploited recency metrics. Some of the metrics used in these algorithms are pursuant to the interpretations mentioned in section 2. For example, the recency feature in Greenbacker and McCoy (2009) resembles the metric defined in McCoy and Strube (1999) . Another example is a binary feature used by Bohnet (2008) , which captures whether or not the antecedent occurs in the same sentence. This metric is similar to the interpretation discussed above under the heading \"Immediate Context\". Some of the other recency metrics used in these algorithms, however, are not in accordance with the interpretations introduced in section 2. For instance, Bohnet (2008) and Jamison and Mehay (2008) used distance metrics measuring number of words between the two mentions. In a more recent ML study, Kibrik et al. (2016) stated that referential choice belongs to a large group of multifactorial processes. They used 7 different distance-related metrics in their study and concluded that these metrics are essential for successful prediction of referential choice, but there is no indication which metrics are the most relevant ones. Further studies that include recency metrics are Ferreira et al. 2016, Modi et al. (2017) and Saha et al. (2011) , among others.", |
|
"cite_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 83, |
|
"text": "(Gatt and Krahmer, 2018)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 224, |
|
"text": "(Krahmer and van Deemter, 2019;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 243, |
|
"text": "van Deemter, 2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 798, |
|
"text": "Greenbacker and McCoy (2009)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 831, |
|
"end": 854, |
|
"text": "McCoy and Strube (1999)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 901, |
|
"end": 914, |
|
"text": "Bohnet (2008)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1246, |
|
"end": 1259, |
|
"text": "Bohnet (2008)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1264, |
|
"end": 1288, |
|
"text": "Jamison and Mehay (2008)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1794, |
|
"end": 1812, |
|
"text": "Modi et al. (2017)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 1817, |
|
"end": 1835, |
|
"text": "Saha et al. (2011)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit boundary", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "We saw that the metrics used in the ML studies are based on different units of measurement (e.g. word distance versus sentence distance). Likewise, different strategies are used to encode these metrics. For instance, some distances are measured in natural numbers while others are categorized in a smaller class of broader \"bins\". In the following example taken from the GREC-2.0 corpus , one could say that the distance between the expression \"its\" and its antecedent \"Berlin\" is 21 words (a natural number). Another solution would be, for instance, to follow Ferreira et al. (2016) in grouping the numerical distances into five groups consisting of 0-10 words, 11-20 words, 21-30 words, 31-40 words and more than 40 words. With this approach, the distance between \"its\" and its antecedent falls into the third bin, 21-30 words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit boundary", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(1) Berlin (1) is (2) the (3) capital (4) city (5) and (6) one (7) of (8) the (9) sixteen (10) federal (11) states (12) of (13) Germany (14) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 140, |
|
"text": "(14)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit boundary", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "(15) With (16) a (17) population (18) of (19) 3.4 (20) million (21) in (22) its (23) city (24) limits (25) ,...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit boundary", |
|
"sec_num": "2.3" |
|
}, |
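The two encoding strategies described above (a raw word count versus Ferreira et al.'s five bins) can be illustrated with a small Python sketch; this is our own illustration, not code from the paper:

```python
def bin_word_distance(distance):
    """Group a numeric word distance into the five bins used by
    Ferreira et al. (2016): 0-10, 11-20, 21-30, 31-40, >40 words."""
    if distance <= 10:
        return "0-10"
    if distance <= 20:
        return "11-20"
    if distance <= 30:
        return "21-30"
    if distance <= 40:
        return "31-40"
    return ">40"

# In the example above, 21 words separate "its" from its antecedent
# "Berlin", so the numeric encoding is 21 and the binned encoding is:
print(bin_word_distance(21))  # -> "21-30"
```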
|
{ |
|
"text": "The question is which of these metrics work best in ML studies. The existing diversity motivated us to collect as many recency metrics as possible from the ML literature and create a taxonomy of recency metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit boundary", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "This section begins with subsection 4.1 introducing recency metrics collected from different ML studies. Later, subsection 4.2 presents the two corpora used in our assessments and highlights their main differences. And finally, subsection 4.3 introduces the baseline algorithm and the ML method employed in our assessments. Table 1 presents the metrics measuring the distance from the current expression to its antecedent 1 . As 1 Greenbacker and McCoy defined the recency metric in their study as: \"Referring expressions which were separated mentioned in the previous section, recency metrics vary a great deal. The most important differences between these metrics are:", |
|
"cite_spans": [ |
|
{ |
|
"start": 429, |
|
"end": 430, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 324, |
|
"end": 331, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "I. Antecedent type In most metrics, the antecedent is the nearest previous mention of the same entity. In one of the metrics (metric 14 in Table 1 ), however, instead of the distance to the nearest mention, the distance to the nearest full NP mention is measured.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 146, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "II. Unit of measurement The units in which the distance is measured vary in the recency metrics. The units of measurements used in the metrics outlined in Table 1 include distance in number of:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 162, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 words [metrics 1-3] \u2022 sentences [metrics 4-11] \u2022 NPs [metric 12]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 markables, defined as the textual expressions, between which coreferential relations can be established (Chiarcos and Krasavina, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 136, |
|
"text": "(Chiarcos and Krasavina, 2005)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "\u2022 paragraphs [metric 15]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "III. Type of encoding As shown in Example (1), the major difference between encoding of the metrics is whether the distance is reported as a numeric value or defined bins. Among the metrics presented below, metrics 2, 3, 5, 6, 7 and 10 are categorical, the rest are numeric. Another difference in type of encoding concerns how numeric values are encoded. Of the metrics used in this assessment, metrics 1, 4 and 12-15 are reported as natural numbers (including 0), metric 8 is the natural logarithm of the number of intervening sentences, metric 9 is its exponential variant 2 and metric 11, which will be explained below, is the normalized distance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Scaled/normalized sentence distance The distance between the mentions ranges from 0 to 19 sentences in MSR and 0 to 146 sentences in WSJ. To overcome this sparsity, we decided to bound from the most recent reference by more than two sentences were marked as long distance references\" (2009, p. 101). We have two different interpretations of this sentence which are presented as metric 5 and metric 6. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x norm = x i \u2212 x min x max \u2212 x min", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
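As an illustration of the numeric encodings just listed, the following sketch (our own, not the authors' code) computes the min-max normalization of Equation (1) together with a log-transformed sentence distance; adding 1 before taking the logarithm is our assumption to keep a distance of 0 defined:

```python
import math

def normalized_distance(distance, d_min, d_max):
    """Min-max normalization of Equation (1): maps a sentence distance
    onto [0, 1] given the corpus-wide minimum and maximum distances."""
    return (distance - d_min) / (d_max - d_min)

def log_distance(distance):
    """Natural-log encoding of the sentence distance (cf. metric 8).
    Adding 1 before the log is an assumption to keep distance 0 defined."""
    return math.log(distance + 1)

# Example: a sentence distance of 7 in WSJ, where distances range from 0 to 146.
print(normalized_distance(7, 0, 146))  # ~0.048
print(log_distance(7))                 # ~2.079
```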
|
{ |
|
"text": "In this section, we introduced 14 metrics from the ML literature, plus one additional metric we decided to include in the study. The assessment of these metrics will be presented in section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Taxonomy of recency/distance metrics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As indicated earlier, we are also interested to find out the extent to which the choice of recency metrics should take the corpus itself into account. Corpora can be different from each other in terms of, for instance, size, genre (e.g. Wikipedia article, newspaper articles and medical reports) and structure of their documents (e.g. length and sentence structure). For this study, we have chosen two corpora which are different from each other in terms of text genre and length-related attributes (which will be referred to as text structure in this article).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpora used in this study", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Considering that the GREC Shared Tasks were among the first systematic studies tackling the referential choice in context, we decided to start our assessment of the metrics with GREC-2.0 (henceforth MSR 3 ), one of the underlying corpora of these Shared Tasks 4 . MSR consists of more than 1500 introductory sections of Wikipedia articles in 5 different classes (people, city, country, river and mountain). The major pitfall of MSR is that only mentions to the main reference of the article are annotated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpora used in this study", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In addition to MSR, we decided to include the Wall Street Journal portion (henceforth WSJ) of the OntoNotes corpus (Hovy et al., 2006; Pradhan et al., 2013) in this study. The genres of the two corpora are different, with the former containing Wikipedia articles, and the latter having newspaper articles. Also, the structure of the documents, such as length of each document, number of sentences and number of paragraphs are radically different across both corpora. The existing differences between the two corpora make it possible to explore whether the choice of recency metrics should depend on the text structure. Table 2 illustrates the major differences between the two corpora. In order to apply the recency metrics to MSR, we conducted tokenization and sentence segmentation using the spaCy python library. The texts of WSJ were already segmented and tokenized.", |
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 134, |
|
"text": "(Hovy et al., 2006;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 135, |
|
"end": 156, |
|
"text": "Pradhan et al., 2013)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 619, |
|
"end": 626, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpora used in this study", |
|
"sec_num": "4.2" |
|
}, |
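A minimal sketch of the preprocessing step mentioned above; the spaCy model name en_core_web_sm is an assumption, since the paper only states that the spaCy library was used:

```python
import spacy

# Assumed English model; the paper only states that spaCy was used.
nlp = spacy.load("en_core_web_sm")

text = ("Berlin is the capital city and one of the sixteen federal states "
        "of Germany. With a population of 3.4 million in its city limits, "
        "Berlin is also the country's largest city.")
doc = nlp(text)

sentences = [sent.text for sent in doc.sents]  # sentence segmentation
tokens = [token.text for token in doc]         # tokenization
print(len(sentences), len(tokens))
```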
|
{ |
|
"text": "It is also important to note that four referring expression types, namely common noun, proper name, pronoun and zero anaphor are annotated in MSR. In WSJ, zero cases are not annotated, and only realized expressions are considered. For this reason, we decided to include only realized expressions (namely common nouns, proper names and pronouns) in our study and exclude the covert references. Hence, as mentioned before, the task in this study is to predict whether a target referring expression is a pronoun, a proper name or a common noun. The total number of referring expressions is 9306 in MSR and 21565 in WSJ, of which we placed 70% in a training set and 30% in a test set. As shown in Table 2 , the documents in WSJ are roughly 4 times longer than the documents in MSR. Also, each document has a greater number of sentences and paragraphs. We expect that in the ML studies, the WSJ algorithms overall have a lower accuracy than the MSR algorithms.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 693, |
|
"end": 700, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpora used in this study", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In order to assess the recency metrics, the first step is to create a baseline algorithm which contains no recency metric. This enables us to compare the performance of the experimental algorithms incorporating recency metrics against the baseline. We could have chosen different features, but we chose grammatical role of the current mention and grammatical role of the previous mention as the features of the baseline system for the following reasons: Using grammatical role is a safe choice, because the same syntactic categories were used in both corpora, so any differences in performance between the two corpora will not be due to differences in the annotations. Furthermore, we wanted to make sure that the features in the baseline algorithm are not confounding with recency metrics. For example, a competition-based feature such as the number of competing discourse entities between the two mentions would be confounding because the more competition there is, the greater the distance between the referent and the antecedent is likely to be. For this reason, we chose an algorithm that did not use anything other than grammatical role.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline algorithms and ML method", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "In this study, we use Multi-Layer Perceptron (henceforth MLP), a class of feedforward artificial neural networks as our ML approach. The model has two hidden layers with respectively 16 and 8 units. While hidden layers use the rectified linear activation function (ReLU), the output layer uses the softmax activation function. The model will be fit for 50 training epochs, and 50 samples (batch size) are being propagated through the network. It is noteworthy that since MLP cannot handle categorical data, all categorical metrics have been onehot encoded in this study.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline algorithms and ML method", |
|
"sec_num": "4.3" |
|
}, |
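The MLP described above can be sketched as follows; the paper specifies only the layer sizes, activations, number of epochs and batch size, so the choice of library (Keras), optimizer and loss function here are our assumptions:

```python
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

n_features = 8  # hypothetical: one-hot baseline features plus one recency metric
n_classes = 3   # pronoun, proper name, common noun

model = Sequential([
    Dense(16, activation="relu", input_shape=(n_features,)),  # first hidden layer
    Dense(8, activation="relu"),                              # second hidden layer
    Dense(n_classes, activation="softmax"),                   # output layer
])
# Optimizer and loss are not stated in the paper; these are common defaults.
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for the encoded training set.
X_train = np.random.rand(200, n_features)
y_train = np.eye(n_classes)[np.random.randint(0, n_classes, 200)]
model.fit(X_train, y_train, epochs=50, batch_size=50)
```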
|
{ |
|
"text": "This section firstly reports on the success of the baseline algorithms, and continues with the algorithms incorporating the recency metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Assessing recency metrics using MLP", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We mentioned in the previous section that the baseline algorithms are made up of two features, the grammatical role of the current mention and the grammatical role of its antecedent. Table 3 shows the accuracy of the two baseline algorithms. MSR WSJ baseline 0.585 0.55 Table 3 : Accuracy of the MSR and WSJ baseline algorithms", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 190, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 270, |
|
"end": 277, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Baseline algorithms", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Each experimental algorithm is composed of two baseline features and one recency metric. For instance, model 4 includes grammatical role of the current mention and the antecedent plus metric 4, which is the numerical distance in sentences. Since there are 15 different recency metrics and two different corpora, the total number of experimental algorithms is 30. If, for instance, an experimental algorithm would have 2 recency metrics instead of one, we would not be able to firmly test whether both features contribute to the performance of the algorithm, or only one of them is involved. For this reason, each metric is tested individually, and not in combination with other recency metrics. The overall accuracy of the experimental algorithms incorporating different recency metrics is reported in Table 4 : Accuracy of the experimental algorithms. The first column, Meas(urement) Unit specifies metrics' units of measurement detailed in section 4.1, II.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 802, |
|
"end": 809, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Assessing recency metrics", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The reported accuracies are all higher than the baseline accuracy, but it is still unclear whether the recency metrics are strongly informative of the probability of the increase in the accuracy of the algorithms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit of measurement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We conducted Bayes Factor (henceforth BF) analysis using a beta distribution to investigate whether the outcomes of the experimental and the baseline algorithms come from distributions with the same underlying probability parameter, or ones with different underlying parameters. Hence, in the case of our current assessment, BF is used to determine whether or not there is good evidence for saying that the difference in accuracy rates of the models is less or greater than 0.01 (henceforth threshold). If the difference in accuracy is below the threshold, the evidence is in favor of similar distributions; if it is above the threshold, there is good evidence that the outcomes come from different distributions. In case of being from different distributions, we infer that the inclusion of recency metrics leads to an improvement in the performance of experimental algorithms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit of measurement", |
|
"sec_num": null |
|
}, |
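One way to operationalize such a comparison is sketched below: each model's correct predictions on the test set receive a Beta posterior, and Monte Carlo samples estimate the odds that the accuracy difference exceeds the 0.01 threshold. This is an illustration under our own assumptions (uniform Beta(1, 1) priors, posterior odds reported), not necessarily the authors' exact computation:

```python
import numpy as np

rng = np.random.default_rng(42)

def evidence_ratio(correct_base, n_base, correct_exp, n_exp,
                   threshold=0.01, draws=200_000):
    """Posterior odds that the experimental model's accuracy exceeds the
    baseline's by more than `threshold`, under Beta(1, 1) priors on each
    model's underlying probability of a correct prediction."""
    acc_base = rng.beta(correct_base + 1, n_base - correct_base + 1, draws)
    acc_exp = rng.beta(correct_exp + 1, n_exp - correct_exp + 1, draws)
    p_above = np.mean(acc_exp - acc_base > threshold)
    return p_above / max(1.0 - p_above, 1e-12)

# Hypothetical counts on a 2,792-item MSR test set (30% of 9,306 expressions):
# baseline accuracy 0.585 vs. an experimental model at 0.62.
print(evidence_ratio(int(0.585 * 2792), 2792, int(0.62 * 2792), 2792))
```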
|
{ |
|
"text": "Additionally, the strength of evidence for each experimental model versus the baseline will be assessed according to the scale of Kass and Raftery (1995) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 153, |
|
"text": "Kass and Raftery (1995)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unit of measurement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Not worth more than a bare mention 3 to 20 Positive 20 to 150 Strong >150 Very strong Table 5 : Interpretation of Bayes Factors according to Kass and Raftery (1995, p. 777) For the sake of space, we only report the results suggesting that the outcomes of the experimental and the baseline algorithms come from different distributions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 172, |
|
"text": "Kass and Raftery (1995, p. 777)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 93, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BF Interpretation 1 to 3", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Comparing the rate of correct predictions of each experimental model to that of the baseline shows positive evidence that the accuracy of model 15, the one incorporating distance in paragraph as its recency metric, comes from different distribution than the baseline (BF=3.286). The other models were doing better than the baseline too, but there is insufficient evidence to say they are different from the baseline. More research is needed to investigate why other experimental models are not statistically different from the baseline.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BF analysis of the MSR models", |
|
"sec_num": "5.2.1" |
|
}, |
|
{ |
|
"text": "In the case of WSJ, the accuracy rates of 8 models are different from the accuracy of the baseline. Similar to MSR, the outcome of model 15, utilizing the paragraph-based recency metric, comes from distributions with different underlying probabilities than the baseline. Additionally, except the outcome of model 5, there is very strong evidence that the accuracy of all other models (6 models in total) incorporating sentence-based recency metrics are being shifted by more than 0.01 beyond the baseline. This means, 6 out of 7 sentence-based recency metrics have improved the performance of the algorithms over the baseline. The remaining model with a different accuracy than the baseline is model 12, having NP distance as its recency metric. Table 5 , there is very strong evidence that the accuracy rates of all these models are different from the baseline. The column Def presents very briefly the definition of the metrics according to Table 1 . For instance, cat(4) means the categorical distance in 4 bins.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 746, |
|
"end": 753, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 943, |
|
"end": 950, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BF analysis of the WSJ models", |
|
"sec_num": "5.2.2" |
|
}, |
|
{ |
|
"text": "As a next step, we compare the best performing models of each unit of measurement with each other. Since the only difference between the models is in their recency metrics, if there is good evidence that the difference in the accuracy of the models is greater than the threshold, we conclude that this difference is due to the differences in the recency metrics. Table 7 illustrates the best performing algorithms of each unit of measurement.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 363, |
|
"end": 370, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "BF analysis of the best performing models", |
|
"sec_num": "5.2.3" |
|
}, |
|
{ |
|
"text": "We conducted a one to one comparison between the best performing models of each unit. The evidence suggests that these models are not statistically different from each other. Table 7 : Best performing algorithms of each unit of measurement from each other. In other words, if we only focus on the WSJ corpus, we do not have enough evidence to prefer one model over another, and we can conclude that the best performing models incorporating sentence, paragraph and NP level recency metrics are equally good. But when we did a one to one comparison between these three models and the best performing models of word and markable units, we found out that the accuracy rates of each of these models have been shifted by more than 0.01 beyond the accuracy rates of the word and markable models. This means, the models incorporating paragraph, sentence and NP level metrics are statistically different from the models incorporating word and markable level information.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 182, |
|
"text": "Table 7", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "I. MSR models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As discussed in this section, the recency metrics clearly made a bigger improvement in the WSJ models. In the case of MSR, only one model had a distinguishable performance; while in the case of WSJ, 8 models performed statistically better than the baseline. Furthermore, sentence, paragraph and NP-based metrics evidentially improved the performance of the WSJ algorithms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "II. WSJ models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The results reported in this section were based on the assessment of single recency metrics; yet, there is no assessment of the combination of these metrics. In the next section, we report on a feature selection study we conducted to investigate which combinations of recency metrics lead to best results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "II. WSJ models", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to investigate the extent to which the combination of different recency metrics improves the performance, we run a Sequential Forward Search (SFS) algorithm. The algorithm starts with an empty set and adds features to the model up to the point that no further improvement occurs. For this study, we used the R package mlr (Bischl et al., 2016) with the learner classif.mlp, and 5-fold cross-validation resampling strategy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 331, |
|
"end": 352, |
|
"text": "(Bischl et al., 2016)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequential Forward Search", |
|
"sec_num": "6" |
|
}, |
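The paper ran SFS with the R package mlr; the following is a rough Python analogue (our assumption, not the authors' setup) using scikit-learn's forward SequentialFeatureSelector wrapped around an MLP:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

# Hypothetical feature matrix: columns are the candidate recency metrics
# (plus the two baseline grammatical-role features); y holds the three
# referential forms (pronoun, proper name, common noun).
X = np.random.rand(500, 17)
y = np.random.randint(0, 3, 500)

mlp = MLPClassifier(hidden_layer_sizes=(16, 8), activation="relu",
                    batch_size=50, max_iter=50)

# Forward search with 5-fold cross-validation, mirroring the mlr setup.
sfs = SequentialFeatureSelector(mlp, direction="forward",
                                scoring="accuracy", cv=5)
sfs.fit(X, y)
print(np.flatnonzero(sfs.get_support()))  # indices of the selected features
```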
|
{ |
|
"text": "The result of the MSR experiment shows that the two recency metrics playing the most important roles are metric 15, distance in paragraph, and metric 9, exponential distance in sentences. Retraining the MLP algorithm on the new model, the accuracy is 0.637. The Bayes Factor analysis provides strong evidence that the outcome of this model is statistically different from the baseline (BF = 26.11).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequential Forward Search", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In the WSJ SFS experiment, metric 15, distance in paragraphs, and metric 8, log distance in sentences, were chosen as the two recency features whose combination produced the best result. The model trained on the combination of these two metrics had the accuracy of 0.631. The Bayes Factor analysis finds very strong evidence that the outcomes of the baseline and this model are coming from different distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequential Forward Search", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "What stands out in this experiment is that in the case of both MSR and WSJ, distance in paragraph is chosen as one of the recency metrics. The other chosen measures are exponential distance in MSR and logarithmic distance in WSJ. This could indicate that the algorithm is sensitive to the encoding of the sentence-based metrics. More experimentation in a more elaborated feature-based study is necessary to test this point.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sequential Forward Search", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our goal was to shed light on different interpretations of recency, and to find out which of these interpretations are most effective for referential choice prediction. A subsidiary goal was to investigate whether the choice of recency metric should take corpus-specific features such as text genre and text structure into consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The findings of this study should be of interest to theoretical and computational linguists alike, because both groups of researchers have studied the relation between recency and referential choice. In the linguistic tradition, the notion of recency has often been studied without a clear definition being offered (section 2). In the computational tradition, by contrast, researchers have dwelt less on theoretical justification but have had to provide precise definitions, to ensure that their algorithms are able to deal with a broad range of inputs. For example, Kibrik et al. 2016 Another difference is that in the linguistic tradition, researchers usually think of recency as operating solely on the sentence or paragraph levels; while in computational works, less conventional metrics such as measuring the distance in words or NPs have been also practiced. We believe that the existence of a wider range of recency metrics in computational feature-based studies has the potential to open new windows into a better understanding of recency, and can encourage a re-evaluation of recency in the linguistic tradition. What is missing from many computational works is an explanation of why a certain metric or a certain way of encoding has been chosen over another. The findings from this study make the following contributions to the literature:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Creating a taxonomy of recency metrics After providing an overview of the most prevalent interpretations of recency in the linguistic tradition, we scrutinized the feature-based ML studies and provided, for the first time as far as we know, a taxonomy of recency metrics. The importance of this taxonomy is firstly that we do not know of any available work classifying and analyzing this notion comprehensively, so this work could be a starting point for getting deeper into the notion of recency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Secondly, we have shed light on the differences between these metrics. Knowing what the differences are, and where they stem from, could be the first step in dissecting various aspects of this notion and developing new, improved recency metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Assessing a wide range of recency metrics We have assessed individual metrics using the Multilayer Perceptron algorithm, and conducted a Bayes Factor analysis using a beta distribution to investigate whether there is evidence that the models incorporating recency metrics come from different distributions than the baseline algorithms. Additionally, we conducted a Bayes Factor analysis between the best performing models of each measurement unit to see whether there is enough evidence that the outcomes of models are different from each other.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The evidence reported in Table 6 for the models built on the WSJ corpus suggests that the outcome of the models incorporating NP, paragraph and sentence metrics have been shifted by more than 0.01 beyond the baseline's outcome. Also, we have strong evidence to believe that these models are sta-tistically different from the models incorporating word and markable distance measures.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 32, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Additionally, the results of the Sequential Forward Search experiment show that, for both corpora, a combination of the paragraph-based and one of the sentence-based metrics leads to the best performance. This finding is important because it provides some direction in choosing recency metrics for feature-based computational studies. Furthermore, the Bayes Factor analysis and SFS combined suggest that \"higher-level\" metrics such as distance in paragraphs and sentences might result in greater changes in the performance of the algorithms than \"lower-level\" metrics based on counting words or markables. Finally, it raises the question of why a measurement such as distance in the number of sentences performs better than a measurement such as distance in the number of words. This is notable because the distance in words might be more indicative of the physical distance between the mentions, considering that sentences can vary enormously in length.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Another interesting observation is that some encoding solutions are more successful than others. For instance, the sentential distance in metric 5 is grouped into 2 bins of +/-2 sentences, while in metric 6, the distance is grouped into 4 bins of 0, 1, 2 or more than 2 sentences. While the former metric leads to a marginal difference in the performance of the algorithms, the latter contributes more to the improvement of the accuracy. These subtle differences in encoding and the great impact that they can make should be the focus of more experimentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Another major finding was the important role of distance measured in paragraphs. The Bayes Factor analysis showed that there is strong evidence for the differences between the performance of the baseline and the algorithms incorporating this metric. Also, using the SFS algorithm, this metric was selected in both MSR and WSJ as a feature contributing to the improvement of the results. The important role of paragraph information is in line with what we presented in section 2 under the topic \"Unit boundary\". According to Vonk et al. (1992) , episode boundaries can decrease the accessibility of a referent, resulting in re-mentioning with full NPs. This might be the reason that including paragraph distance, and signaling whether or not the antecedent is in a different paragraph, makes the referential choice prediction simpler for the algo-rithms. The surprising point is that despite the major role of paragraph information, the only study from subsection 4.1 which has used the paragraph distance metric is Kibrik et al. (2016) . The results from the current study could motivate a greater focus on paragraph-based information in featurebased studies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 524, |
|
"end": 542, |
|
"text": "Vonk et al. (1992)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1015, |
|
"end": 1035, |
|
"text": "Kibrik et al. (2016)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Importance of the choice of corpus Surprisingly, the results of this study showed that recency measures were of greater importance when applied to WSJ than to MSR. In case of the MSR models, the only metric which in isolation led to a distribution different from the baseline was distance in the number of paragraphs, while in the case of WSJ, 8 different recency metrics led to major differences. One possible reason for the different behavior of recency metrics could be that due to unbalanced number of referring expression types (more than 50% pronouns and less than 20% common names), MSR is, most likely, not a suitable corpus for a three-way referential choice task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "It can be seen from the data in Table 2 that except the length of the sentences which is almost equal in both corpora, other text structure features, such as the number of words, sentences and paragraphs are very different from each other (with WSJ having almost 4 times more words, sentences and paragraphs). One speculation is that lengthrelated features modulate the importance of the recency metrics in the ML models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 32, |
|
"end": 39, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Further research is needed to identify the causes of this difference. However, based on our study, one might conclude that the more complex the discourse structure, the greater the role of recency measures. If this is true, it would be of great importance to carefully inspect the characteristics of the textual source prior to deciding which features to include in the study, as apparently, the choice of recency metric should depend on text genre and structure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The exponential distance is not reported for WSJ in this study.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As this corpus is used in the GREC-MSR Shared Tasks, we abbreviate its name to MSR.4 We decided to exclude GREC-People, the other corpus used in these Shared Tasks because after the exclusion of the first mention expressions, only 121 instances of common nouns (2.16% of the whole data) were left. In a pilot study, we found out that the data is not enough for a three-way referential choice prediction task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Accessing Noun-Phrase Antecedents", |
|
"authors": [ |
|
{ |
|
"first": "Mira", |
|
"middle": [ |
|
"Ariel" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mira Ariel. 1990. Accessing Noun-Phrase Antecedents. Routledge.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Accessibility theory: An overview. Text representation: Linguistic and psycholinguistic aspects", |
|
"authors": [ |
|
{ |
|
"first": "Mira", |
|
"middle": [ |
|
"Ariel" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "8", |
|
"issue": "", |
|
"pages": "29--87", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mira Ariel. 2001. Accessibility theory: An overview. Text representation: Linguistic and psycholinguistic aspects, 8:29-87.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "How speakers refer: The role of accessibility", |
|
"authors": [ |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arnold", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Language and Linguistics Compass", |
|
"volume": "4", |
|
"issue": "4", |
|
"pages": "187--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jennifer E Arnold. 2010. How speakers refer: The role of accessibility. Language and Linguistics Compass, 4(4):187-203.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Reference production in young speakers with and without autism: Effects of discourse status and processing constraints", |
|
"authors": [ |
|
{ |
|
"first": "Jennifer", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Arnold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Loisa", |
|
"middle": [], |
|
"last": "Bennetto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Diehl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Cognition", |
|
"volume": "110", |
|
"issue": "2", |
|
"pages": "131--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jennifer E Arnold, Loisa Bennetto, and Joshua J Diehl. 2009. Reference production in young speakers with and without autism: Effects of discourse status and processing constraints. Cognition, 110(2):131-146.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The effect of additional characters on choice of referring expression: Everyone counts", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Jennifer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Arnold", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zenzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Griffin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Journal of memory and language", |
|
"volume": "56", |
|
"issue": "4", |
|
"pages": "521--536", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jennifer E Arnold and Zenzi M Griffin. 2007. The ef- fect of additional characters on choice of referring expression: Everyone counts. Journal of memory and language, 56(4):521-536.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The GREC challenges 2010: overview and evaluation results", |
|
"authors": [ |
|
{ |
|
"first": "Anja", |
|
"middle": [], |
|
"last": "Belz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Kow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 6th international natural language generation conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "219--229", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anja Belz and Eric Kow. 2010. The GREC challenges 2010: overview and evaluation results. In Proceed- ings of the 6th international natural language gen- eration conference, pages 219-229. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Generating referring expressions in context: The task evaluation challenges", |
|
"authors": [ |
|
{ |
|
"first": "Anja", |
|
"middle": [], |
|
"last": "Belz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Kow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jette", |
|
"middle": [], |
|
"last": "Viethen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Albert", |
|
"middle": [], |
|
"last": "Gatt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Empirical methods in natural language generation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "294--327", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anja Belz, Eric Kow, Jette Viethen, and Albert Gatt. 2010. Generating referring expressions in context: The task evaluation challenges. In Empirical meth- ods in natural language generation, pages 294-327. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "mlr: Machine Learning in R", |
|
"authors": [ |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bischl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Lang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lars", |
|
"middle": [], |
|
"last": "Kotthoff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Schiffner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Richter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erich", |
|
"middle": [], |
|
"last": "Studerus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Casalicchio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Jones", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "The Journal of Machine Learning Research", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "5938--5942", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernd Bischl, Michel Lang, Lars Kotthoff, Julia Schiffner, Jakob Richter, Erich Studerus, Giuseppe Casalicchio, and Zachary M Jones. 2016. mlr: Ma- chine Learning in R. The Journal of Machine Learn- ing Research, 17(1):5938-5942.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "IS-G: The comparison of different learning techniques for the selection of the main subject references", |
|
"authors": [ |
|
{ |
|
"first": "Bernd", |
|
"middle": [], |
|
"last": "Bohnet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Fifth International Natural Language Generation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "192--193", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernd Bohnet. 2008. IS-G: The comparison of differ- ent learning techniques for the selection of the main subject references. In Proceedings of the Fifth Inter- national Natural Language Generation Conference, pages 192-193. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Centering attention in discourse. Language and Cognitive processes", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Susan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Brennan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "137--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Susan E Brennan. 1995. Centering attention in discourse. Language and Cognitive processes, 10(2):137-167.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Annotation guidelines. pocos-potsdam coreference scheme", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Chiarcos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Krasavina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christian Chiarcos and Olga Krasavina. 2005. Annota- tion guidelines. pocos-potsdam coreference scheme. Unpublished manuscript.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Computational models of referring: a study in cognitive science", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kees Van Deemter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kees van Deemter. 2016. Computational models of re- ferring: a study in cognitive science. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Towards more variation in text generation: Developing and evaluating variation models for choice of referential form", |
|
"authors": [ |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Thiago Castro Ferreira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sander", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wubben", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "568--577", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thiago Castro Ferreira, Emiel Krahmer, and Sander Wubben. 2016. Towards more variation in text gen- eration: Developing and evaluating variation models for choice of referential form. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 568-577.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Discourse Structure and Anaphora: Written and Conversational English. Cambridge Studies in Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Barbara", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Fox", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara A. Fox. 1987. Discourse Structure and Anaphora: Written and Conversational English. Cambridge Studies in Linguistics. Cambridge Uni- versity Press.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The effect of animacy on the choice of referring expression", |
|
"authors": [ |
|
{ |
|
"first": "Kumiko", |
|
"middle": [], |
|
"last": "Fukumura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Roger Pg Van Gompel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Language and cognitive processes", |
|
"volume": "26", |
|
"issue": "10", |
|
"pages": "1472--1504", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kumiko Fukumura and Roger PG van Gompel. 2011. The effect of animacy on the choice of referring expression. Language and cognitive processes, 26(10):1472-1504.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Albert", |
|
"middle": [], |
|
"last": "Gatt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "61", |
|
"issue": "", |
|
"pages": "65--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artifi- cial Intelligence Research, 61:65-170.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Topic continuity in discourse: An introduction. Topic continuity in discourse: A quantitative cross-language study", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Talmy Giv\u00f3n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Talmy Giv\u00f3n. 1983. Topic continuity in discourse: An introduction. Topic continuity in discourse: A quan- titative cross-language study, 3:1-42.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The grammar of referential coherence as mental processing instructions. Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Talmy Giv\u00f3n", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Talmy Giv\u00f3n. 1992. The grammar of referential coher- ence as mental processing instructions. Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Udel: generating referring expressions guided by psycholinguistic findings", |
|
"authors": [ |
|
{ |
|
"first": "Charles", |
|
"middle": [], |
|
"last": "Greenbacker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Workshop on Language Generation and Summarisation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--102", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charles Greenbacker and Kathleen McCoy. 2009. Udel: generating referring expressions guided by psycholinguistic findings. In Proceedings of the 2009 Workshop on Language Generation and Sum- marisation, pages 101-102. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Cnts: Memory-based learning of generating repeated references", |
|
"authors": [ |
|
{ |
|
"first": "Iris", |
|
"middle": [], |
|
"last": "Hendrickx", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kim", |
|
"middle": [], |
|
"last": "Luyckx", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Morante", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Van Asch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Fifth International Natural Language Generation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "194--195", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Iris Hendrickx, Walter Daelemans, Kim Luyckx, Roser Morante, and Vincent Van Asch. 2008. Cnts: Memory-based learning of generating repeated refer- ences. In Proceedings of the Fifth International Nat- ural Language Generation Conference, pages 194- 195. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Pronominalization revisited", |
|
"authors": [ |
|
{ |
|
"first": "Renate", |
|
"middle": [], |
|
"last": "Henschel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 18th conference on Computational linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "306--312", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Renate Henschel, Hua Cheng, and Massimo Poesio. 2000. Pronominalization revisited. In Proceedings of the 18th conference on Computational linguistics- Volume 1, pages 306-312. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Long distance pronominalisation and global focus", |
|
"authors": [ |
|
{ |
|
"first": "Janet", |
|
"middle": [], |
|
"last": "Hitzeman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "The 17th International Conference on Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janet Hitzeman and Massimo Poesio. 1998. Long dis- tance pronominalisation and global focus. In COL- ING 1998 Volume 1: The 17th International Confer- ence on Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Resolving pronoun references", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Jerry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hobbs", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1978, |
|
"venue": "Lingua", |
|
"volume": "44", |
|
"issue": "4", |
|
"pages": "311--338", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jerry R Hobbs. 1978. Resolving pronoun references. Lingua, 44(4):311-338.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Ontonotes: the 90% solution", |
|
"authors": [ |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the human language technology conference of the NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the human lan- guage technology conference of the NAACL, Com- panion Volume: Short Papers, pages 57-60.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Using discourse features for referring expression generation", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Jamison", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 5th Meeting of the Midwest Computational Linguistics Colloquium (MCLC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Jamison. 2008. Using discourse features for re- ferring expression generation. In Proceedings of the 5th Meeting of the Midwest Computational Linguis- tics Colloquium (MCLC).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Osu-2: Generating referring expressions with a maximum entropy classifier", |
|
"authors": [ |
|
{ |
|
"first": "Emily", |
|
"middle": [], |
|
"last": "Jamison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dennis", |
|
"middle": [], |
|
"last": "Mehay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Fifth International Natural Language Generation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "196--197", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emily Jamison and Dennis Mehay. 2008. Osu-2: Gen- erating referring expressions with a maximum en- tropy classifier. In Proceedings of the Fifth Inter- national Natural Language Generation Conference, pages 196-197. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Bayes factors", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Robert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adrian", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Kass", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Raftery", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Journal of the american statistical association", |
|
"volume": "90", |
|
"issue": "430", |
|
"pages": "773--795", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert E Kass and Adrian E Raftery. 1995. Bayes fac- tors. Journal of the american statistical association, 90(430):773-795.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Referential choice: Predictability and its limits", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Andrej", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mariya", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Kibrik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Khudyakova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Grigory", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anastasia", |
|
"middle": [], |
|
"last": "Dobrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitrij A", |
|
"middle": [], |
|
"last": "Linnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zalmanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Frontiers in psychology", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrej A Kibrik, Mariya V Khudyakova, Grigory B Dobrov, Anastasia Linnik, and Dmitrij A Zalmanov. 2016. Referential choice: Predictability and its lim- its. Frontiers in psychology, 7(1429).", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Computational Generation of Referring Expressions: An Updated Survey", |
|
"authors": [ |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kees Van Deemter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emiel Krahmer and Kees van Deemter. 2019. Com- putational Generation of Referring Expressions: An Updated Survey. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Generating anaphoric expressions: pronoun or definite description", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Kathleen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Strube", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "The Relation of Discourse/Dialogue Structure and Reference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kathleen F McCoy and Michael Strube. 1999. Gen- erating anaphoric expressions: pronoun or definite description? In The Relation of Discourse/Dialogue Structure and Reference.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Modeling semantic expectation: Using script knowledge for referent prediction", |
|
"authors": [ |
|
{ |
|
"first": "Ashutosh", |
|
"middle": [], |
|
"last": "Modi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ivan", |
|
"middle": [], |
|
"last": "Titov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vera", |
|
"middle": [], |
|
"last": "Demberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asad", |
|
"middle": [], |
|
"last": "Sayeed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "31--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashutosh Modi, Ivan Titov, Vera Demberg, Asad Say- eed, and Manfred Pinkal. 2017. Modeling seman- tic expectation: Using script knowledge for referent prediction. Transactions of the Association for Com- putational Linguistics, 5:31-44.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "WLV: A confidence-based machine learning method for the GREC-NEG'09 task", |
|
"authors": [ |
|
{ |
|
"first": "Constantin", |
|
"middle": [], |
|
"last": "Or\u0203san", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iustin", |
|
"middle": [], |
|
"last": "Dornescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Workshop on Language Generation and Summarisation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "107--108", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Constantin Or\u0203san and Iustin Dornescu. 2009. WLV: A confidence-based machine learning method for the GREC-NEG'09 task. In Proceedings of the 2009 Workshop on Language Generation and Summarisa- tion (UCNLG+Sum 2009), pages 107-108. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Centering: A parametric theory and its instantiations", |
|
"authors": [ |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rosemary", |
|
"middle": [], |
|
"last": "Stevenson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [ |
|
"Di" |
|
], |
|
"last": "Eugenio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janet", |
|
"middle": [], |
|
"last": "Hitzeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Computational linguistics", |
|
"volume": "30", |
|
"issue": "3", |
|
"pages": "309--363", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Massimo Poesio, Rosemary Stevenson, Barbara Di Eu- genio, and Janet Hitzeman. 2004. Centering: A para- metric theory and its instantiations. Computational linguistics, 30(3):309-363.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Towards robust linguistic analysis using ontonotes", |
|
"authors": [ |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sameer Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Bj\u00f6rkelund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yuchen", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "143--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Pro- ceedings of the Seventeenth Conference on Computa- tional Natural Language Learning, pages 143-152.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Single and multi-objective optimization for feature selection in anaphora resolution", |
|
"authors": [ |
|
{ |
|
"first": "Sriparna", |
|
"middle": [], |
|
"last": "Saha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asif", |
|
"middle": [], |
|
"last": "Ekbal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olga", |
|
"middle": [], |
|
"last": "Uryupina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimo", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "93--101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sriparna Saha, Asif Ekbal, Olga Uryupina, and Mas- simo Poesio. 2011. Single and multi-objective opti- mization for feature selection in anaphora resolution. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 93-101.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Referring expressions in discourse, Cambridge Studies in Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Carlota", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "123--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carlota S. Smith. 2003. Referring expressions in discourse, Cambridge Studies in Linguistics, page 123-152. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Coherence and grounding in discourse: outcome of a symposium", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Russell S Tomlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1984, |
|
"venue": "John Benjamins Publishing", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Russell S Tomlin. 1987. Coherence and grounding in discourse: outcome of a symposium, Eugene, Ore- gon, June 1984, volume 11. John Benjamins Pub- lishing.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "The use of referential expressions in structuring discourse. Language and cognitive processes", |
|
"authors": [ |
|
{ |
|
"first": "Wietske", |
|
"middle": [], |
|
"last": "Vonk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gmm", |
|
"middle": [], |
|
"last": "Lettica", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hustinx", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Simons", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "301--333", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wietske Vonk, Lettica GMM Hustinx, and Wim HG Si- mons. 1992. The use of referential expressions in structuring discourse. Language and cognitive pro- cesses, 7(3-4):301-333.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "defined 7 different implementations of the notion of recency taking different units of measurement into account; while Saha et al. (2011) employed various implementations of sentence-related metrics.", |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"text": "List of metrics collected from different ML studies", |
|
"content": "<table><tr><td>the values between two numbers [0,1], using the</td></tr><tr><td>following formula:</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"text": "Comparison of the MSR and WSJ corpora in terms of length-related features and number of different types of referring expressions. Mean n of chains, meaning mean number of different annotated referents in a document, is not reported for MSR because only one chain per document is annotated.", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"2\">Meas Unit Name</td><td>MSR</td><td>WSJ</td></tr><tr><td/><td>model 1</td><td>0.60</td><td>0.576</td></tr><tr><td>Word</td><td>model 2</td><td colspan=\"2\">0. 594 0. 551</td></tr><tr><td/><td>model 3</td><td colspan=\"2\">0.592 0. 572</td></tr><tr><td/><td>model 4</td><td>0.607</td><td>0.62</td></tr><tr><td/><td>model 5</td><td colspan=\"2\">0. 588 0. 582</td></tr><tr><td/><td>model 6</td><td colspan=\"2\">0.608 0. 622</td></tr><tr><td>Sentence</td><td>model 7 model 8</td><td colspan=\"2\">0.602 0.622 0.607 0.611</td></tr><tr><td/><td>model 9</td><td>0.609</td><td>-</td></tr><tr><td/><td colspan=\"3\">model 10 0.589 0.597</td></tr><tr><td/><td colspan=\"3\">model 11 0.602 0.604</td></tr><tr><td>NP</td><td>model 12</td><td>0.59</td><td>0.623</td></tr><tr><td>Markable</td><td colspan=\"3\">model 13 model 14 0.594 0.561 -0.577</td></tr><tr><td colspan=\"4\">Paragraph model 15 0.625 0. 616</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"text": "Bayes Factor analysis giving the ratio of probabilities that the underlying accuracy rates are within 1% of each other or not. According to the scale ofKass and Raftery (1995) presented in", |
|
"content": "<table/>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |
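
The Bayes Factor described in the TABREF6 caption, the ratio of evidence that two models' underlying accuracy rates lie within 1% of each other versus that they differ by more, can be approximated by Monte Carlo sampling. The sketch below is only an illustration under assumed independent Beta(1, 1) priors on the two accuracies, not the paper's exact procedure; the function name `bf_within_margin` and the 625/1000 vs. 616/1000 counts are hypothetical values chosen for the example.

```python
import numpy as np

def bf_within_margin(correct_a, n_a, correct_b, n_b,
                     margin=0.01, draws=1_000_000, seed=0):
    """Monte Carlo Bayes factor for H1: |p_a - p_b| < margin
    versus H2: |p_a - p_b| >= margin, with Beta(1, 1) priors."""
    rng = np.random.default_rng(seed)

    # Posterior over each system's true accuracy (Beta-Binomial conjugacy).
    post_a = rng.beta(1 + correct_a, 1 + n_a - correct_a, draws)
    post_b = rng.beta(1 + correct_b, 1 + n_b - correct_b, draws)
    p_h1_post = np.mean(np.abs(post_a - post_b) < margin)

    # Probability of the same event under the prior, needed to turn
    # posterior odds into a Bayes factor.
    prior_a = rng.beta(1.0, 1.0, draws)
    prior_b = rng.beta(1.0, 1.0, draws)
    p_h1_prior = np.mean(np.abs(prior_a - prior_b) < margin)

    posterior_odds = p_h1_post / (1.0 - p_h1_post)
    prior_odds = p_h1_prior / (1.0 - p_h1_prior)
    return posterior_odds / prior_odds

# Hypothetical example: two models correct on 625 and 616 of 1000 test items.
print(bf_within_margin(625, 1000, 616, 1000))
```

The resulting factor would then be read against the Kass and Raftery (1995) scale cited in the caption.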