{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:27:34.357767Z"
},
"title": "The CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis Resolution in Dialogue: A Cross-Team Analysis",
"authors": [
{
"first": "Shengjie",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human Language Technology Research Institute University of Texas at Dallas Richardson",
"location": {
"postCode": "75083-0688",
"region": "TX"
}
},
"email": ""
},
{
"first": "Hideo",
"middle": [],
"last": "Kobayashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human Language Technology Research Institute University of Texas at Dallas Richardson",
"location": {
"postCode": "75083-0688",
"region": "TX"
}
},
"email": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human Language Technology Research Institute University of Texas at Dallas Richardson",
"location": {
"postCode": "75083-0688",
"region": "TX"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The CODI-CRAC 2021 shared task is the first shared task that focuses exclusively on anaphora resolution in dialogue and provides three tracks, namely entity coreference resolution, bridging resolution, and discourse deixis resolution. We perform a cross-task analysis of the systems that participated in the shared task in each of these tracks.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The CODI-CRAC 2021 shared task is the first shared task that focuses exclusively on anaphora resolution in dialogue and provides three tracks, namely entity coreference resolution, bridging resolution, and discourse deixis resolution. We perform a cross-task analysis of the systems that participated in the shared task in each of these tracks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The CODI-CRAC 2021 shared task (Khosla et al., 2021) , which focuses on anaphora resolution in dialogue, provides three tracks, namely entity coreference resolution, bridging resolution, and discourse deixis/abstract anaphora resolution. Among these three tracks, bridging resolution and discourse deixis resolution are relatively under-studied problems. This is particularly so in the context of dialogue processing. This shared task is therefore of potential interest to researchers in the discourse and dialogue communities, particularly researchers in anaphora resolution who intend to work on problems beyond identity coreference.",
"cite_spans": [
{
"start": 31,
"end": 52,
"text": "(Khosla et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal in this paper is to perform a cross-team analysis of the systems participating in the three tracks of the shared task. Our analysis is partly quantitative, where we attempt to draw conclusions based on statistics computed using the outputs of the systems, and partly qualitative, where we discuss the strengths and weaknesses of the systems based on our manual inspection of these outputs. While several attempts have been made to perform an analysis of different coreference systems (e.g., Kummerfeld and Klein (2013) , Lu and Ng (2020) ), we note that conducting an insightful analysis of these systems is inherently challenging for at least two reasons. First, for entity coreference resolution and discourse deixis resolution, the latter of which is treated as a general case of event coreference, the system outputs on which we perform our analysis is in the form of clusters. Hence, we do not have information about which links were posited by a system and used to create a given cluster. This makes it impossible to pinpoint the mistakes (i.e., the erroneous linking decisions) made by a system and fundamentally limits our ability to explain the behavior of a system. Second, even if we could pinpoint the mistakes, existing models for anaphora resolution have become so complex that it is virtually impossible to explain why a particular mistake was made. For instance, a mention extraction component is so closely tied to a resolution model that it is not always possible to determine whether a mistake can be attributed to erroneous mention extraction or resolution. Worse still, since the participants have the freedom to partition the available training and development datasets in any way they want for model training and parameter tuning and are even allowed to exploit external training corpora, it makes it even harder to determine whether a system performs better because of a particular way of partitioning the data or because external training data are used.",
"cite_spans": [
{
"start": 500,
"end": 527,
"text": "Kummerfeld and Klein (2013)",
"ref_id": "BIBREF9"
},
{
"start": 530,
"end": 546,
"text": "Lu and Ng (2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is structured as follows. The next three sections describe our cross-team analysis for the three tracks, namely entity coreference (Section 2), bridging (Section 3), and discourse deixis (Section 4). We present our conclusions and observations in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we analyze the results of the four teams that participated in the anaphora resolution track and submitted a shared task paper, namely the team from Emory University (Xu and Choi, 2021) (henceforth Emory), the team from the University of Texas at Dallas (Kobayashi et al., 2021 ) (henceforth UTD), the team from Korea University (Kim et al., 2021 ) (henceforth KU), and the DFKI team (Anikina et al., 2021) ",
"cite_spans": [
{
"start": 182,
"end": 201,
"text": "(Xu and Choi, 2021)",
"ref_id": "BIBREF21"
},
{
"start": 270,
"end": 293,
"text": "(Kobayashi et al., 2021",
"ref_id": "BIBREF7"
},
{
"start": 345,
"end": 362,
"text": "(Kim et al., 2021",
"ref_id": "BIBREF6"
},
{
"start": 400,
"end": 422,
"text": "(Anikina et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Coreference Resolution",
"sec_num": "2"
},
{
"text": "Since mention extraction has a large impact on coreference resolution performance (Pradhan et al., 2011 (Pradhan et al., , 2012 , let us first consider the mention extraction performance of the participating systems. Table 1 presents the mention extraction results in the standard manner, expressing the results of each system on each corpus in terms of recall, precision, and F-score. Specifically, a mention is considered correctly detected if it has an exact match with a gold mention in terms of boundary. In terms of Fscore, Emory and UTD achieve comparable performance, and both of them outperform KU and DFKI. Except for DFKI, all the systems achieve an average mention extraction F-score of more than 85%. These mention extraction results are much better than those achieved by traditional coreference resolvers, and are consistent with Lu and Ng's (2020) observation that mention detection performance has improved significantly over the years, particularly after the introduction of span-based neural coreference models (Lee et al., 2017 (Lee et al., , 2018 . Considering the recall and precision numbers, we see that Emory and KU are recall-oriented, whereas UTD and DFKI are precision-oriented. Specifically, Emory and KU achieve the highest recall, whereas UTD achieves the highest precision.",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "(Pradhan et al., 2011",
"ref_id": "BIBREF17"
},
{
"start": 104,
"end": 127,
"text": "(Pradhan et al., , 2012",
"ref_id": "BIBREF16"
},
{
"start": 845,
"end": 863,
"text": "Lu and Ng's (2020)",
"ref_id": "BIBREF12"
},
{
"start": 1030,
"end": 1047,
"text": "(Lee et al., 2017",
"ref_id": "BIBREF10"
},
{
"start": 1048,
"end": 1067,
"text": "(Lee et al., , 2018",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "2.1"
},
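To make the scoring protocol above concrete, the following is a minimal sketch (not the shared task's official scorer) of exact-boundary mention extraction scoring; it assumes mentions are represented as (start, end) token spans, and all names are illustrative.

```python
def mention_extraction_prf(gold_mentions, predicted_mentions):
    """Exact-boundary mention extraction scoring.

    Both arguments are sets of (start, end) token spans; a predicted mention
    counts as correct only if its boundaries exactly match a gold mention's.
    """
    correct = len(gold_mentions & predicted_mentions)
    recall = correct / len(gold_mentions) if gold_mentions else 0.0
    precision = correct / len(predicted_mentions) if predicted_mentions else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return recall, precision, f_score
```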
{
"text": "To gain additional insights into the mention extraction results achieved by these systems, we present these results from a different point of view in Table 2 . We first divide the mentions into 10 groups, which are shown in Table 3 . As can be seen, the first nine groups focus on different kinds of pronouns, whereas the last group is composed of non-pronominal mentions. We note that the classification of the mentions in the test corpus into these 10 groups is not error-free: since we rely on part-of-speech tags and the surface forms to identify pronouns, words that appear in the corpus such as \"well\" (which corresponds to \"we'll\") and \"were\" (which corresponds to \"we're\") should belong to Group 1 but are being misclassified as non-pronominal.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 157,
"text": "Table 2",
"ref_id": null
},
{
"start": 224,
"end": 231,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "2.1"
},
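For reference, a minimal sketch of the kind of POS-plus-surface-form grouping described above is given below. The word lists are deliberately incomplete and purely illustrative (the exact lists used for the analysis are not reproduced here), which is also why forms such as "well" and "were" can end up misclassified.

```python
# Illustrative, incomplete surface-form lists; the analysis in the paper relies
# on part-of-speech tags plus surface forms, so the exact lists differ.
PRONOUN_GROUPS = {
    1: {"i", "me", "we", "us", "you"},                    # 1st and 2nd person
    2: {"he", "him", "she", "her"},                       # 3rd person gendered
    3: {"it", "they", "them"},                            # 3rd person ungendered
    4: {"my", "our", "your", "his", "its", "their"},      # possessive
    5: {"myself", "yourself", "ourselves", "themselves"}, # reflexive
    6: {"someone", "anyone", "something", "nothing"},     # indefinite
    7: {"this", "that", "these", "those"},                # demonstrative
    8: {"who", "whom", "which", "what"},                  # relative/interrogative
}

def mention_group(tokens, pos_tags):
    """Map a mention to one of the 10 groups (10 = non-pronominal noun phrase)."""
    if len(tokens) == 1:
        form = tokens[0].lower()
        for group_id, forms in PRONOUN_GROUPS.items():
            if form in forms:
                return group_id
        if pos_tags[0].startswith("PRP") or pos_tags[0] in {"WP", "WP$"}:
            return 9  # other pronouns
    return 10         # non-pronominal noun phrase
```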
{
"text": "Consider first Table 2a , where results are aggregated over the four datasets. The \"%\" and \"count\" columns show the percentage and number of gold mentions that belong to each group. \"none\" shows the fraction of mentions that are not detected by any of the four participating systems. E (Emory), T (UTD), K (KU), and D (DFKI) show the percentage of gold mentions successfully extracted by each of these systems. E-only, T-only, K-only, and D-only show the percentage of gold mentions that are successfully extracted by exactly one of the systems. For instance, E-only shows the fraction of mentions successfully extracted by the Emory resolver but not the other three.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 23,
"text": "Table 2a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "2.1"
},
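The coverage columns described above ("none", the per-system coverage, and the "x-only" columns) can be computed with simple set operations. The sketch below is illustrative only, with hypothetical variable names.

```python
def coverage_columns(gold_mentions, extracted_by):
    """Coverage columns in the style of Table 2a.

    gold_mentions: set of gold mentions (e.g., (start, end) spans).
    extracted_by: dict mapping a system label (e.g., "E", "T", "K", "D") to the
    set of mentions that system extracted.  All names are illustrative.
    """
    columns = {}
    found_by_any = set().union(*extracted_by.values())
    columns["none"] = len(gold_mentions - found_by_any) / len(gold_mentions)
    for label, extracted in extracted_by.items():
        hits = extracted & gold_mentions
        others = set().union(*(s for l, s in extracted_by.items() if l != label))
        columns[label] = len(hits) / len(gold_mentions)
        columns[label + "-only"] = len(hits - others) / len(gold_mentions)
    return columns
```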
{
"text": "A few points deserve mention. First, despite the fact that Group 10 (non-pronominal mentions) is the largest group, approximately half of the mentions are pronominal. The largest pronoun groups are Group 1 (1st and 2nd person pronouns (e.g., \"I\", \"we\")), Group 3 (3rd person ungendered pronouns (e.g., \"it\", \"they\")), Group 5 (reflexive pronouns (e.g., \"myself\", \"yourself\")), and Group 7 (demonstrative pronouns (e.g., \"this\", \"that\")). This should not be surprising given the prevalence of these pronouns in dialogue, but their prevalence suggests the importance of pronoun resolution in the shared task. Second, considering the \"only\" columns, we see that the percentage of mentions that are uniquely identified by one of the systems is relatively small. Third, Emory and KU extract more gold mentions than UTD and DFKI. As can be seen in the \"Overall\" row, Emory and KU manage to extract more than 90% of the gold mentions. These results are consistent with those shown in Table 1 . Perhaps the biggest difference among the systems lies in the extraction of non-pronominal mentions: Group 10 is the group for which Emory and KU clearly demonstrate superior extraction performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 977,
"end": 984,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "2.1"
},
{
"text": "While Table 2a focuses on gold mention extraction, Table 2b focuses on the extraction of erroneous mentions. Specifically, we take the union of the set of erroneous mentions extracted by the Table 2 : Entity coreference resolution: per-class mention extraction results. 1 1st and 2nd person pronouns 2 3rd person gendered pronouns 3 3rd person ungendered pronouns 4 Possessive pronouns 5 Reflexive pronouns 6 Indefinite pronouns 7 Demonstrative pronouns 8 Relative and interrogative pronouns 9 Other pronouns 10 Non-pronominal noun phrases four resolvers and compute statistics based on the resulting set, which we refer to as S. The columns in Table 2b can be interpreted in the same way as those in Table 2a . For instance, E, T, K, and D show the percentage of mentions in S extracted by each of the systems, and E-only, T-only, K-only, and D-only show the percentage of mentions in S extracted by exactly one of the systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 14,
"text": "Table 2a",
"ref_id": null
},
{
"start": 51,
"end": 59,
"text": "Table 2b",
"ref_id": null
},
{
"start": 191,
"end": 198,
"text": "Table 2",
"ref_id": null
},
{
"start": 270,
"end": 509,
"text": "1 1st and 2nd person pronouns 2 3rd person gendered pronouns 3 3rd person ungendered pronouns 4 Possessive pronouns 5 Reflexive pronouns 6 Indefinite pronouns 7 Demonstrative pronouns 8 Relative and interrogative pronouns 9",
"ref_id": "TABREF2"
},
{
"start": 661,
"end": 669,
"text": "Table 2b",
"ref_id": null
},
{
"start": 717,
"end": 725,
"text": "Table 2a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "2.1"
},
{
"text": "A few points deserve mention. First, approximately 80% of the erroneous mentions belong to Group 10. This should perhaps not be surprising given that the extraction of non-pronominal mentions, which are often composed of multiple tokens, is typically more challenging than that of pronouns. Note that a complication involved in the extraction process concerns the detection of nonreferring mentions: according to the shared task guidelines, any non-referring mention extracted will be considered erroneous. The second largest group is Group 3, whose mentions account for 6.2% of the number of erroneous mentions. This again should not be surprising. This group is composed of pronouns such as \"it\", many of which may not be referring because of its use as an expletive or a pleonastic pronoun. Second, considering the \"Overall\" row, we can see that UTD has the smallest coverage of erroneous mentions, which translates to a higher mention extraction precision., whereas KU has the highest coverage of erroneous mentions. While Table 2a shows that Emory and KU both achieve high mention extraction recall, Table 2b shows that KU does so at the expense of precision and that Emory is clearly superior to KU for mention extraction. While DFKI's coverage of gold mentions is the lowest among the four systems, its coverage of erroneous mentions is relatively high. Third, considering the groupspecific results, we can get a better idea of what makes a particular system better for mention extraction. UTD extracts fewer erroneous mentions than other teams except for Groups 4 and 7, both of which are relatively small. By contrast, KU extracts considerably more erroneous possessive pronouns (Group 4) than other teams, Emory extracts considerably more erroneous demonstrative pronouns (Group 7) than other teams, DFKI extracts more erroneous relative and interrogative pronouns (Group 8) than other teams, and both KU and DFKI extract considerably more erroneous indefinite pronouns (Group 6) and other pronouns (Group 9) than other teams. Finally, considering the \"only\" columns, we see that 10% of the erroneous mentions are only extracted by Emory, 18.8% by KU, and 20.7% by DFKI. This shows that the systems are quite different in terms of mention extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 1027,
"end": 1035,
"text": "Table 2a",
"ref_id": null
},
{
"start": 1105,
"end": 1113,
"text": "Table 2b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Group Description",
"sec_num": null
},
{
"text": "Next, we consider the coreference results. Table 4 shows the official results obtained using the official scorer. These results are expressed in terms of MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998) , and CEAF e (Luo, 2005) recall (R), precision (P), and F-score (F), as well as the CoNLL score, which is the unweighted average of the MUC, B 3 , and CEAF e F-scores. The four participating systems show a clear difference in performance in terms of CoNLL F-score: Emory performs the best, UTD and KU rank second and third respectively, and DFKI achieves the lowest performance. The performance difference between Emory and UTD is smaller compared to that between any other pair of systems: UTD underperforms Emory roughly by 0.7-6.6% in the CoNLL score. This could be explained in part by the fact that both systems were built upon coref-hoi, which is Xu and Choi's (2020) entity coreference system. Nevertheless, as we will see below, the two systems behave quite differently.",
"cite_spans": [
{
"start": 158,
"end": 179,
"text": "(Vilain et al., 1995)",
"ref_id": "BIBREF19"
},
{
"start": 186,
"end": 211,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 225,
"end": 236,
"text": "(Luo, 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
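For concreteness, the CoNLL score referred to above is just the unweighted mean of the three metric F-scores; the snippet below is a trivial illustration, not the official scorer.

```python
def conll_score(muc_f, b3_f, ceafe_f):
    """CoNLL score: the unweighted average of the MUC, B^3, and CEAF_e F-scores."""
    return (muc_f + b3_f + ceafe_f) / 3.0

# Example with made-up numbers: conll_score(80.0, 72.0, 68.0) -> 73.33...
```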
{
"text": "Next, we consider the performance of these systems w.r.t. each scorer. As a link-based metric, MUC focuses on link identification and does not reward successful identification of singleton clusters. Hence, by looking at the MUC recall, we can get a better idea of how well a resolver does in terms of link identification. As can be seen, Emory and UTD are substantially better than KU and DFKI in terms of link identification. To gain a better understanding of the extent to which the singleton clusters and the non-singleton clusters contribute to the overall performance of the systems, we show the corresponding results in Table 5 . Specifically, in the \"F\" columns we show the CoNLL score. The \"ns-F\" columns show the CoNLL scores obtained by removing the singleton clusters from the output prior to scoring, meaning that the scorers are applied to score only the non-singleton clusters. Similarly, the \"s-F\" columns show the CoNLL scores obtained by removing the non-singleton clusters from the output prior to scoring, effectively allow- ing the scorers to score only the singleton clusters. As we can see from the results in Table 5 , Emory and UTD achieve comparable performance w.r.t. both non-singleton and singleton cluster identification, except on the AMI dataset where Emory clearly demonstrates its superior performance w.r.t. both tasks. In addition, while Emory and UTD are generally better than KU w.r.t. both tasks, the difference stems more from non-singleton cluster identification than singleton cluster identification. Comparing KU and DFKI, we see that KU is better than DFKI on both tasks on all but the LIGHT dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 626,
"end": 633,
"text": "Table 5",
"ref_id": "TABREF6"
},
{
"start": 1132,
"end": 1139,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
{
"text": "Next, we measure system performance, specifically link identification performance, at the pairwise level. For each non-singleton coreference cluster in the gold output, we extract every pair of mentions in the cluster, and denote the set of pairs extracted from all non-singleton clusters as G p . We similarly extract all the pairs from the non-singleton clusters produced by each of the four systems, and denote the resulting sets as E p , T p , K p and D p . The recall, precision, and F-scores in Table 6 are computed based on the pairwise links in these sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 501,
"end": 508,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
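A minimal sketch of the pairwise evaluation described above is given below: mention pairs are enumerated from non-singleton clusters and scored by set overlap. The function and variable names are illustrative; this is not the code used for the reported numbers.

```python
from itertools import combinations

def links_from_clusters(clusters):
    """Collect every unordered mention pair from each non-singleton cluster."""
    links = set()
    for cluster in clusters:
        if len(cluster) > 1:
            links.update(frozenset(pair) for pair in combinations(cluster, 2))
    return links

def pairwise_link_prf(gold_clusters, system_clusters):
    """Pairwise link recall/precision/F-score (cf. G_p vs. a system's pair set)."""
    gold_links = links_from_clusters(gold_clusters)
    system_links = links_from_clusters(system_clusters)
    correct = len(gold_links & system_links)
    recall = correct / len(gold_links) if gold_links else 0.0
    precision = correct / len(system_links) if system_links else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return recall, precision, f_score
```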
{
"text": "From the \"Overall\" row in Table 6 , we can see that approximately 120K pairs can be extracted from the gold clusters of the four test datasets. As we can see, except for KU, all systems have higher recall than precision. In particular, UTD has the highest recall but the lowest precision. Comparing Emory and UTD, we see that while the two systems achieve comparable recall, UTD's precision is much lower, which in turn results in a lower Fscore. Though precision-oriented, KU's precision is not as high as Emory's, which has the highest precision among the systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
{
"text": "To gain a better understanding of how these systems perform w.r.t. identifying difficult vs. easy links, we divide the pairs into three groups based on an intuitive notion of hardness. Group A is composed of pairs where the two mentions are lexically identical. A pair appears in Group B if (1) both mentions in the pair are pronouns or (2) both mentions are non-pronominal and have a content word overlap. Finally, a pair appears in Group C if (1) the anaphor is pronominal but the antecedent is not or (2) the two mentions have no content word overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
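The three hardness groups can be assigned with a simple rule following the definitions above; the sketch below is illustrative, with hypothetical field names for the mention representation.

```python
def link_hardness_group(anaphor, antecedent):
    """Assign a coreferent mention pair to hardness group A, B, or C.

    Each mention is a dict with illustrative fields: 'text' (surface string),
    'is_pronoun' (bool), and 'content_words' (set of lower-cased content words).
    """
    if anaphor["text"].lower() == antecedent["text"].lower():
        return "A"  # the two mentions are lexically identical
    overlap = bool(anaphor["content_words"] & antecedent["content_words"])
    if (anaphor["is_pronoun"] and antecedent["is_pronoun"]) or \
       (not anaphor["is_pronoun"] and not antecedent["is_pronoun"] and overlap):
        return "B"  # both pronouns, or both nominal with a content-word overlap
    return "C"      # pronominal anaphor with nominal antecedent, or no overlap
```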
{
"text": "Results are shown in the rows labeled A, B, and C. The easiest links (Group A) account for nearly half of all pairs while the hardest links (Group C) have a much lower representation, accounting for only 15% of all pairs. Emory achieves the best results in all three groups, indicating its robustness in identifying both easy and hard links. UTD ranks second in Groups A and B, and the performance gap between Emory and UTD widens as hardness increases. DFKI ranks third in Groups A and B, and largely fails to identify the links in Group C. Finally, while KU does poorly for Groups A and B, it performs slightly better than UTD w.r.t. Group C. We speculate that KU chooses to resolve only those pairs it is most confident about regardless of hardness, yielding a low recall but a higher precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
{
"text": "To understand how well each system does in resolving the anaphors in each of the 10 groups we defined earlier, we show the per-group results in the rows labeled 1 through 10. As can be seen, the links involving 1st or 2nd pronouns as anaphors (Group 1) form the largest group, accounting for 70% of all links. This is followed by links involving non-pronominal mentions (Group 10) and possessive pronouns (Group 4), both of which account for slightly more than 10% of all links. Emory achieves the best performance on these three largest groups. Comparing Emory and UTD, we can see that the two systems are indeed different: Emory achieves a higher precision than UTD on all 10 groups, and its precision and recall are both higher than UTD's on Groups 6, 7, and 10. Comparing KU and DFKI, we see that DFKI outperforms KU in resolving 1st and 2nd pronouns (Group 1), possessive pronouns (Group 4) and reflexive pronouns (Group 5). Though achieving the lowest overall performance, KU outperforms UTD in resolving 3rd person ungendered pronouns (Group 3), indefinite pronouns (Group 6), demonstrative pronouns (Group 7), relative and interrogative pronouns (Group 8), and other pronouns (Group 9). The remaining rows of the table show the results when each of the ten groups is further subdivided into three groups based on the three levels of hardness. Space limitations preclude a discussion of these results, however. Table 7a shows each system's coverage of gold coreferent pairs. The rows can be interpreted in the same way as those in Table 6 , whereas the columns can be interpreted in the same way as those in Table 2. As a quick reminder, \"none\" shows the percentage of gold pairs that are not extracted by any of the four systems; \"E\", \"T\", \"K\", and \"D\" show the percentage of gold pairs extracted by each of the four systems; and the \"x-only\" columns show the percentage of gold pairs that are extracted only by system x.",
"cite_spans": [],
"ref_spans": [
{
"start": 1418,
"end": 1426,
"text": "Table 7a",
"ref_id": "TABREF10"
},
{
"start": 1538,
"end": 1545,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
{
"text": "A few points deserve mention. First, consider the \"none\" results. As can be seen, only 10.5% of the links are not recovered by any of the four systems. Taking into account link hardness, we see that 31.9% of the hardest links (Group C) are not extracted while only 4.9% of the easiest links (Group A) are not extracted. These results provide suggestive evidence that our intuition notion of link hardness is consistent with what a resolver would perceive as hard. Considering the per-group results (Group i, where 1 \u2264 i \u2264 10), approximately 65% of the links involving indefinite pronouns (Group 6) and relative and interrogative pronouns (Group 8), 47% of the links involving demonstrative pronouns (Group 7), and 29% of the links involving 3rd person ungendered pronouns (Group 3) and nonpronominal mentions (Group 10) are missed by all four systems. These are traditionally the harder groups of anaphors to resolve. Second, consider the \"x-only\" results. While the \"Overall\" results suggest that Emory and UTD may not very different from each other in terms of the links they recover, the \"x-only\" results suggest otherwise. Specifically, 18% of the hardest links (Group C) and 16.6% of the non-pronominal links (Group 10) are uniquely identified by Emory, whereas 15.1% of the links involving 3rd person gendered pronouns (Group 2) are uniquely identified by UTD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
{
"text": "While Table 7a focuses on the extraction of gold coreferent pairs, Table 7b focuses on pairs that are erroneously posited as coreferent. Specifically, we take the union of the set of pairs that are erroneously posited as coreferent by the four resolvers and compute statistics based on the resulting set, which we refer to as S. The columns in Table 7b can be interpreted in the same way as those in Table 2 . For instance, E, T, K, and D show the percentage of pairs in S extracted by each of the systems, and the \"x-only\" columns show the percentage of pairs in S that are extracted only by system x.",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 14,
"text": "Table 7a",
"ref_id": "TABREF10"
},
{
"start": 67,
"end": 75,
"text": "Table 7b",
"ref_id": "TABREF10"
},
{
"start": 344,
"end": 352,
"text": "Table 7b",
"ref_id": "TABREF10"
},
{
"start": 400,
"end": 407,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
{
"text": "As can be seen in Table 7b , approximately 19.4K erroneous links are established by the four systems, of which 60.2% are established by UTD, 37.7% by DFKI, 32% by Emory, and 13.8% by KU; in addition, 32.4% of these erroneous links are only established by UTD and 20.4% are only established by DFKI. Combining the results in Tables 7a and 7b, we see that UTD is the most aggressive among the four systems in link identification: it has the highest recall but the lowest precision, which is consistent with the results in Table 6 , and a large percentage of erroneous links are only established by UTD. In fact, a closer examination of the results reveals that the percentage of erroneous links established by UTD is higher than that by any other system w.r.t. each of the ten groups and each of the three hardness groups. Furthermore, recall from Table 7a that Emory manages to correctly extract many hard links (Group C) and non-pronominal links (Group 10), but from Table 7b we can see that this success comes at the expense of extracting a fairly large number of erroneous links in these groups. Finally, the \"only\" columns show that the errors made by the four systems are quite different from each other.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Table 7b",
"ref_id": "TABREF10"
},
{
"start": 520,
"end": 527,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 846,
"end": 854,
"text": "Table 7a",
"ref_id": "TABREF10"
},
{
"start": 967,
"end": 975,
"text": "Table 7b",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "2.2"
},
{
"text": "In this subsection, we manually analyze the pairwise links that are correctly and incorrectly established by the participating systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "We begin by examining the coreference links that are not identified by any system. The mention pairs that are most frequently missed are: (\"I\", \"I\"), (\"I\", \"you\"), (\"you\", \"I\"), and (\"it\", it\"), where the first mention in each pair is the anaphor and the second mention is its antecedent. It should perhaps not be surprising that these pairs all involve links between pronouns given their prevalence in spoken dialogues. More than 3000 instances of these four mention pairs are not extracted by any of the participating systems. The other major types of missing links involve demonstrative pronouns, wh-pronouns, and one-anaphora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "As for missing links that involve non-pronominal expressions, some appear to be easy to identify as the two mentions involved are synonyms, such as (\"the super market\", \"the grocery store\"), (\"the school\", \"the college\"), and (\"kids\", \"children\"). Since we do not examine the context in which they appear or consider how far apart they are from each other, we cannot conclude why these seemingly simple cases are being missed by all systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "Some missing links are difficult to identify because the two mentions involved appear to have different semantic types. Examples include: (\"your destination\", \"the king of the goolehops\"), (\"your donation\", \"the organization\"), (\"this very reputable charity\", \"you\"), (\"this school your son goes to\", \"private\"), (\"the battery\", \"the standard\"), (\"sixty-six\", \"the street\"), and (\"Michael\", \"a\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "There are links that are missed because one or both of the mentions involved are simply not extracted. Examples include: (\"your husband\", \"a wonderful man, you know, who treats me very, you know, with a, with as\") and (\"your grandfathers of past\", \"the kings of old: my great-grandfather kind leonidas, the pious baylor the blessed, and maegor the cruel\"). Note that the antecedents in these examples are very long and certainly pose challenges to a mention detection system. Next, we take a closer look at the top-performing system, the Emory system, in an attempt to understand why it works better than the other systems. First, there are more than 1700 links in the AMI test corpus that involve \"well\" (i.e., \"we'll\") and \"were\" (i.e., \"we're\") and are correctly identified by Emory but not the other systems. In fact, the large discrepancy in resolution performance between Emory and UTD on AMI can primarily be attributed to UTD's failure to even extract \"well\" and \"were\" as mentions (probably because of the missing apostrophe). Second, Emory appears to be better than the other systems at exploiting context to determine when two mentions are coreferent. Consider a lexical pair that appears frequently in the data such as (\"I\", \"I\"). While many instances of it are coreferent, there are also many instances that are not coreferent. The coreferent and noncoreferent instances can only be distinguished by factors such as distance and the surrounding context. While Emory and UTD achieve similar recall numbers, Emory achieves higher precision scores because of its ability to better exploit context to distinguish the coreferent and non-coreferent cases of a frequently-occurring lexical pair than UTD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "We also examine the erroneous links established by Emory. A major type of error involves links between nouns that are synonymous or semantically similar. Examples include (\"contrast\", \"brightness\") and (\"the cash\", \"budget\"). Another major type of error involves the frequently occurring lexical pairs discussed in the previous paragraph: while Emory is comparatively better than the other systems in exploiting context to distinguish the coreferent and non-coreferent instances, it is still far from being perfect in doing so. For instance, while it correctly identifies more than 1700 links involving \"well\" and \"were\", it incorrectly identifies more than 1500 links involving these two pronouns. Determining how to effectively exploit context to distinguish the coreferent instances from their noncoreferent counterparts is by no means trivial, but it is a problem that must be addressed in order to bring entity coreference resolvers to the next level of performance. Table 8 shows examples of the most frequent gold coreferent pairs in the four test sets as well as the predictions made by the four systems on these pairs. Specifically, the first block shows the results of the five most frequent coreferent pairs. Perhaps not surprisingly, each pair involves two pronouns. The \"count\" column shows the number of times the two pronouns are coreferent in the test data. In addition, we show the number of times each system correctly predicts each pair as coreferent as well as the number of times each system incorrectly predicts each pair as coreferent. As can be seen, Emory and UTD establish a lot more correct links but also a lot more erroneous links than KU and DFKI. Moreover, the number of wrong links posited by Emory is considerably smaller than that by UTD. These results provide empirical support for our earlier claim that determining how to effectively exploit context to distinguish the coreferent instances of a frequent pair from their non-coreferent counterparts is a challenging but important problem for coreference researchers.",
"cite_spans": [],
"ref_spans": [
{
"start": 972,
"end": 979,
"text": "Table 8",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "The second block shows results involving pairs in which one is a pronominal mention and the other is a non-pronominal mention, whereas the last block shows results involving pairs in which both mentions are non-pronominal. The first two blocks of results are similar in terms of the ob- Most frequent pairs i i 34534 31110 13677 32175 17729 8120 5420 29747 6515 i you 8500 7053 3484 6866 7438 2697 2120 2607 7056 we we 8148 5957 8829 8049 12885 1412 264 7858 12793 you i 7472 6155 3778 6113 7676 2642 2131 3402 10122 you you 4713 3996 1647 3962 4241 2304 1557 4148 servations we can draw. The last block of results indicate how challenging it is to correctly establish links between two non-pronominal mentions.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 618,
"text": "Most frequent pairs i i 34534 31110 13677 32175 17729 8120 5420 29747 6515 i you 8500 7053 3484 6866 7438 2697 2120 2607 7056 we we 8148 5957 8829 8049 12885 1412 264 7858 12793 you i 7472 6155 3778 6113 7676 2642 2131 3402 10122 you you 4713 3996 1647 3962 4241 2304 1557 4148",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "2.3"
},
{
"text": "The shared task divides the evaluation of bridging resolution into two phases: (1) the Predicted phase, where a system needs to first identify all of the entity mentions that likely correspond to anaphors and antecedents, then perform bridging resolution on the predicted mentions; and (2) the Gold phase, which is essentially the same as the Predicted phase except that bridging resolution is performed on the given gold mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bridging Resolution",
"sec_num": "3"
},
{
"text": "In this section, we analyze the performance of the teams that participated in the bridging resolution track. The UTD team (Kobayashi et al., 2021) and the KU team (Kim et al., 2021) participated in both phases, whereas the INRIA team (Renner et al., 2021) only participated in the Gold phase. In other words, two teams participated in the Predicted phase, and three teams participated in the Gold phase. We will use their team name to refer to the bridging resolution systems they developed. To make it clear which phase a system was developed for, we will augment the team name with a superscript that encodes the phase. For instance, we will use UTD P and UTD G to refer to the systems the UTD team developed for the Predicted phase and the Gold phase respectively.",
"cite_spans": [
{
"start": 122,
"end": 146,
"text": "(Kobayashi et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 163,
"end": 181,
"text": "(Kim et al., 2021)",
"ref_id": "BIBREF6"
},
{
"start": 234,
"end": 255,
"text": "(Renner et al., 2021)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bridging Resolution",
"sec_num": "3"
},
{
"text": "The \"Recognition\" rows in Table 9 show the official anaphor extraction results on each of the four test sets, where results are expressed in terms of recall (R), precision (P), and F-score. An anaphor is considered correctly detected if it has an exact match with a gold bridging anaphor in terms of boundary. Comparing the two systems developed for the Predicted phase, we see that KU P beats UTD P on three datasets, LIGHT, AMI, and Switchboard. Perhaps impressively, on these three datasets, KU P outperforms UTD P in terms of both precision and recall, showing its firm superiority over UTD P . Note that KU P is a pipelined system where anaphor extraction is performed as an explicit step prior to resolution, whereas UTD P is a span-based system where the spans corresponding to anaphors are jointly learned as part of the resolution process. These results seem to suggest that better results can be achieved if one designs a model specifically for anaphor extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 9",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "Anaphor Extraction",
"sec_num": "3.1"
},
{
"text": "Among the three Gold systems, UTD G outperforms KU G on all datasets, and KU G in turn outperforms INRIA G on all datasets. One difference between UTD G and UTD P is that the former has an explicit anaphor extraction component whereas the latter does not. The better anaphor extraction results achieved by UTD G in comparison to KU G could be therefore be attributed to the introduction of this anaphor extraction component, providing further empirical support for our earlier hypoth- esis that better anaphor extraction results could be achieved via an anaphor extraction component. While one would generally expect to see better anaphor extraction performance in the Gold phase than in the Predicted phase, it is interesting to see that this is not necessarily the case for KU. Specifically, except on Persuasion where KU G achieves considerably better performance than KU P , the two systems achieve similar F-scores on the remaining three datasets. While their F-scores are similar, the recall and precision scores are not: KU P has dramatically higher recall and substantially lower precision than KU G . INRIA P is roughly on par with the other systems in terms of precision, but its recall is much lower than the other systems: the best recall it achieves on any of the datasets is only 24.9%. This level of extraction performance will likely limit its resolution performance severely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anaphor Extraction",
"sec_num": "3.1"
},
{
"text": "Tables 10 and 11 show each system's coverage of correct and wrong anaphors in the Predicted phase and the Gold phase respectively. The rows and columns in these tables can be interpreted in the same way as those in Table 2 . As a quick reminder, \"none\" shows the percentage of gold anaphors that are not extracted by any of the systems, and the \"x-only\" columns show the percentage of correct/wrong anaphors extracted by system x and not any of the other systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Anaphor Extraction",
"sec_num": "3.1"
},
{
"text": "Consider first the left half of Table 10 , which shows each Predicted system's coverage of gold anaphors. As we can see, 40.4% of the gold anaphors are not extracted by any of the two systems. More specifically, 38.4% of the nonpronominal anaphors (the largest group), 50.7% of the 1st+2nd person pronouns (one of the second largest groups) and 67.6% of the 3rd person ungendered pronouns (the other second largest group) are not extracted by any of them. These results suggest that anaphor extraction in the Predicted phase is rather challenging. In terms of the coverage of gold anaphors, the \"x-only\" columns show that the two Predicted systems are quite different: while many of the anaphors extracted by KU P are not extracted by UTD P , there are also a number of anaphors that are extracted by UTD P but not by KU P . Table 10 shows each Predicted system's coverage of mentions that are erroneously extracted as anaphors. As can be seen, KU P extracts nearly half of the erroneously extracted anaphors, whereas the corresponding percentage for UTD P is slightly lower (38.9%). The number of mistakes uniquely made by KU P is larger than that by UTD P on all but Groups 4 (possessive pronouns) and 7 (demonstrative pronouns). These results suggest that the two systems are very different from each other in terms of anaphor extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Table 10",
"ref_id": "TABREF17"
},
{
"start": 825,
"end": 833,
"text": "Table 10",
"ref_id": "TABREF17"
}
],
"eq_spans": [],
"section": "Anaphor Extraction",
"sec_num": "3.1"
},
{
"text": "Next, consider the three Gold systems in Table 11. Somewhat surprisingly, when the gold mentions are provided, the percentage of anaphors that cannot be extracted by any of the three systems increases for all but two groups (Group 10 (non-pronominal mentions) and Groups 6 (indefinite pronouns)) in comparison to the corresponding percentages in the Predicted phase. The remaining columns provide suggestive evidence that the three systems are quite different from each other in terms of anaphor extraction. Specifically, among the gold anaphors, 20.2% are extracted only by UTD G , 12.3% only by KU G , and 3.5% only by INRIA G , and among the erroneously extracted anaphors, 37.5% are only extracted by UTD G , 26.4% only by UTD G , and 12.3% only by INRIA G .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The right half of",
"sec_num": null
},
{
"text": "The \"Resolution\" rows in Table 9 show the official resolution results on each of the four test sets, where results are expressed in terms of recall (R), precision (P), and F-score at the entity level. In other words, an anaphor is considered correctly resolved if it is resolved to its antecedent or any preceding mention that is coreferent with its antecedent. Comparing the two Predicted systems, we see that UTD P beats KU P on all datasets in terms of recall, precision, and F-score. Given that UTD P 's anaphor extraction performance is poorer than that of KU P , its superior resolution results can be attributed solely to better resolution and not better anaphor extraction. We speculate that KU P 's poorer resolution performance can be attributed to its attempt to establish links, many of which are wrong, more aggressively than UTD P . Among the three Gold systems, UTD G outperforms KU G on all datasets in terms of F-score, and with the exception of its precision on Switchboard, it outperforms KU G on both recall and precision on all datasets. It is worth noting that both Gold systems outperform their Predicted counterparts on all datasets, which is consistent with our expectation that the Gold setting is easier than the Predicted setting. INRIA G 's performance is much lower than the other two systems in terms of both recall and precision, but primarily because of recall. We believe that its low recall can be attributed in part to its fairly poor anaphor extraction performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 9",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "3.2"
},
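A minimal illustration of the entity-level scoring rule described above is given below; the mention representation and the gold-entity lookup are assumptions made for the sketch, not the official scorer's interface.

```python
def bridging_link_correct(anaphor, predicted_antecedent, gold_antecedent, gold_entity_id):
    """Entity-level scoring of a single bridging link.

    A predicted antecedent is accepted if it precedes the anaphor and belongs
    to the same gold coreference entity as the gold antecedent.  Mentions are
    (start, end) spans; gold_entity_id maps a mention to its gold entity id.
    """
    precedes_anaphor = predicted_antecedent[1] <= anaphor[0]
    same_entity = gold_entity_id(predicted_antecedent) == gold_entity_id(gold_antecedent)
    return precedes_anaphor and same_entity
```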
{
"text": "To gain a better understanding of how these systems perform w.r.t. identifying difficult vs. easy links, we divide the bridging pairs based on how hard we believe it is to resolve them. Specifically, we partition the bridging pairs into five groups: same string (the two mentions are the same string), same head (the two mentions have the same head), same head lemma (the two mentions have the same lemma head), word overlap (the two mentions have at least one content word overlap), and other (pairs that have no lexical overlap). Two points deserve mention. First, the groups are listed in ascending order of resolution difficulty: intuitively, a pair of mentions having the same head lemma is easier to resolve than one that does not have any lexical overlap, for instance. Second, if a pair belongs to a group (e.g., same head lemma), then it should also belong to all subsequent groups (i.e., word overlap and other), but since we are partitioning the pairs, we will assign a pair to only the first group that is applicable to it. Tables 12a and 13a show the results for the Predicted systems and the Gold systems respectively. The rows correspond to the five groups. The columns can be interpreted in the same way as those in Table 7a . \"all\" expresses the size of a group in terms of the number of pairs it covers and the corresponding percentage. \"none\" shows the number and percentage of pairs that are not resolved by any of the systems. The \"T\", \"K\", and \"I\" columns show the number and percentage of pairs correctly resolved by the individual systems, whereas the \"x-only\" columns show the number and percentage of pairs that can be resolved by system x and not any of the other systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 1036,
"end": 1054,
"text": "Tables 12a and 13a",
"ref_id": "TABREF2"
},
{
"start": 1232,
"end": 1240,
"text": "Table 7a",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "3.2"
},
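Because the five groups partition the pairs, a pair is assigned to the first applicable category; a minimal sketch (with hypothetical mention fields) is given below.

```python
def bridging_pair_category(anaphor, antecedent):
    """Assign a bridging pair to the first applicable difficulty category.

    Mentions are dicts with illustrative fields: 'text', 'head', 'head_lemma',
    and 'content_words' (a set of lower-cased content words).
    """
    if anaphor["text"].lower() == antecedent["text"].lower():
        return "same string"
    if anaphor["head"].lower() == antecedent["head"].lower():
        return "same head"
    if anaphor["head_lemma"] == antecedent["head_lemma"]:
        return "same head lemma"
    if anaphor["content_words"] & antecedent["content_words"]:
        return "word overlap"
    return "other"
```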
{
"text": "We begin by noting that the percentage of anaphors that belong to the \"same string\" group is much smaller in bridging than in coreference. This is understandable: this group contains pairs in which the mentions are lexically identical. While many coreferent mentions are lexically identical, relatively few bridging pairs are composed of lex-ically identical mentions. In addition, approximately only 30% of the links (i.e., those that connect the pairs in the first four groups) can be established using string-matching facilities. This makes bridging resolution more challenging than entity coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "3.2"
},
{
"text": "Next, consider the Predicted systems. As we can see in the \"overall\" row, the overall resolution performance of KU P is worse than that of UTD P . However, this by no means implies that UTD P is better than KU P in all categories. Generally, UTD P performs much better than KU P on the easier categories, and as we move down the table, the performance gap between the two systems continues to shrink, to the point that KU P starts to outperform UTD P on \"other\", the most difficult category. Overall, these results suggest that while UTD P is better than KU P at resolving the easierto-resolve pairs, the reverse is true when it comes to resolving the difficult-to-resolve pairs. Considering the results in the \"x-only\" columns, we see that the links recalled by the two systems are largely different from each other. Finally, considering the results in the \"none\" column, we note that 77.4% of the links are not recalled by any of the two systems. Even for simpler categories such as \"same head\" and \"same head lemma\", around 60% of the pairs are not recalled. These results provide suggestive evidence that the Predicted setting is indeed very challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "3.2"
},
{
"text": "Consider the Gold systems. A few points deserve mention. First, the performance differences that we have observed above between UTD P and KU P are also applicable to UTD G and KU G , except that UTD G outperforms KU G on all categories. In other words, UTD G manages to do better than KU G on the difficult categories. Second, INRIA G underperforms UTD G and KU G on all but the easiest group, \"same string\". Third, considering the results in the \"x-only\" columns, we see that the links recalled by UTD G and KU G are quite different, but the links recalled by INRIA G are for the most part similar to those recalled by the other two systems. Finally, considering the results in the \"none\" column, we see that 70.2% of the links are not recalled by any of the three systems, which is smaller than the corresponding percentage in the Predicted setting. In fact, the percentage associated with nearly every group in the Gold phase is smaller than the corresponding percentage in the Predicted phase. This again provides suggestive evidence that the Gold setting is indeed less challenging than the Predicted setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "3.2"
},
{
"text": "While Tables 12a and 13a focus on the extraction of gold pairs, Tables 12b and 13b focus on the extraction of wrong pairs (i.e., wrong links). Specifically, in Table 12b , we take the union of the set of wrong pairs extracted by the two Predicted systems and compute statistics based on the resulting set, which we refer to as S. The columns in Table 12b can be interpreted in the same way as those in Table 12a. Table 13b is essentially the same as Table 12b except that it shows the results of the three Gold systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 82,
"text": "Tables 12a and 13a focus on the extraction of gold pairs, Tables 12b and 13b",
"ref_id": "TABREF2"
},
{
"start": 160,
"end": 169,
"text": "Table 12b",
"ref_id": "TABREF20"
},
{
"start": 345,
"end": 354,
"text": "Table 12b",
"ref_id": "TABREF20"
},
{
"start": 402,
"end": 422,
"text": "Table 12a. Table 13b",
"ref_id": "TABREF2"
},
{
"start": 450,
"end": 459,
"text": "Table 12b",
"ref_id": "TABREF20"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "3.2"
},
{
"text": "Consider first the Predicted systems (Table 12b ). As can be seen, 6604 erroneous links are established by the two systems, of which 44.8% are established by UTD P and 57.6% by KU P ; in addition, 42.4% of these erroneous links are only established by UTD G and 55.2% are only established by UTD P . Comparing Tables 12a and 12b, we see that each system identifies a lot more erroneous links than correct links. Moreover, while Table 12a shows that UTD P performs better than KU P on the easy categories and worse than it on the harder categories, Table 12b shows the reverse trend. Specifically, UTD P extracts more erroneous pairs that belong to the easy categories than KU P , whereas KU P extracts more erroneous pairs that belong to the harder categories than UTD P . Finally, considering the \"x-only\" columns, we see that there is very little overlap in the erroneous links predicted by the two systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 47,
"text": "(Table 12b",
"ref_id": "TABREF20"
},
{
"start": 428,
"end": 437,
"text": "Table 12a",
"ref_id": "TABREF20"
},
{
"start": 548,
"end": 557,
"text": "Table 12b",
"ref_id": "TABREF20"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "3.2"
},
{
"text": "Next, consider the Gold systems (Table 13b ). We can see that 5400 erroneous links are established by the three systems, which is smaller than the corresponding number in Table 12b . This again suggests that the provision of gold mentions has made the task somewhat easier. Of these 5400 erroneous links, 49.5% are established by UTD G , 34.9% by KU G , and 23.0% by INRIA G ; in addition, 43.2% of these erroneous links are only established by UTD G , 30.2% are only established by UTD G , and 19.6% are only established by INRIA G . Some of the observations we made on the Predicted results in instance, UTD G extracts a lot more erroneous pairs that belong to the easy categories than KU P . What is different is that UTD G extracts more erroneous pairs than KU G on the hard categories as well, even though the difference in the number of erroneous pairs they extract is smaller as the hardness level increases. INRIA G generally extracts fewer erroneous pairs than the other two systems, but it extracts more erroneous pairs in the \"word overlap\" group than the other systems. Finally, considering the \"x-only\" columns, we see that the three systems are quite different from each other in terms of their prediction of erroneous pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 42,
"text": "(Table 13b",
"ref_id": "TABREF2"
},
{
"start": 171,
"end": 180,
"text": "Table 12b",
"ref_id": "TABREF20"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "3.2"
},
{
"text": "In this subsection, we manually analyze the pairwise links that are correctly and incorrectly established by the participating systems. A bridging relation in this data typically involves two mentions where one is a specific instance of a generic concept referred to by the other via the use of semantic relations such as set-subset, part-whole, and is-a. There are no noticeable differences between the Predicted systems and the Gold systems in terms of the kind of semantic relations they extract. Specifically, both the Predicted systems and the Gold systems are able to extract a variety of semantic relations as bridging relations, such as is-a (e.g., (\"an eagle\", \"bird\"), (\"a light blue\", \"standard color\")), set-subset (e.g., (\"I\", \"we\"), (\"charities\", \"Save the Children\")) and part-whole (e.g., (\"the engine\", \"the car\"), (\"New Orleans\", \"the United States\")). In addition, both are able to extract less well-defined relations in which one mention is simply associated with or is a specific instance of the other (e.g., (\"one chip\", \"two\"), (\"a child\", \"kid\"), (\"people\", \"customers\"), (\"it\", \"a new one, the phone\"), (\"the place\", \"a restaurant\"), (\"some\", \"animals\"), (\"That\", \"a special flower to show you\"), (\"dresses\", \"Levi's), (\"people who have killed police officers\", \"murderers\")). Additional examples that are successfully recalled by the systems are shown in Table 14 . Specifically, the \"Recalled\" column shows successfully recalled pairs that are instances of different semantic relations or categories as well as the system(s) that identified these pairs. As discussed before, while some relations can be identified via string-matching facilities (e.g., (\"the function\", \"a desired function\"), (\"two batteries\", \"battery\")), the majority of them cannot.",
"cite_spans": [],
"ref_spans": [
{
"start": 1381,
"end": 1389,
"text": "Table 14",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "In addition, we do not observe any noticeable differences between the systems submitted by different teams in terms of the kinds of semantic relations they extract. As discussed before, the UTD systems achieve better resolution results than the KU systems because (1) the UTD systems are more conservative in positing bridging links between two mentions than the KU systems, and (2) the UTD systems focus more on the relations that can be identified via string-matching facilities, which presumably are easier to identify.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "Examining the links that are missed by all of the systems, we do not find any noticeable differences between the kinds of semantic relations that exist in the correctly extracted pairs and those that exist in the pairs that fail to be extracted. Table 14 shows several gold pairs that are not extracted by any of the systems in the \"Missed\" column. While a deeper analysis is needed in order to understand why some instances of a particular semantic relation are being extracted and others are not, we speculate that the distance between the two mentions involved, their surrounding contexts, and whether a given mention pair is seen in the training data as having a bridging relation may have played a role.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 254,
"text": "Table 14",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "While it is encouraging to see from Table 14 that the systems can successfully recall pairs that belong to the \"Other\" (i.e., most difficult-to-resolve) category, a large percentage of bridging links in this category are still not extracted by any of the systems. To improve system recall for these difficult cases, existing work has explored the use of information extracted from manually constructed resources such as knowledge graphs (Pandit et al., 2020) as well as bridging pairs (Hou, 2018) and bridging-annotated data (Hou, 2020) automatically constructed using lexico-syntactic patterns. These and other ideas (see Kobayashi and Ng (2020) for an overview) could be explored to improve the recall of the participating systems. Note, however, that these manually and automatically constructed resources are not likely to be helpful for resolving bridging links that involve pronouns as well as nouns that are semantically poor (e.g., \"here\"). Currently, demonstrative pronouns such as \"this\" and \"that\" have a resolution recall of 14.2% and 4.5% respectively, and the word \"here\" has a resolution recall of 6.3%. Given that these anaphors cannot be resolved via string matching, the only way to resolve them is to exploit the contexts in which they appear. While the mention representations acquired by existing mechanisms are supposed to be contextualized, the contextual information encoded in them is arguably quite limited and insufficient as far as making accurate linking decisions is concerned. Hence, learning effective context representations is a key challenge for state-of-the-art bridging resolvers. As discussed earlier, learning effective context representations is also an issue surrounding state-of-the-art entity coreference resolvers. However, we believe that this issue is likely to be more challenging for bridging resolution than for entity coreference. The reason is that determining whether two mentions are associated based on context is in general a lot more challenging than determining whether they refer to the same entity.",
"cite_spans": [
{
"start": 437,
"end": 458,
"text": "(Pandit et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 485,
"end": 496,
"text": "(Hou, 2018)",
"ref_id": "BIBREF2"
},
{
"start": 525,
"end": 536,
"text": "(Hou, 2020)",
"ref_id": null
},
{
"start": 623,
"end": 646,
"text": "Kobayashi and Ng (2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Table 14",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3"
},
{
"text": "The shared task divides the evaluation of discourse deixis resolution into two phases: (1) the Predicted phase, where a system needs to first identify all of the entity mentions that likely correspond to anaphors and antecedents, then perform discourse deixis resolution on the predicted mentions; and (2) the Gold phase, which is essentially the same as the Predicted phase except that the mentions corresponding to anaphors are to be extracted from the given gold mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Deixis Resolution",
"sec_num": "4"
},
{
"text": "In this section, we analyze the performance of the teams that participated in the discourse deixis resolution track. The UTD team (Kobayashi et al., 2021) participated in both phases, whereas the DFKI team (Anikina et al., 2021) participated in the Predicted phase only. In other words, two teams participated in the Predicted phase, and one team participated in the Gold phase. As in bridging, we will use their team name to refer to the discourse deixis resolution systems they developed, augmenting the team name with a superscript that encodes the phase the system was developed for.",
"cite_spans": [
{
"start": 130,
"end": 154,
"text": "(Kobayashi et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 206,
"end": 228,
"text": "(Anikina et al., 2021)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discourse Deixis Resolution",
"sec_num": "4"
},
{
"text": "Mention extraction results of the four test sets, which are expressed in terms of R, P, and F, are shown in the \"Overall\" row of Table 15 . As in the other tracks, a mention is considered correctly extracted if it has an exact match with a gold mention in terms of boundary.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 137,
"text": "Table 15",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},
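{
"text": "For concreteness, this exact-boundary criterion is straightforward to score. The following sketch is our own illustration (with hypothetical function and variable names), not the official scorer; it assumes that each mention is represented as a (dialogue id, start token, end token) triple and computes R, P, and F over sets of such triples:\n\ndef mention_extraction_prf(gold_spans, pred_spans):\n    # A predicted mention counts as correct only if its boundaries\n    # exactly match those of a gold mention.\n    gold, pred = set(gold_spans), set(pred_spans)\n    correct = len(gold & pred)\n    r = correct / len(gold) if gold else 0.0\n    p = correct / len(pred) if pred else 0.0\n    f = 2 * p * r / (p + r) if p + r else 0.0\n    return r, p, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},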
{
"text": "Consider first the two systems developed for the Predicted phase, UTD P and DFKI P . DFKI P achieves a much higher recall (9-17% points) than UTD P on three of the four datasets, and on the remaining dataset (LIGHT), the two systems achieve comparable recall. These results suggest that DFKI P is a lot more aggressive in extracting mentions than UTD P . The high recall scores achieved by DFKI P , however, come at the expense of precision. As can be seen, DFKI P 's precision scores are substantially lower than UTD P 's, with a difference of 37-42% points. Consequently, the mention extraction F-scores achieved by DFKI P are also lower than those achieved by UTD P : there is a 12-22% point difference in F-score between the two systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},
{
"text": "Next, consider the two UTD systems, UTD P and UTD G . The performance difference between these two systems is less than that between the two systems for the Predicted phase. In terms of F-score, while UTD G outperforms UTD P by nearly 12% points on Persuasion, the two differ from each other by only 2-3% points on the remaining datasets. The difference between their recall and precision values, however, provides some evidence that they may not be as similar to each other as their F-scores suggest. Specifically, UTD G 's mention extraction system seems to be recall-oriented: on three of the four datasets, UTD G has a much higher recall (6-26% points) but a much lower precision (6-16% points) than UTD P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},
{
"text": "To better understand whether these systems dif- fer in terms of how well they extract anaphors and antecedents, we also show in the last two rows of Table 15 their results on anaphor and antecedent extraction. Since each mention in the test sets is annotated as \"anaphor\" or\"antecedent\", we can easily compute recall. However, since the systems did not label each of the extracted mentions as \"anaphor\" or \"antecedent\" in the outputs, we cannot compute precision. As can be seen, DFKI P extracts more mentions as antecedents and anaphors than U T D P on all datasets, with the exception on LIGHT and Switchboard, where U T D P achieves better recall on anaphor extraction. Comparing UTD P and UTD G , we can see that while UTD G lags behind UTD P by 3-9% points in anaphor extraction on LIGHT and AMI, UTD G achieve superior mention extraction performance to UTD P in the remaining cases. In particular, the difference between their recall in antecedent extraction is much bigger than that in anaphor extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 157,
"text": "Table 15",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},
{
"text": "To gain additional insights into the difference between the systems w.r.t. mention extraction, we show in Table 16a their performance on extracting the five gold mentions that occur most frequently on the four test sets combined. More specifically, in the \"mention\" column, \"overall\" shows the results aggregated over all gold mentions, \"The rest\" aggregates the results of all but the top five gold mentions, and the remaining rows show the results of each of the top five gold mentions. The \"%\" and \"count\" columns show the percentage and number of gold mentions that belong to each category. The remaining columns can be interpreted in the same way as those in Table 2 . For instance, \"none\" shows the percentage of gold mentions that cannot be extracted by any of the three systems, UTD G shows the percentage of gold mentions extracted by UTD G , and UTD G -only shows the percentage of gold mentions that are extracted by UTD G but not by the other two systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 115,
"text": "Table 16a",
"ref_id": "TABREF8"
},
{
"start": 664,
"end": 671,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},
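{
"text": "The \"none\", per-system, and \"x-only\" statistics used in this and the following tables are simple set operations. As a minimal sketch (our own illustration with hypothetical names, assuming each system's output is available as a set of extracted mentions), they can be computed as follows:\n\ndef coverage_stats(gold, system_outputs):\n    # gold: set of gold mentions; system_outputs: dict mapping a system name\n    # to the set of mentions that system extracted.\n    n = len(gold)\n    extracted_by_any = set().union(*system_outputs.values())\n    stats = {'none': 100.0 * len(gold - extracted_by_any) / n}\n    for name, out in system_outputs.items():\n        others = set().union(*(o for k, o in system_outputs.items() if k != name))\n        stats[name] = 100.0 * len(out & gold) / n\n        stats[name + '-only'] = 100.0 * len((out - others) & gold) / n\n    return stats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},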
{
"text": "Perhaps not surprisingly, the most frequent categories of mentions are all anaphor categories. Specifically, the most frequent category is \"that\", accounting for 29.3% of the gold mentions. This is followed by \"it\" (6.9%) and \"this\" (1.8%). These three categories of anaphors account for nearly 38% of the gold mentions in the test data. From the \"none\" column, we see that the worst-performing category is \"The rest\". This should perhaps not be surprising either: the majority of the mentions in this category are antecedents, which may not have appeared in the training data at all. In addition, 55.8% of the anaphor \"it\" were missed by the systems. We believe that this can be attributed to two reasons. First, DFKI P gave up on handling \"it\". Second, we speculate that it is challenging to determine whether \"it\" is used deictically: while \"this\", \"that\" and \"it\" can all be used as coreferent anaphors, bridging anaphors, and deictic expressions, it is more likely for \"it\" to be a coreferent or bridging anaphor than a discourse deixis compared to \"this\" and \"that\". Moving on to the performance of the systems (UTD G , UTD P , and DFKI P ), we see that the systems are indeed better at extracting \"this\" and \"that\" than \"it\". Note that DFKI P 's recall scores on the anaphors that the system is able to handle are substantially higher than those of the UTD systems. In particular, considering the \"x-only\" columns, we see that there are many instances of these anaphors that are extracted only by DFKI P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},
{
"text": "While Table 16a focuses on gold mention extraction, Table 16b focuses on the extraction of erroneous mentions. Specifically, we take the union of the set of erroneous mentions extracted by the three resolvers and compute statistics based on the resulting set, which we refer to as S. The columns in Table 16b can be interpreted in the same way as those in Table 16a . For instance, UTD G , UTD P , and DFKI P show the percentage of mentions in S extracted by each of the systems, and the \"xonly\" columns show the percentage of mentions in S extracted by exactly one of the systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 15,
"text": "Table 16a",
"ref_id": "TABREF8"
},
{
"start": 52,
"end": 61,
"text": "Table 16b",
"ref_id": "TABREF8"
},
{
"start": 299,
"end": 308,
"text": "Table 16b",
"ref_id": "TABREF8"
},
{
"start": 356,
"end": 365,
"text": "Table 16a",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},
{
"text": "A few points deserve mention. First, the five most frequently occurring categories of mentions that are erroneously extracted are likely mentions that are incorrectly posited by the systems as discourse deixis, as the top four categories are the same as the top four categories of gold mentions. The erroneously extracted antecedents will likely appear in the \"The rest\" category. Second, DFKI P covers more than 90% of the mistakes made for the top categories of anaphors that it can handle (i.e., \"that\", \"this\", \"which\"), suggesting that the system is very aggressive in positing the occurrences of these words as anaphors. In contrast, it only extracts one-third of the mentions in the \"The rest\" category, which, as noted above, should mostly contain erroneously extracted antecedents, while UTD G and UTD P extract two-thirds and 3% of the erroneously extracted antecedents respectively. These results imply that UTD P is a lot more cautious in positing mentions as antecedents compared to the other two systems, while UTD G is the most liberal in positing mentions as antecedents. These differences can also be observed when considering the \"x-only\" columns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mention Extraction",
"sec_num": "4.1"
},
{
"text": "A discourse deixis resolver is expected to output clusters, each of which contains a deixis and all of its antecedents. As far as scoring is concerned, discourse deixis resolution is viewed as a generalized case of event coreference, and hence the scorer used for scoring entity coreference chains is used to score the output of a discourse deixis resolver. Table 17 shows the official results obtained using the official scorer. Again, the results are expressed in terms of MUC, B 3 , and CEAF e R, P, and F, as well as the CoNLL score.",
"cite_spans": [],
"ref_spans": [
{
"start": 358,
"end": 366,
"text": "Table 17",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
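{
"text": "For reference, the CoNLL score reported in this and subsequent tables is the unweighted average of the MUC, B 3 , and CEAF e F-scores, i.e., CoNLL F = (F MUC + F B3 + F CEAFe ) / 3, following the convention of the CoNLL-2011 and CoNLL-2012 shared tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},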
{
"text": "A few points deserve mention. First, there is a clear difference in performance between the two systems developed for the Predicted phase, UTD P and DFKI P , in terms of CoNLL F-score, where the scores achieved by UTD P almost double those achieved by DFKI P . In comparison, the difference in the CoNLL score between UTD G and UTD P is smaller: less than 2% points on LIGHT and AMI, 12.5% points on Persuasion, and 5.0% points on Switchboard. We speculate that the particularly large difference in performance between the two systems on Persuasion can be attributed in part to mention extraction, where UTD G has a considerably higher mention extraction F-score on this dataset than UTD P (64.4 vs. 52.8). Second, DFKI P 's MUC recall scores are lower than the other systems for all but the Switchboard dataset. This implies that DFKI P is not able to recall as many links as the other systems. However, a closer examination of the results reveals that precision seems to play a bigger role in the observed performance difference between DFKI P and the other systems: DFKI P 's precision scores are generally very poor. We speculate that this can be attributed to the fact that the system is overly aggressive in positing \"that\", \"this\", and \"which\" as anaphors and attempts to resolve them, which in turn yields a lot of erroneous links. Third, comparing UTD G and UTD G , we see that the considerably better results achieved by UTD G on Persuasion can be attributed to not only mention extraction but also resolution, as reflected in the 25.2% point difference in MUC recall between the two systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
{
"text": "To further our understanding of how the systems perform w.r.t. non-singleton cluster and singleton cluster identification, we show the corresponding results in Table 18 , whose rows and columns can be interpreted in the same manner as those in Table 5. Comparing the two Predicted systems, we see that while DFKI P is considerably worse than UTD P in non-singleton cluster identification, it performs slightly and consistently better than UTD P in singleton cluster identification. This again could be attributed to its being aggressive in extracting anaphors. Comparing the two UTD systems, we see that UTD G is always better than UTD P in singleton cluster identification, but UTD P is better than UTD G in non-singleton cluster identification on all but the Persuasion dataset. These results further reveal why UTD G performs substantially better than UTD P on Persuasion: UTD G are better than UTD P in both non-singleton and singleton cluster identification on Persuasion.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 168,
"text": "Table 18",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
{
"text": "Next, we measure system performance, specifically link identification performance, at the pairwise level. For each non-singleton cluster in the gold output, we extract every pair of mentions in the cluster. We similarly extract all the pairs from the non-singleton clusters produced by each of the three systems. The recall, precision, and F-scores in Table 19 are computed based on the pairwise links in these sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 360,
"text": "Table 19",
"ref_id": "TABREF15"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
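{
"text": "Concretely, the pairwise scores can be computed by reducing each output to a set of unordered mention pairs and then comparing the gold and system sets. The sketch below is our own illustration (hypothetical names), not the official evaluation code; mentions are assumed to be sortable tuples such as (sentence index, start token, end token):\n\nfrom itertools import combinations\n\ndef cluster_pairs(clusters):\n    # Turn each non-singleton cluster into the unordered mention pairs it contains.\n    pairs = set()\n    for cluster in clusters:\n        for m1, m2 in combinations(sorted(cluster), 2):\n            pairs.add((m1, m2))\n    return pairs\n\ndef pairwise_prf(gold_clusters, system_clusters):\n    gold, pred = cluster_pairs(gold_clusters), cluster_pairs(system_clusters)\n    correct = len(gold & pred)\n    r = correct / len(gold) if gold else 0.0\n    p = correct / len(pred) if pred else 0.0\n    f = 2 * p * r / (p + r) if p + r else 0.0\n    return r, p, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},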
{
"text": "From the \"Overall\" row in Table 19 , we can see that 504 pairs can be extracted from the gold clusters of the four test sets. As we can see, all systems have higher recall than precision. Comparing the two Predicted systems, we see that while DFKI P achieves lower overall recall and precision than UTD P , the difference in their recall scores is comparatively smaller than the difference in their precision scores. In particular, the two systems achieve the same recall in the resolution of \"that\", the most frequent anaphor, and the slightly lower overall recall achieved by DFKI P can be largely attributed to its decision of not resolving \"it\" and some other anaphors. Comparing the two UTD systems, we see that UTD G achieves better recall and precision than UTD P in resolving the top anaphors. Perhaps more interesting, while the two Predicted systems cannot resolve any of the anaphors in \"The rest\" category, UTD G manages to achieve a nonzero F-score on this category, though precision and recall are both low. Table 20a shows each system's coverage of gold pairs. The rows can be interpreted in the same way as those in Table 19 , whereas the columns can be interpreted in the same way as those in Table 7a . As a quick reminder, \"none\" shows the percentage of gold pairs that are not extracted by any of the three systems; \"UTD G \", \"UTD P \", and \"DFKI P \" show the percentage of gold pairs extracted by each of the three systems; and the \"x-only\" columns show the percentage of gold pairs that are extracted only by system x.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Table 19",
"ref_id": "TABREF15"
},
{
"start": 1022,
"end": 1031,
"text": "Table 20a",
"ref_id": "TABREF34"
},
{
"start": 1132,
"end": 1140,
"text": "Table 19",
"ref_id": "TABREF15"
},
{
"start": 1210,
"end": 1218,
"text": "Table 7a",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
{
"text": "A few points deserve mention. First, consider the \"none\" results. As can be seen, 46.0% of the links are not recovered by any of the three systems. In particular, 97.2% of the links involving the anaphors in the \"The rest\" category are not recovered. This is followed by \"it\", where 76.4% of the links involving \"it\" are not recovered. In contrast, the recovery rate is higher for \"that\", probably because of the larger representation of links involving \"that\" in the training set. Second, consider the \"x-only\" results, which suggest that the three systems are more different from each other than we may think. Examining the \"that\" links, we see that 16% are only identified by UTD G , 9.3% only by UTD P , and 8.7% only by DFKI P . Similarly for \"this\": 22.7% are only identified by UTD G , 13.6% only by UTD P , and 9.1% only by DFKI P . While Table 20a focuses on the extraction of gold pairs, Table 20b focuses on the extraction of wrong pairs (i.e., wrong links). Specifically, we take the union of the set of wrong pairs extracted by the three resolvers and compute statistics based on the resulting set, which we refer to as S. The columns in Table 20b can be interpreted in the same way as those in Table 20a . For instance, UTD G shows the percentage of pairs in S extracted by UTD G , and UTD G -only shows the percentage of pairs in S that are extracted by UTD G but not the other two systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 847,
"end": 856,
"text": "Table 20a",
"ref_id": "TABREF34"
},
{
"start": 898,
"end": 907,
"text": "Table 20b",
"ref_id": "TABREF34"
},
{
"start": 1151,
"end": 1160,
"text": "Table 20b",
"ref_id": "TABREF34"
},
{
"start": 1208,
"end": 1217,
"text": "Table 20a",
"ref_id": "TABREF34"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
{
"text": "As can be seen in Table 20b , 1330 erroneous links are established by the three systems, of which 19.5% are established by UTD G , 20.2% by UTD P , and 69.0% by DFKI P ; in addition, 13.7% of these erroneous links are only established by UTD G , 15.6% are only established by UTD P , and 63.9% are only established by DFKI. These results again reveal that DFKI P establishes a lot more erroneous links than the UTD systems in each category of anaphors it handles. Interestingly, it attempts to resolve \"the\", which should not have appeared in the training data as an anaphor. Considering the \"x-only\" results, we see that there is a fairly large percentage of links in each category that are identified by only one of the three systems, suggesting that the three systems are quite different from each other in their resolution behavior.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 27,
"text": "Table 20b",
"ref_id": "TABREF34"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
{
"text": "To get a better idea of how far a discourse deixis can be from its antecedent, we show in Table 21 the relevant statistics collected from the four test sets. Specifically, the row labeled \"Gold\" shows the distribution of gold links over the number of sentences between a deixis and its antecedent. (If the sentence distance is 0, it means that the deixis refers to the sentence in which it appears.) As can be seen, the results are consistent with our intuition: a deictic expression most likely has the immediately preceding sentence (i.e., distance = 1) as its referent; in addition, the number of links drops as distance increases. More than 90% of the antecedents are at most four sentences away from their anaphors. In other words, if a discourse deixis resolver simply employs the closest five sentences preceding an anaphor as its candidate antecedents, they should cover more than 90% of the correct antecedents.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Table 21",
"ref_id": "TABREF36"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
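{
"text": "These distance statistics can be reproduced with a simple histogram over the links. The sketch below is our own illustration (hypothetical names); it assumes that each link is stored as a pair consisting of the sentence index of the anaphor and the sentence index of its antecedent utterance, so that a distance of 0 corresponds to a deixis that refers to the sentence in which it appears:\n\nfrom collections import Counter\n\ndef distance_distribution(links):\n    # links: iterable of (anaphor_sentence_index, antecedent_sentence_index) pairs.\n    counts = Counter(a_idx - ant_idx for a_idx, ant_idx in links)\n    total = sum(counts.values())\n    return {d: 100.0 * c / total for d, c in sorted(counts.items())}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},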
{
"text": "The next three rows of Table 21 show the distributions of the links identified by the three resolvers, UTD G , UTD P , and DFKI P . Interestingly, these three distributions all have similar shapes to the gold distribution: they all peak at distance = 1 and generally drop as the sentence distance increases. Next, we tease apart the correct links from the wrong links. The distributions of the correctly predicted links as well as the distributions of the incorrectly predicted links created by the three resolvers over sentence distances are shown in the next two blocks of results. Comparing UTD P and DFKI P , we see that DFKI P does not posit any links between an anaphor and the sentence in which it appears, but it establishes more correct links than UTD P when the sentence distance is between 1 and 3. Nevertheless, as discussed before, it posits a substantially larger number of erroneous links than the two UTD resolvers.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Table 21",
"ref_id": "TABREF36"
}
],
"eq_spans": [],
"section": "Resolution",
"sec_num": "4.2"
},
{
"text": "At first glance it appears that discourse deixis resolution is more challenging the entity coreference resolution and bridging resolution. The reason is that while various string-matching facilities can be used to identify some of the entity coreference relations and bridging relations, they cannot be applied to resolve deictic expressions as there is no content word overlap between a deixis and its antecedent. However, this task has certain characteristics that make it somewhat easier that it seems. First, for antecedents, the unit of annotation is a sentence/utterance; moreover, an antecedent cannot be bigger than a turn (i.e., the utterances produced by a speaker within a turn). These constraints on antecedent annotation can be exploited to significantly reduce the search space of candidate antecedents. Better still, as discussed in the previous subsection, there is a recency constraint that can be exploited to further reduce this search space: a deixis's antecedent typically appears close to it. In contrast, any multi-word expression can be a valid antecedent for an anaphor in entity coreference and bridging; moreover, in some of the entity coreference relations and bridging relations, the two mentions involved can be far apart from each other. Hence, to achieve good performance, a coreference/bridging resolver typically needs to work with a larger search space than a discourse deixis resolver.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "The key factor that appears to be limiting the performance of the participating systems is anaphor recognition. The most frequent deictic expressions such as \"that\", \"this\", and \"it\" can also serve as an identity or bridging anaphor, and determining whether a mention is deictic is a key challenge in discourse deixis resolution. As discussed earlier, the low recognition and resolution results achieved by DFKI's system can largely be attributed to its being weak at determining whether a given expression such as \"this\" or \"that\" is used deictically and its being overly liberal in classifying these words as deictic and resolving them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4.3"
},
{
"text": "We presented a cross-team analysis of the systems that participated in each of the three tracks of the CODI-CRAC 2021 shared task. As noted in the introduction, conducting a systematic analysis that can provide insightful observations is by no means a trivial task. We believe that future cross-team analyses can be improved in a number of ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
{
"text": "First, any analysis should be based on the links identified by a system rather than the output clusters they generated. The reason is that a cluster contains both the links identified by a system and the links automatically created via transitive closure. Hence, to better understand the mistakes made by a resolver, we should request the teams to provide the links their systems identify in addition to the clusters they produce so that we can conduct cross-team analyses on the links instead. We do note, however, that this may not be easy for entity-based systems, where a link is established between an anaphor and one of its preceding clusters. For these systems, assumptions may need to be made. For example, when merging takes place, we may assume that a link is established between the anaphor and the member of the cluster that is closest to the anaphor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
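{
"text": "To make the closest-member assumption concrete, the following sketch (our own illustration, not code used by any of the teams) approximates the links behind a cluster-level output by linking each mention to the closest preceding member of its cluster; mentions are assumed to be tuples that sort in document order:\n\ndef clusters_to_links(clusters):\n    # For each cluster, link every mention to the member that\n    # immediately precedes it in document order.\n    links = []\n    for cluster in clusters:\n        ordered = sorted(cluster)\n        for i in range(1, len(ordered)):\n            links.append((ordered[i], ordered[i - 1]))  # (anaphor, assumed antecedent)\n    return links",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},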
{
"text": "Second, to facilitate cross-team comparisons, the teams should be asked to run diagnostic tests provided by the organizers on their systems so that additional insights into their behavior can be gained. For instance, since mention extraction performance has a significant correlation with resolution performance, we will not be able to quantify its impact on resolution performance or directly compare different models in terms of their resolution performance that is not affected by their mention detection performance unless we provide a system with gold mentions. Hence, a useful diagnostic test, which was employed in the CoNLL-2012 shared task on Unrestricted Multilingual Coreference (Pradhan et al., 2012) , involves running the systems on the test data when gold mentions are given. Another useful diagnostic test, which involves running the bridging resolvers and the discourse deixis resolvers on the test data when gold anaphors are given, would allow us to directly compare the resolvers in terms of their antecedent selection performance. While this shared task has a Gold phase for the bridging track and the discourse deixis track in which gold mentions are given, these gold mentions are somewhat different from what one would expect. Specifically, while the participants expected to be given task-specific gold mentions, they were instead provided with gold mentions that were created by taking the union of the three sets of gold mentions collected from the three tracks. Worse still, the gold mentions provided in the discourse deixis track did not even include the antecedents.",
"cite_spans": [
{
"start": 690,
"end": 712,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
{
"text": "Unfortunately, this somewhat unconventional definition of gold mentions was not clearly communicated to the participants and has caused some confusion among them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
{
"text": "Third, our analysis could be improved with an analysis of annotation quality. Strictly speaking, an analysis of annotation quality should not be part of a cross-team analysis, but for this shared task annotation quality may have played a role in the performance of the participating systems given that there are linking decisions that we do not agree with based on our casual inspection of the annotated data. Having said that, we are not sure whether it is possible for us to assess annotation quality since the annotation guidelines are not made available to the participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
{
"text": "Fourth, from an analysis point of view, it may be a good idea to include LEA (Moosavi and Strube, 2016) as one of the evaluation metrics. As Moosavi and Strube point out, it is not easy to interpret the scores provided by existing scorers such as MUC, B 3 , and CEAF e and LEA is designed to partially address this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
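{
"text": "For completeness, LEA scores each key entity by its size (its importance) and by the fraction of its links that are recovered in the response. To the best of our understanding of Moosavi and Strube (2016), LEA recall is the sum over key entities k of |k| times (sum over response entities r of link(k intersect r) / link(k)), divided by the sum over key entities of |k|, where link(e) = |e|(|e|-1)/2; precision is obtained by swapping the roles of the key and response entities, and singleton entities are handled via self-links, as described in the original paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},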
{
"text": "Fifth, our analysis did not focus on the differences among the four datasets. For instance, in discourse deixis resolution, UTD G achieved significantly better results than UTD P on Persuasion but not the other datasets. A cross-dataset analysis could shed light on why systems exhibit different trends on different datasets. Having said that, the organizers should take an active role in explaining to the participants the differences among the different datasets and the unique challenges associated with each of them so that the participants know why these four datasets were chosen, rather than have them figure these differences out on their own.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
},
{
"text": "Finally, to facilitate cross-team analyses, the organizers should include as much relevant information in the system prediction files that they make available to the participants as possible. Currently, these files merely contain the pairwise predictions made by the systems as well as the gold links they missed. Some potentially useful information that can also be provided in these files includes the sentence/turn distance between the mention pairs and their surrounding contexts. While this information can be extracted by the participants from the raw system outputs, simply laying the burden on them will likely deter them from conducting an analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work was supported in part by NSF Grant IIS-1528037. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of the NSF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Anaphora resolution in dialogue: Description of the DFKI-TalkingRobots system for the CODI-CRAC 2021 shared-task",
"authors": [
{
"first": "Tatiana",
"middle": [],
"last": "Anikina",
"suffix": ""
},
{
"first": "Cennet",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Skachkova",
"suffix": ""
},
{
"first": "Siyu",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Sharmila",
"middle": [],
"last": "Upadhyaya",
"suffix": ""
},
{
"first": "Ivana",
"middle": [],
"last": "Kruijff-Korbayova",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatiana Anikina, Cennet Oguz, Natalia Skachkova, Siyu Tao, Sharmila Upadhyaya, and Ivana Kruijff- Korbayova. 2021. Anaphora resolution in dialogue: Description of the DFKI-TalkingRobots system for the CODI-CRAC 2021 shared-task. In Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue, Punta Cana, Dominican Republic. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the First International Conference on Language Resources and Evaluation Workshop on Linguistic Coreference",
"volume": "",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of the First International Conference on Language Resources and Evaluation Workshop on Linguistic Coreference, pages 563-566.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Enhanced word representations for bridging anaphora resolution",
"authors": [
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "1--7",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Yufang Hou. 2018. Enhanced word representations for bridging anaphora resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 2 (Short Pa- pers), pages 1-7, New Orleans, Louisiana. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bridging anaphora resolution as question answering",
"authors": [],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1428--1438",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.132"
]
},
"num": null,
"urls": [],
"raw_text": "Yufang Hou. 2020. Bridging anaphora resolution as question answering. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 1428-1438, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The CODI-CRAC 2021 shared task on anaphora, bridging, and discourse deixis in dialogue",
"authors": [
{
"first": "Sopan",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Juntao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Manuvinakurike",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [],
"last": "Ros\u00e9",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the CODI-CRAC 2021",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sopan Khosla, Juntao Yu, Ramesh Manuvinakurike, Vincent Ng, Massimo Poesio, Michael Strube, and Carolyn Ros\u00e9. 2021. The CODI-CRAC 2021 shared task on anaphora, bridging, and discourse deixis in dialogue. In Proceedings of the CODI-CRAC 2021",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dominican Republic",
"authors": [],
"year": null,
"venue": "Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue, Punta Cana, Dominican Repub- lic. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The pipeline model for resolution of anaphoric reference and resolution of entity reference",
"authors": [
{
"first": "Hongjin",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Damrin",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Harksoo",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongjin Kim, Damrin Kim, and Harksoo Kim. 2021. The pipeline model for resolution of anaphoric ref- erence and resolution of entity reference. In Pro- ceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dia- logue, Punta Cana, Dominican Republic. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural anaphora resolution in dialogue",
"authors": [
{
"first": "Hideo",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Shengjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideo Kobayashi, Shengjie Li, and Vincent Ng. 2021. Neural anaphora resolution in dialogue. In Pro- ceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dia- logue, Punta Cana, Dominican Republic. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bridging resolution: A survey of the state of the art",
"authors": [
{
"first": "Hideo",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3708--3721",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.331"
]
},
"num": null,
"urls": [],
"raw_text": "Hideo Kobayashi and Vincent Ng. 2020. Bridging res- olution: A survey of the state of the art. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 3708-3721, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Errordriven analysis of challenges in coreference resolution",
"authors": [
{
"first": "Jonathan",
"middle": [
"K"
],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "265--277",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan K. Kummerfeld and Dan Klein. 2013. Error- driven analysis of challenges in coreference resolu- tion. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 265-277, Seattle, Washington, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "End-to-end neural coreference resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1018"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- lution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Higher-order coreference resolution with coarse-tofine inference",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "687--692",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2108"
]
},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to- fine inference. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 2 (Short Papers), pages 687-692, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Conundrums in entity coreference resolution: Making sense of the state of the art",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "6620--6631",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.536"
]
},
"num": null,
"urls": [],
"raw_text": "Jing Lu and Vincent Ng. 2020. Conundrums in entity coreference resolution: Making sense of the state of the art. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 6620-6631, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32, Vancouver, British Columbia, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Which coreference evaluation metric do you trust? a proposal for a link-based entity aware metric",
"authors": [
{
"first": "Nafise Sadat",
"middle": [],
"last": "Moosavi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "632--642",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1060"
]
},
"num": null,
"urls": [],
"raw_text": "Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? a proposal for a link-based entity aware metric. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 632-642, Berlin, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Integrating knowledge graph embeddings to improve mention representation for bridging anaphora resolution",
"authors": [
{
"first": "Onkar",
"middle": [],
"last": "Pandit",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
},
{
"first": "Liva",
"middle": [],
"last": "Ralaivola",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference",
"volume": "",
"issue": "",
"pages": "55--67",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Onkar Pandit, Pascal Denis, and Liva Ralaivola. 2020. Integrating knowledge graph embeddings to im- prove mention representation for bridging anaphora resolution. In Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference, pages 55-67, Barcelona, Spain (online). Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL -Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "CoNLL-2011 shared task: Modeling unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task",
"volume": "",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 shared task: Modeling un- restricted coreference in OntoNotes. In Proceedings of the Fifteenth Conference on Computational Nat- ural Language Learning: Shared Task, pages 1-27, Portland, Oregon, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An end-toend approach for full bridging resolution",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Renner",
"suffix": ""
},
{
"first": "Priyansh",
"middle": [],
"last": "Trivedi",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Maheshwari",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Gilleron",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Renner, Priyansh Trivedi, Gaurav Maheshwari, R\u00e9mi Gilleron, and Pascal Denis. 2021. An end-to- end approach for full bridging resolution. In Pro- ceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dia- logue, Punta Cana, Dominican Republic. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A modeltheoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
}
],
"year": 1995,
"venue": "Sixth Message Understanding Conference (MUC-6): Proceedings of a Conference Held in",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Sixth Mes- sage Understanding Conference (MUC-6): Proceed- ings of a Conference Held in Columbia, Maryland, November 6-8, 1995.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Revealing the myth of higher-order inference in coreference resolution",
"authors": [
{
"first": "Liyan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jinho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8527--8533",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.686"
]
},
"num": null,
"urls": [],
"raw_text": "Liyan Xu and Jinho D. Choi. 2020. Revealing the myth of higher-order inference in coreference resolution. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8527-8533, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adapted endto-end coreference resolution system for anaphoric identities in dialogues",
"authors": [
{
"first": "Liyan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jinho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the CODI-CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liyan Xu and Jinho D. Choi. 2021. Adapted end- to-end coreference resolution system for anaphoric identities in dialogues. In Proceedings of the CODI- CRAC 2021 Shared Task on Anaphora, Bridging, and Discourse Deixis in Dialogue, Punta Cana, Do- minican Republic. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td>LIGHT</td><td/><td/><td>AMI</td><td/><td/><td>Persuasion</td><td/><td colspan=\"2\">Switchboard</td><td/></tr><tr><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "(henceforth DFKI). Emory 89.2 92.5 90.8 82.2 90.2 86.0 90.6 90.7 90.6 85.3 89.8 87.5 UTD 92.3 91.6 92.0 86.6 78.6 82.4 91.3 89.7 90.5 89.2 86.1 87.6 KU 85.6 92.8 89.1 79.4 89.3 84.0 83.3 92.5 87.7 78.7 89.8 83.8 DFKI 84.8 82.6 83.7 75.4 65.8 70.3 79.8 77.5 78.6 79.3 77.9 78.6 Table 1: Entity coreference resolution: mention extraction results."
},
"TABREF2": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF4": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF6": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Entity coreference resolution: results on singleton and non-singleton cluster identification."
},
"TABREF8": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Entity coreference resolution: resolution results at the pairwise level."
},
"TABREF10": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Entity coreference resolution: coverage of wrong links at the pairwise level."
},
"TABREF13": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF14": {
"content": "<table><tr><td/><td/><td>LIGHT</td><td/><td/><td>AMI</td><td/><td/><td>Persuasion</td><td/><td colspan=\"2\">Switchboard</td></tr><tr><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">UTD P</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>KU P</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"13\">Recognition 28.8 54.7 37.7 32.3 38.4 35.1 20.9 60.4 31.1 29.7 43.4 35.3</td></tr><tr><td>Resolution</td><td colspan=\"3\">10.3 19.5 13.5</td><td>9.4</td><td colspan=\"2\">11.2 10.3</td><td>8.3</td><td colspan=\"2\">24.0 12.3</td><td>9.3</td><td colspan=\"2\">13.5 11.0</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">UTD G</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"13\">Recognition 34.7 40.7 37.5 37.0 42.2 39.4 43.0 52.1 47.1 37.7 50.9 43.3</td></tr><tr><td>Resolution</td><td colspan=\"12\">18.3 21.4 19.7 18.4 21.0 19.6 28.7 34.7 31.4 18.4 24.8 21.1</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">KU G</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"13\">Recognition 38.2 34.7 36.4 41.9 30.8 35.5 31.3 53.1 39.4 41.2 30.9 35.3</td></tr><tr><td>Resolution</td><td colspan=\"12\">17.5 15.9 16.7 18.1 13.3 15.3 14.9 25.3 18.8 21.4 16.1 18.3</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"2\">INRIA G</td><td/><td/><td/><td/><td/></tr><tr><td colspan=\"13\">Recognition 34.8 11.8 17.6 34.1 14.4 20.2 46.7 17.0 24.9 34.2 24.9 28.8</td></tr><tr><td>Resolution</td><td>18.4</td><td>6.3</td><td>9.4</td><td>10.1</td><td>4.3</td><td>6.0</td><td colspan=\"3\">30.5 11.1 16.3</td><td>9.2</td><td>6.7</td><td>7.8</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Recognition 21.8 44.3 29.2 26.8 32.9 29.5 29.6 38.9 33.6 28.9 32.2 30.4 Resolution 10.4 21.2 14.0 12.1 14.8 13.3 19.3 25.3 21.9 14.5 16.1 15.3"
},
"TABREF15": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF17": {
"content": "<table><tr><td/><td/><td colspan=\"4\">Coverage of gold anaphors</td><td/><td/><td colspan=\"3\">Coverage of wrong anaphors</td><td/></tr><tr><td>Group</td><td>%</td><td colspan=\"5\">count none T-only K-only I-only</td><td>%</td><td colspan=\"4\">count T-only K-only I-only</td></tr><tr><td colspan=\"3\">Overall 100.0 2526</td><td>35.6</td><td>20.2</td><td>12.3</td><td>3.5</td><td colspan=\"2\">100.0 3338</td><td>37.5</td><td>26.4</td><td>12.3</td></tr><tr><td>1</td><td>2.8</td><td>71</td><td>56.3</td><td>11.3</td><td>5.6</td><td>8.5</td><td>8.9</td><td>297</td><td>26.3</td><td>37.0</td><td>19.2</td></tr><tr><td>2</td><td>0.0</td><td>1</td><td>100.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.1</td><td>5</td><td>0.0</td><td>60.0</td><td>40.0</td></tr><tr><td>3</td><td>2.8</td><td>71</td><td>73.2</td><td>5.6</td><td>7.0</td><td>2.8</td><td>3.5</td><td>117</td><td>47.0</td><td>31.6</td><td>12.8</td></tr><tr><td>4</td><td>0.9</td><td>23</td><td>56.5</td><td>13.0</td><td>0.0</td><td>0.0</td><td>1.0</td><td>35</td><td>48.6</td><td>5.7</td><td>31.4</td></tr><tr><td>5</td><td>0.0</td><td>0</td><td>100.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>1</td><td>0.0</td><td>100.0</td><td>0.0</td></tr><tr><td>6</td><td>1.8</td><td>46</td><td>17.4</td><td>19.6</td><td>15.2</td><td>2.2</td><td>1.9</td><td>62</td><td>30.6</td><td>40.3</td><td>9.7</td></tr><tr><td>7</td><td>1.9</td><td>49</td><td>83.7</td><td>2.0</td><td>12.2</td><td>0.0</td><td>1.0</td><td>35</td><td>20.0</td><td>71.4</td><td>8.6</td></tr><tr><td>8</td><td>0.0</td><td>1</td><td>100.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.1</td><td>3</td><td>0.0</td><td>0.0</td><td>100.0</td></tr><tr><td>9</td><td>1.3</td><td>32</td><td>75.0</td><td>6.2</td><td>3.1</td><td>3.1</td><td>0.6</td><td>20</td><td>40.0</td><td>55.0</td><td>5.0</td></tr><tr><td>10</td><td>88.4</td><td>2232</td><td>32.2</td><td>21.7</td><td>12.9</td><td>3.5</td><td>82.8</td><td>2763</td><td>38.7</td><td>24.1</td><td>11.3</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Bridging resolution (Predicted phase): coverage of correct and wrong anaphors."
},
"TABREF18": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF20": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF22": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF23": {
"content": "<table><tr><td/><td/><td>Recalled</td><td>Missed</td><td/></tr><tr><td>relation/category</td><td>anaphor</td><td>antecedent</td><td>T P K P T G K G I G anaphor</td><td>antecedent</td></tr><tr><td>Part-whole</td><td colspan=\"2\">these waters the sea</td><td>a bird</td><td>a beak</td></tr><tr><td>Is-a</td><td>a parent</td><td>the father</td><td>a chrysler</td><td>car</td></tr><tr><td>Instance-of (same head)</td><td>a car</td><td>cars</td><td>a college</td><td>colleges</td></tr><tr><td>Instance-of (diff heads)</td><td>a highway</td><td>the road</td><td>humans</td><td>i</td></tr><tr><td>Related/Associated</td><td>any amount</td><td>your payment</td><td>sound</td><td>volume</td></tr><tr><td>Number</td><td>fifteen</td><td>seventeen</td><td>fifty</td><td>the tenth</td></tr><tr><td>Pronoun pairs</td><td>we</td><td>you</td><td>we</td><td>we</td></tr><tr><td>Pro, non-pro pairs</td><td>they</td><td>texas</td><td>people</td><td>they</td></tr><tr><td colspan=\"2\">Demonstrative pronouns this</td><td>an adventure</td><td>conceptual design</td><td>this</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "are applicable to the Gold results. For"
},
"TABREF24": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Bridging resolution: examples of gold bridging pairs that belong to different relations/categories."
},
"TABREF25": {
"content": "<table><tr><td/><td/><td/><td>LIGHT</td><td/><td/><td>AMI</td><td/><td/><td>Persuasion</td><td/><td colspan=\"2\">Switchboard</td><td/></tr><tr><td/><td/><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"3\">Overall 66.2 Anaphor UTD G UTD G -UTD P -</td><td>65.0 73.8</td><td>--</td><td>--</td><td>61.9 64.4</td><td>--</td><td>--</td><td>77.2 65.9</td><td>--</td><td>--</td><td>74.9 71.1</td><td>--</td></tr><tr><td/><td>DFKI P</td><td>-</td><td>56.2</td><td>-</td><td>-</td><td>81.4</td><td>-</td><td>-</td><td>69.1</td><td>-</td><td>-</td><td>68.1</td><td>-</td></tr><tr><td/><td>UTD G</td><td>-</td><td>37.5</td><td>-</td><td>-</td><td>32.9</td><td>-</td><td>-</td><td>58.9</td><td>-</td><td>-</td><td>52.4</td><td>-</td></tr><tr><td>Antecedent</td><td>UTD P</td><td>-</td><td>27.7</td><td>-</td><td>-</td><td>20.5</td><td>-</td><td>-</td><td>21.2</td><td>-</td><td>-</td><td>21.5</td><td>-</td></tr><tr><td/><td>DFKI P</td><td>-</td><td>35.7</td><td>-</td><td>-</td><td>37.9</td><td>-</td><td>-</td><td>49.3</td><td>-</td><td>-</td><td>39.4</td><td>-</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "49.0 56.3 54.5 45.2 49.4 61.8 67.3 64.4 48.2 61.8 54.2 UTD P 65.2 46.9 54.5 60.2 39.1 47.4 72.3 41.6 52.8 64.4 42.2 51.0 DFKI P 25.2 44.3 32.1 18.5 56.3 27.8 31.5 58.4 40.9 27.7 51.3 36.0"
},
"TABREF26": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Discourse deixis resolution: mention extraction results."
},
"TABREF27": {
"content": "<table><tr><td>Overall</td><td colspan=\"3\">100.0 1371</td><td>65.0</td><td>30.4</td><td>28.9</td><td>29.5</td><td>1.7</td><td>0.8</td><td>2.4</td></tr><tr><td>that</td><td colspan=\"2\">29.3</td><td>402</td><td>2.2</td><td>86.6</td><td>85.3</td><td>92.3</td><td>1.2</td><td>0.7</td><td>6.0</td></tr><tr><td>it</td><td>6.9</td><td/><td>95</td><td>55.8</td><td>36.8</td><td>32.6</td><td>0.0</td><td>11.6</td><td>7.4</td><td>0.0</td></tr><tr><td>this</td><td>1.8</td><td/><td>25</td><td>0.0</td><td>76.0</td><td>60.0</td><td>100.0</td><td>0.0</td><td>0.0</td><td>20.0</td></tr><tr><td>which</td><td>0.7</td><td/><td>10</td><td>10.0</td><td>50.0</td><td>30.0</td><td>70.0</td><td>10.0</td><td>0.0</td><td>40.0</td></tr><tr><td>the same</td><td>0.3</td><td/><td>4</td><td>25.0</td><td>75.0</td><td>25.0</td><td>0.0</td><td>50.0</td><td>0.0</td><td>0.0</td></tr><tr><td>The rest</td><td colspan=\"2\">60.9</td><td>835</td><td>99.0</td><td>0.8</td><td>0.4</td><td>0.1</td><td>0.5</td><td>0.1</td><td>0.0</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"4\">(b) Coverage of wrong mentions</td><td/></tr><tr><td colspan=\"6\">mention count UTD G Overall % 100.0 1202 43.3</td><td>17.4</td><td>61.6</td><td>29.8</td><td>4.1</td><td>49.9</td></tr><tr><td>that</td><td/><td>43.3</td><td>520</td><td colspan=\"2\">25.8</td><td>23.5</td><td>90.6</td><td>2.7</td><td>2.1</td><td>67.7</td></tr><tr><td>this</td><td/><td>6.3</td><td>76</td><td colspan=\"2\">17.1</td><td>23.7</td><td>89.5</td><td>3.9</td><td>2.6</td><td>71.1</td></tr><tr><td>it</td><td/><td>4.5</td><td>54</td><td colspan=\"2\">61.1</td><td>81.5</td><td>0.0</td><td>18.5</td><td>38.9</td><td>0.0</td></tr><tr><td>which</td><td/><td>3.5</td><td>42</td><td>9.5</td><td/><td>11.9</td><td>95.2</td><td>2.4</td><td>2.4</td><td>81.0</td></tr><tr><td colspan=\"2\">that way</td><td>0.4</td><td>5</td><td colspan=\"2\">80.0</td><td>60.0</td><td>0.0</td><td>40.0</td><td>20.0</td><td>0.0</td></tr><tr><td colspan=\"2\">The rest</td><td>42.0</td><td>505</td><td colspan=\"2\">65.7</td><td>3.4</td><td>31.9</td><td>65.0</td><td>2.6</td><td>31.7</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "(a) Coverage of gold mentions mention % count none UTD G UTD P DFKI P UTD G -only UTD P -only DFKI P -only UTD P DFKI P UTD G -only UTD P -only DFKI P -only"
},
"TABREF28": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Discourse deixis resolution: per-anaphor mention extraction results."
},
"TABREF30": {
"content": "<table><tr><td/><td/><td>LIGHT</td><td/><td/><td>AMI</td><td/><td/><td>Persuasion</td><td>Switchboard</td></tr><tr><td/><td>F</td><td colspan=\"2\">ns. F s. F</td><td>F</td><td colspan=\"2\">ns. F s. F</td><td>F</td><td>ns. F s. F</td><td>F</td><td>ns. F s. F</td></tr><tr><td>UTD G</td><td colspan=\"2\">43.4 43.9</td><td colspan=\"3\">7.7 36.9 35.2</td><td colspan=\"3\">8.4 52.1 52.0</td><td>4.2 40.4 36.6</td><td>9.5</td></tr><tr><td>UTD P</td><td colspan=\"2\">42.7 47.0</td><td colspan=\"3\">2.2 35.4 37.9</td><td colspan=\"3\">2.6 39.6 42.1</td><td>1.0 35.4 39.1</td><td>2.0</td></tr><tr><td colspan=\"3\">DFKI P 20.8 17.1</td><td colspan=\"3\">6.9 17.2 15.3</td><td colspan=\"3\">3.4 23.6 22.8</td><td>2.1 23.8 22.6</td><td>3.8</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Discourse deixis resolution: official resolution results."
},
"TABREF31": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Discourse deixis resolution: results on singleton and non-singleton cluster identification."
},
"TABREF33": {
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"4\">(a) Coverage of correct links</td><td/><td/></tr><tr><td colspan=\"5\">anaphor count None UTD G Overall % 100.0 504 46.0 37.7</td><td>26.2</td><td>23.6</td><td>14.3</td><td>8.3</td><td>6.3</td></tr><tr><td>that</td><td>68.1</td><td>343</td><td>31.8</td><td>47.8</td><td>33.5</td><td>33.5</td><td>16.0</td><td>9.3</td><td>8.7</td></tr><tr><td>it</td><td>17.7</td><td>89</td><td>76.4</td><td>16.9</td><td>13.5</td><td>0.0</td><td>10.1</td><td>6.7</td><td>0.0</td></tr><tr><td>this</td><td>4.4</td><td>22</td><td>45.5</td><td>31.8</td><td>18.2</td><td>13.6</td><td>22.7</td><td>13.6</td><td>9.1</td></tr><tr><td>which</td><td>2.0</td><td>10</td><td>70.0</td><td>30.0</td><td>0.0</td><td>10.0</td><td>20.0</td><td>0.0</td><td>0.0</td></tr><tr><td>the same</td><td>0.8</td><td>4</td><td>75.0</td><td>0.0</td><td>25.0</td><td>0.0</td><td>0.0</td><td>25.0</td><td>0.0</td></tr><tr><td>The rest</td><td>7.1</td><td>36</td><td>97.2</td><td>2.8</td><td>0.0</td><td>0.0</td><td>2.8</td><td>0.0</td><td>0.0</td></tr><tr><td/><td/><td/><td/><td colspan=\"4\">(b) Coverage of wrong links</td><td/><td/></tr><tr><td colspan=\"5\">anaphor count UTD G Overall % 100.0 1330 19.5</td><td>20.2</td><td>69.0</td><td>13.7</td><td>15.6</td><td>63.9</td></tr><tr><td>that</td><td>69.5</td><td>925</td><td colspan=\"2\">21.8</td><td>21.1</td><td>68.1</td><td>14.5</td><td>15.6</td><td>61.2</td></tr><tr><td>this</td><td>7.8</td><td>104</td><td colspan=\"2\">14.4</td><td>15.4</td><td>78.8</td><td>9.6</td><td>10.6</td><td>75.0</td></tr><tr><td>it</td><td>5.0</td><td>67</td><td colspan=\"2\">43.3</td><td>64.2</td><td>0.0</td><td>35.8</td><td>56.7</td><td>0.0</td></tr><tr><td>which</td><td>4.4</td><td>59</td><td>8.5</td><td/><td>16.9</td><td>74.6</td><td>8.5</td><td>16.9</td><td>74.6</td></tr><tr><td>the</td><td>1.7</td><td>22</td><td>0.0</td><td/><td>0.0</td><td>100.0</td><td>0.0</td><td>0.0</td><td>100.0</td></tr><tr><td>The rest</td><td>11.5</td><td>153</td><td>5.9</td><td/><td>2.6</td><td>91.5</td><td>5.9</td><td>2.6</td><td>91.5</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Discourse deixis resolution: per-anaphor resolution results at the pairwise level. UTD P DFKI P UTD G -only UTD P -only DFKI P -only UTD P DFKI P UTD G -only UTD P -only DFKI P -only"
},
"TABREF34": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Discourse deixis resolution: per-anaphor resolution results at the pairwise level."
},
"TABREF36": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Discourse deixis resolution: distribution of gold/predicted links over the sentence distances between the anaphor and the antecedents."
}
}
}
}