{
"paper_id": "W99-0207",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:09:40.081404Z"
},
"title": "Corpus-Based Anaphora Resolution Towards Antecedent Preference",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Paul",
"suffix": "",
"affiliation": {
"laboratory": "ATR Interpreting Telecommunications Research Laboratories",
"institution": "",
"location": {
"addrLine": "2-2 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Kazuhide",
"middle": [],
"last": "Yamamoto",
"suffix": "",
"affiliation": {
"laboratory": "ATR Interpreting Telecommunications Research Laboratories",
"institution": "",
"location": {
"addrLine": "2-2 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": "",
"affiliation": {
"laboratory": "ATR Interpreting Telecommunications Research Laboratories",
"institution": "",
"location": {
"addrLine": "2-2 Hikaridai, Seika-cho, Soraku-gun",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we propose a corpus-based approach to anaphora resolution combining a machine learning method and statistical information. First, a decision tree trained on an annotated corpus determines the coreference relation of a given anaphor and antecedent candidates and is utilized as a filter in order to reduce the number of potential candidates. In the second step, preference selection is achieved by taking into account the frequency information of coreferential and non-referential pairs tagged in the training corpus as well as distance features within the current discourse. Preliminary experiments concerning the resolution of Japanese pronouns in spoken-language dialogs result in a success rate of 80.6%.",
"pdf_parse": {
"paper_id": "W99-0207",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we propose a corpus-based approach to anaphora resolution combining a machine learning method and statistical information. First, a decision tree trained on an annotated corpus determines the coreference relation of a given anaphor and antecedent candidates and is utilized as a filter in order to reduce the number of potential candidates. In the second step, preference selection is achieved by taking into account the frequency information of coreferential and non-referential pairs tagged in the training corpus as well as distance features within the current discourse. Preliminary experiments concerning the resolution of Japanese pronouns in spoken-language dialogs result in a success rate of 80.6%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference information is relevant for numerous NLP systems. Our interest in anaphora resolution is based on the demand for machine translation systems to be able to translate (possibly omitted) anaphoric expressions in agreement with the morphosyntactic characteristics of the referred object in order to prevent contextual misinterpretations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "So far various approaches 1 to anaphora resolution have been proposed. In this paper a machine learning approach (decision tree) is combined with a preference selection method based on the frequency information of non-/coreferential pairs tagged in the corpus as well as distance features within the current discourse. The advantage of machine learning approaches is that they result in modular anaphora resolution systems automatically trainable from a corpus with no 1See section 4 for a more detailed comparison with related research. or only a minimal amount of human intervention. In the case of decision trees, we do have to provide information about possible antecedent indicators (syntactic, semantic, and pragmatic features) contained in the corpus, but the relevance of features for the resolution task is extracted automatically from the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Machine learning approaches using decision trees proposed so far have focused on preference selection criteria directly derived from the decision tree results. The work described in (Conolly et al., 1994) utilized a decision tree capable of judging which one of two given anaphor-antecedent pairs is \"better\". Due to the lack of a strong assumption on \"transitivity\", however, this sorting algorithm is more like a greedy heuristic search as it may be unable to find the \"best\" solution.",
"cite_spans": [
{
"start": 182,
"end": 204,
"text": "(Conolly et al., 1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The preference selection for a single antecedent in (Aone and Bennett, 1995) is based on the maximization of confidence values returned from a pruned decision tree for given anaphor-candidate pairs. However, decision trees are characterized by an independent learning of specific features, i.e., relations between single attributes cannot be obtained automatically. Accordingly, the use of dependency factors for preference selection during decision tree training requires that the artificially created attributes expressing these dependencies be defined. However, this not only extends human intervention into the automatic learning procedure (i.e., which dependencies are important?), but can also result in some drawbacks on the contextual adaptation of preference selection methods.",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Aone and Bennett, 1995)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The preference selection in our approach is based on the combination of statistical frequency information and distance features in the discourse. Therefore, our decision tree is not applied directly to the task of preference selection, but aims at the elimination of irrelevant candidates based on the knowledge obtained from the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The decision tree is trained on syntactic (lexical word attributes), semantic, and primitive discourse (distance, frequency) information and determines the coreferential relation between an anaphor and antecedent Candidate in the given context. Irrelevant antecedent candidates are filtered out, achieving a noise reduction for the preference selection algorithm. A preference value is assigned to each \" potential anaphor-candidate pair depending on the proportion of non-/coreferential occurrences of the pair in the training corpus (frequency ratio) and the relative position of both elements in the discourse (distance). The candidate with the maximal preference value is resolved as the antecedent of the anaphoric expression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section we introduce a new approach to anaphora resolution based on coreferential properties automatically extracted from a training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus-Based Anaphora Resolution",
"sec_num": "2"
},
{
"text": "In the first step, the decison tree filter is trained on the linguistic, discourse and coreference information annotated in the training corpus which is described in section 2.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus-Based Anaphora Resolution",
"sec_num": "2"
},
{
"text": "Preference Selection Figure 1 applies the coreference filter (cf. section 2.2) to all anaphorcandidate pairs (Ai + C/#) found in the discourse history. The detection of anaphoric expressions is out of the scope of this paper and just reduced to tags in our annotated corpus. Antecedent candidates are identified according to noun phrase part-of-speech tags. The reduced set (Ai + C/~) forms the input of the preference algorithm which selects the most salient candidate C~ as described in section 2.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comferenc\u00a2 Analysis",
"sec_num": null
},
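{
"text": "A minimal Python sketch of the two-step pipeline described above; the helper functions is_coreferent (standing in for the decision tree filter) and preference (standing in for the statistical preference value) are hypothetical placeholders rather than the authors' implementation, and candidates are assumed to be ordered from most recent to dialog-initial:\ndef resolve(anaphor, candidates, is_coreferent, preference):\n    # Step 1: the filter removes candidates judged non-referential.\n    filtered = [c for c in candidates if is_coreferent(anaphor, c)]\n    if not filtered:\n        # Backup strategy: fall back to the dialog-initial candidate.\n        return candidates[-1] if candidates else None\n    # Step 2: preference selection picks the most salient candidate.\n    return max(filtered, key=lambda c: preference(anaphor, c))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Analysis",
"sec_num": null
},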
{
"text": "Preliminary experiments are conducted for the task of pronominal anaphora resolution and the performance of our system is evaluated in section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comferenc\u00a2 Analysis",
"sec_num": null
},
{
"text": "For our experiments we use the ATR-ITL Speech and Language Database (Takezawa et al., 1998) consisting of 500 Japanese spoken-language dialogs annotated with coreferential tags. It includes nominal, pronominal, and ellipsis annotations, whereby the anaphoric expressions used in our experiments are limited to those referring to nominal antecedents (nominal: 2160, pronominal: 526, ellipsis: 3843). Besides the anaphor type, we also include morphosyntactic information like stem form and inflection attributes for each surface word as well as semantic codes for content words (Ohno and Hamanishi, 1981) in this corpus. ",
"cite_spans": [
{
"start": 68,
"end": 91,
"text": "(Takezawa et al., 1998)",
"ref_id": "BIBREF14"
},
{
"start": 576,
"end": 602,
"text": "(Ohno and Hamanishi, 1981)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Corpus",
"sec_num": "2.1"
},
{
"text": "[Example dialog from Figure 2; the Japanese surface text could not be recovered from the PDF extraction. Recoverable English glosses include: \"It's T A N A K A.\" and \"Okay, you will arrive here on the tenth, right?\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Corpus",
"sec_num": "2.1"
},
{
"text": "[be] \"Okay, you will arrive here on the tenth, right?\" According to the tagging guidelines used for our corpus an anaphoric tag refers to the most recent antecedent found in the dialog. However, this antecedent might also refer to a previous one, e.g. Based on the corpus annotations we extract the frequency information of coreferential anaphorantecedent pairs and non-referential pairs from the training data. For each non-/coreferential pair the occurrences of surface and stem form as well as semantic code combinations are counted. In Table 1 some examples are given for pronoun anaphora, whereas the expressions \"{...}\" denote semantic classes assigned to the respective words.",
"cite_spans": [],
"ref_spans": [
{
"start": 540,
"end": 547,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Corpus",
"sec_num": "2.1"
},
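{
"text": "As a minimal sketch of how the freq+ and freq- counts could be collected, assuming the annotated pairs are available as (anaphor, candidate, is_coreferential) tuples (an assumed input format, not the actual corpus API):\nfrom collections import defaultdict\n\ndef count_pair_frequencies(tagged_pairs):\n    freq_pos = defaultdict(int)  # coreferential occurrences (freq+)\n    freq_neg = defaultdict(int)  # non-referential occurrences (freq-)\n    for anaphor, candidate, is_coref in tagged_pairs:\n        if is_coref:\n            freq_pos[(anaphor, candidate)] += 1\n        else:\n            freq_neg[(anaphor, candidate)] += 1\n    return freq_pos, freq_neg",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Corpus",
"sec_num": "2.1"
},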
{
"text": "The values freq +, freqand ratio and their usage are described in more detailed in section 2.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Corpus",
"sec_num": "2.1"
},
{
"text": "Moreover, each dialog is subdivided into utterances consisting of one or more clauses. Therefore, distance features are available on the utterance, clause, candidate, and morpheme levels. For example, the distance values of the pronoun (r3)\"~-\u00a29 [here]\" and the antecedent (rl)\" \"Y if-4 \u2022 ff-)l~ [City Hotel]\" in our sample dialog in Figure 2 are d~tte~=4, dclaus~=7, dcand=14, dmorph=40.",
"cite_spans": [],
"ref_spans": [
{
"start": 334,
"end": 342,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Data Corpus",
"sec_num": "2.1"
},
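{
"text": "A minimal sketch of the four distance levels; the position values below are made up for illustration (they are not taken from the corpus) and merely reproduce the example distances given above:\nanaphor_pos = {'utterance': 7, 'clause': 12, 'candidate': 20, 'morpheme': 95}\ncandidate_pos = {'utterance': 3, 'clause': 5, 'candidate': 6, 'morpheme': 55}\ndistances = {level: anaphor_pos[level] - candidate_pos[level] for level in anaphor_pos}\n# distances == {'utterance': 4, 'clause': 7, 'candidate': 14, 'morpheme': 40}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Corpus",
"sec_num": "2.1"
},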
{
"text": "To learn the coreference relations from our corpus we have chosen a C4.52-1ike machine learning algorithm without pruning. The training attributes consist of lexical word attributes (surface word, stem form, part-of-speech, semantic code, morphological attributes) applied to the anaphor, antecedent candidate, and clause predicate. In addition, features like attribute agreement, distance and frequency ratio are checked for each anaphor-candidate pair. The decision tree result consists of only two classes determining the coreference relation between the given anaphor-candidate pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreferenee Analysis",
"sec_num": "2.2"
},
{
"text": "During anaphora resolution the decision tree is used as a module determining the coreferential property of each anaphor-candidate pair. For each detected anaphoric expression a candidate list 3 is created. The decision tree filter is then successively applied to all anaphor-candidate pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreferenee Analysis",
"sec_num": "2.2"
},
{
"text": "If the decision tree results in the non-reference class, the candidate is judged as irrelevant and eliminated from the list of potential antecedents forming the input of the preference selection algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreferenee Analysis",
"sec_num": "2.2"
},
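{
"text": "A minimal sketch of the filtering step, assuming a scikit-learn-style classifier with a predict method and a hypothetical feature extractor pair_features; the actual system uses a C4.5-like tree trained on the attributes listed in section 2.2:\ndef filter_candidates(anaphor, candidates, tree, pair_features):\n    # Keep only candidates the tree classifies as coreferential (class 1).\n    kept = []\n    for cand in candidates:\n        features = pair_features(anaphor, cand)  # hypothetical feature extractor\n        if tree.predict([features])[0] == 1:\n            kept.append(cand)\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Analysis",
"sec_num": "2.2"
},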
{
"text": "The primary order of candidates is given by their word distance from the anaphoric expression. A straightforward preference strategy we could choose is the selection of the most recent candidate (MRC) as the antecedent, i.e., the first element of the candidate list. The success rate of this baseline test, however, is quite low as shown in section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},
{
"text": "But, this result does not mean that the recency factor is not important at all for the determination of saliency in this task. One reason for the bad performance is the application of the baseline test to the unfiltered set of-candidates resulting in the frequent selection of non-referential antecedents. Additionally, long-range references to candidates introduced first in the dialog are quite frequent in our data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},
{
"text": "2Cf. (Quinlan, 1993) 3A list of noun phrase candidates preceding the anaphor element in the current discourse.",
"cite_spans": [
{
"start": 5,
"end": 20,
"text": "(Quinlan, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},
{
"text": "An examination of our corpus gives rise to suspicion that similarities to references in our training data might be useful for the identification of those antecedents. Therefore, we propose a preference selection scheme based on the combination of distance and frequency information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},
{
"text": "First, utilizing statistical information about the frequency of coreferential anaphor-antecedent pairs (freq +) and non-referential pairs (freq-) extracted from the training data, we define the ratio of a given reference pair as follows4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},
{
"text": "I -6 : (freq + --freq-= O) ratio = ]req + -]req- freq+ 4-]req- : otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},
{
"text": "The value of ratio is in the range of [-1,-1-1], whereby ratio = -1 in the case of exclusive nonreferential relations and ratio --+1 in the case of exclusive coreferential relationships. In order for referential pairs occurring in the training corpus with ratio = 0 to be preferred to those without frequency information, we slightly decrease the ratio value of the latter ones by a factor 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},
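{
"text": "A minimal sketch of the ratio computation; the concrete value of the penalty δ is an assumption, since the paper does not state it:\nDELTA = 0.1  # assumed penalty for pairs unseen in the training data\n\ndef frequency_ratio(freq_pos, freq_neg):\n    # ratio lies in [-1, +1]; pairs never observed in training are slightly penalized.\n    if freq_pos == 0 and freq_neg == 0:\n        return -DELTA\n    return (freq_pos - freq_neg) / (freq_pos + freq_neg)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},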
{
"text": "As mentioned above the distance plays a crucial role in our selection method, too. We define a preference value pref by normalizing the ratio value according to the distance dist given by the primary order of the candidates in the discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},
{
"text": "The pref value is calculated for each candidate and the precedence ordered list of candidates is resorted towards the maximization of the preference factor. Similarly to the baseline test, the first element of the preferenced candidate list is chosen as the antecedent. The precedence order between candidates of the same confidence continues to remain so and thus a final decision is made in the case of a draw.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ratio pre f = dist",
"sec_num": null
},
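{
"text": "A minimal sketch of the preference selection over the filtered, recency-ordered candidate list; dist is simplified here to the 1-based position of the candidate in that list, which is an assumption about the exact distance measure:\ndef select_antecedent(anaphor, filtered_candidates, ratio_fn):\n    if not filtered_candidates:\n        return None\n    # pref = ratio / dist; max() returns the first maximal element, so ties\n    # keep the original recency order of the candidate list.\n    _, best = max(\n        enumerate(filtered_candidates, start=1),\n        key=lambda item: ratio_fn(anaphor, item[1]) / item[0],\n    )\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Selection",
"sec_num": "2.3"
},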
{
"text": "The robustness of our approach is ensured by the definition of a backup strategy which ultimately selects one candidate occurring in the history in the case that all antecedent candidates are rejected by the decision tree filter. For our experiments reported in section 3 we adopted the selection of the dialoginitial candidate as the backup strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ratio pre f = dist",
"sec_num": null
},
{
"text": "For the evaluation of the experimental results described in this section we use F-measure metrics calculated by the recall and precision of the system performance. Let ~]t denote the total number of tagged 4In order to keep the formula simple the frequency types are omitted (cf. Table 1) anaphor-antecedent pairs contained in the test data, El the number of these pairs passing the decision tree filter, and ~ the number of correctly selected antecedents.",
"cite_spans": [],
"ref_spans": [
{
"start": 280,
"end": 288,
"text": "Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "During evaluation we distinguish three classes: whether the correct antecedent is the first element of the candidate list (f), is in the candidate list (i), or is filtered out by the decision tree (o). The metrics F, recall (R) and precision (P) are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "Z =lfl 2xPxR P+R F= E, p= ~\"]c s Ej t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
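{
"text": "A minimal sketch of the evaluation metrics defined above, with the three counts Σt, Σf, and Σc passed in directly; the illustration at the end assumes 1000 tagged pairs, which is a made-up total chosen only to reproduce the DT+PREF rates reported in section 3.1:\ndef evaluate(total_tagged, passed_filter, correct):\n    recall = correct / total_tagged      # R = Σc / Σt\n    precision = correct / passed_filter  # P = Σc / Σf\n    f_measure = 2 * precision * recall / (precision + recall)\n    return recall, precision, f_measure\n\n# evaluate(1000, 882, 759) -> approximately (0.759, 0.860, 0.806)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},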
{
"text": "In order to prove the feasibility of our approach we compare the four preference selection methods listed in Figure 3 . First, the baseline test MRC selects the most recent candidate as the antecedent of an anaphoric expression. The necessity of the filter and preference selection components is shown by comparing the decision tree filter scheme DT (i.e., select the first element of the filtered candidate list) and preference scheme PREF (i.e., resort the complete candidate list) against our combined method DT+PREF (i.e., resort the filtered candidate list).",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 117,
"text": "Figure 3",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "5-way cross-validation experiments are conducted for pronominal anaphora resolution. The selected antecedents are checked against the annotated correct antecedents according to their morphosyntactic and semantic attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "tagged corpus~",
"sec_num": null
},
{
"text": "We use varied numbers of training dialogs (50-400) for the training of the decision tree and the extraction of the frequency information from the corpus. Open tests are conducted on 100 non-training dialogs whereas closed tests use the training data for evaluation. The results of the different preference selection methods are shown in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 345,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "The baseline test MRC succeeds in resolving only 43.9% of the most recent candidates correctly as the antecedent. The best F-measure rate for DT is 65.0% and for PREF the best rate is 78.1% whereas The PREF method seems to reach a plateau at around 300 dialogs which is borne out by the closed test reaching a maximum of 81.1%. Comparing the recall rate of DT (61.2%) and DT+PREF (75.9%) with the PREF result, we might conclude that the decision tree is not much of a help due to the sideeffect of 11.8% of the correct antecedents being filtered out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "However, in contrast to the PREF algorithm, the DT method improves continuously according to the training size implying a lack of training data for the identification of potential candidates. Despite the sparse data the filtering method proves to be very effective. The average number of all candidates (history) for a given anaphor in our open data is 39 candidates which is reduced to 11 potential candidates by the decision tree filter resulting in a reduction rate of 71.8% (closed test: 81%). The number of trivial selection cases (only one candidate) increases from 2.7% (history) to 11.4% (filter; closed test: 21%). On average, two candidates are skipped in the history to select the correct antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "Moreover, the precision rates of DT (69.4%) and DT+PREF (86.0%) show that the utilization of the decision tree filter in combination with the statistical preference selection gains a relative improvement of 9% towards the preference and 16% towards the filter method. Additionally, the system proves to be quite robust, because the decision tree filters out all candidates in only 1% of the open test samples. Selecting the candidate first introduced in the dialog as a backup strategy shows the best performance due to the frequent dialog initial references contained in our data. In our approach frequency ratio and distance information plays a crucial role not only for the identification of potential candidates during decision tree filtering, but also for the calculation of the preference value for each antecedent candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "In the first case these features are used independently to characterize the training samples whereas the preference selection method is based on the dependency between the frequency and distance values of the given anaphor-candidate pair in the context of the respective discourse. The relative importance of each factor is shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "First, we compare our decision tree filter DT to those methods that do not use either frequency (DTno-freq) or distance (DT-no-dist) information. Frequency information does appear to be more relevant for the identification of potential candidates than distance features extracted from the training corpus.",
"cite_spans": [
{
"start": 96,
"end": 107,
"text": "(DTno-freq)",
"ref_id": null
},
{
"start": 120,
"end": 132,
"text": "(DT-no-dist)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "The recall performance of DT-no-freq decreases by 7.6% whereas DT-no-dist is only 1.1% below the result of the original DT filter 5. Moreover, the number of correct antecedents not passing the filter increases by 5.1% (DT-no-freq) and 0.7% (DT-no-dist) .",
"cite_spans": [
{
"start": 240,
"end": 252,
"text": "(DT-no-dist)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "However, the distance factor proves to be quite important as a preference criterion. Relying only on the frequency ratio as the preference value, the recall performance of DT+PREF-no-dist is only 73.0%, down 2.9% of the original DT+PREF method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "The effectiveness of our approach is not only based on the usage of single antecedent indicators extracted from the corpus, but also on the combination of these features for the selection of the most preferable candidate in the context of the given discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Size",
"sec_num": "3.1"
},
{
"text": "Due to the characteristics of the underlying data used in these experiments a comparison involving absolute numbers to previous approaches gives us less evidence. However, the difficulty of our task can be verified according to the baseline experiment 5So far we have considered the decision tree filter just as a black-box tool. Further investigations on tree structures, however, should give us more evidence about the relative importance of the respective features. results reported in (Mitkov, 1998) . Resolving pronouns in English technical manuals to the most recent candidate achieved a success rate of 62.5%, whereas in our experiments only 43.9% of the most recent candidates are resolved correctly as the antecedent (cf. section 3).",
"cite_spans": [
{
"start": 489,
"end": 503,
"text": "(Mitkov, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "4"
},
{
"text": "Whereas knowledge-based systems like (Carbonell and Brown, 1988) and (Rich and LuperFoy, 1988) combining multiple resolution strategies are expensive in the cost of human effort at development time and limited ability to scale to new domains, more recent knowledge-poor approaches like (Kennedy and Boguraev, 1996) and (Mitkov, 1998) address the problem without sophisticated linguistic knowledge. Similarly to them we do not use any sentence parsing or structural analysis, but just rely on morphosyntactic and semantic word information.",
"cite_spans": [
{
"start": 37,
"end": 64,
"text": "(Carbonell and Brown, 1988)",
"ref_id": "BIBREF2"
},
{
"start": 69,
"end": 94,
"text": "(Rich and LuperFoy, 1988)",
"ref_id": "BIBREF12"
},
{
"start": 286,
"end": 314,
"text": "(Kennedy and Boguraev, 1996)",
"ref_id": "BIBREF6"
},
{
"start": 319,
"end": 333,
"text": "(Mitkov, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "4"
},
{
"text": "Moreover, clues are used about the grammatical and pragmatic functions of expressions as in (Grosz et al., 1995) , (Strube, 1998) , \"or (Azzam et al., 1998) as well as rule-based empirical approaches like (Nakaiwa and Shirai, 1996) or (Murata and Nagao, 1997) , to determine the most salient referent. These kinds of manually defined scoring heuristics, however, involve quite an amount of human intervention which is avoided in machine learning approaches.",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "(Grosz et al., 1995)",
"ref_id": "BIBREF5"
},
{
"start": 115,
"end": 129,
"text": "(Strube, 1998)",
"ref_id": "BIBREF13"
},
{
"start": 136,
"end": 156,
"text": "(Azzam et al., 1998)",
"ref_id": "BIBREF1"
},
{
"start": 205,
"end": 231,
"text": "(Nakaiwa and Shirai, 1996)",
"ref_id": "BIBREF9"
},
{
"start": 235,
"end": 259,
"text": "(Murata and Nagao, 1997)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "4"
},
{
"text": "As briefly noted in section 1, the work described in (Conolly et al., 1994) and (Aone and Bennett, 1995) differs from our approach according to the usage of the decision tree in the resolution task. In (Conolly et al., 1994) a decision tree is trained on a small number of 15 features concerning anaphor type, grammatical function, recency, morphosyntactic agreement and subsuming concepts. Given two anaphor-candidate pairs the system judges which is \"better\". However, due to the lack of a strong assumption on \"transitivity\" this sorting algorithm may be unable to find the \"best\" solution.",
"cite_spans": [
{
"start": 53,
"end": 75,
"text": "(Conolly et al., 1994)",
"ref_id": "BIBREF3"
},
{
"start": 80,
"end": 104,
"text": "(Aone and Bennett, 1995)",
"ref_id": "BIBREF0"
},
{
"start": 202,
"end": 224,
"text": "(Conolly et al., 1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "4"
},
{
"text": "Based on discourse markers extracted from lexical, syntactic, and semantic processing, the approach of (Aone and Bennett, 1995) uses 66 unary and binary attributes (lexical, syntactic, semantic, position, matching category, topic) during decision tree training. The confidence values returned from the pruned decision tree are utilized as a saliency measure for each anaphor-candidate pair in order to se-lect a single antecedent. However, we use dependency factors for preference selection which cannot be learned automatically because of the independent learning of specific features during decision tree training. Therefore, our decision tree is not applied directly to the task of preference selection, but only used as a filter to reduce the number of potential candidates for preference selection.",
"cite_spans": [
{
"start": 103,
"end": 127,
"text": "(Aone and Bennett, 1995)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "4"
},
{
"text": "In addition to salience preference, a statistically modeled iexical preference is exploited in (Dagan et al., 1995) by comparing the conditional probabilities of co-occurrence patterns given the occurrence of candidates. Experiments, however, are carried out on computer manual texts with mainly intrasentential references. This kind of data is also characterized by the avoidance of disambiguities and only short discourse units, which prohibits almost any long-range references. In contrast to this research, our results show that the distance factor in addition to corpus-based frequency information is quite relevant for the selection of the most salient candidate in our task.",
"cite_spans": [
{
"start": 95,
"end": 115,
"text": "(Dagan et al., 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "4"
},
{
"text": "In this paper we proposed a corpus-based anaphora resolution method combining an automatic learning algorithm for coreferential relationships with statistical preference selection in the discourse context. We proved the applicability of our approach to pronoun resolution achieving a resolution accuracy of 86.0% (precision) and 75.9% (recall) for Japanese pronouns despite the limitation of sparse data. Improvements in these results can be expected by increasing the training data as well as utilizing more sophisticated linguistic knowledge (structural analysis of utterances, etc.) and discourse information (extra-sentential knowledge, etc.) which should lead to a rise of the decision tree filter performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Preliminary experiments with nominal reference and ellipsis resolution showed promising results, too. We plan to incorporate this approach in multilingual machine translation which enables us to handle a variety of referential relations in order to improve the translation quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We would like to thank Hitoshi Nishimura (ATR) for his programming support and Hideki Tanaka (ATR) for helpful personal communications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating Automated and Manual Acquisition of Anaphora Resolution Strategies",
"authors": [
{
"first": "C",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Bennett",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. of the 33th ACL",
"volume": "",
"issue": "",
"pages": "122--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Aone and S. Bennett. 1995. Evaluating Auto- mated and Manual Acquisition of Anaphora Res- olution Strategies. In Proc. of the 33th ACL, p. 122-129.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluating a Focus-Based Approach to Anaphora Resolution",
"authors": [
{
"first": "S",
"middle": [],
"last": "Azzam",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Humphreys",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the 17th COLING",
"volume": "",
"issue": "",
"pages": "74--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Azzam, K. Humphreys, and R. Gaizauskas. 1998. Evaluating a Focus-Based Approach to Anaphora Resolution. In Proc. of the 17th COLING, p. 74- 78, Montreal, Canada.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Anaphora Resolution: A Multi-Strategy Approach",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Brown",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. of the 12th COLING",
"volume": "",
"issue": "",
"pages": "96--101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carbonell and R. Brown. 1988. Anaphora Res- olution: A Multi-Strategy Approach. In Proc. of the 12th COLING, p. 96-101, Budapest, Hungary.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Machine Learning Approach to Anaphoric Reference",
"authors": [
{
"first": "D",
"middle": [],
"last": "Conolly",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Day",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of NEMLAP'94",
"volume": "",
"issue": "",
"pages": "255--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Conolly, J. Burger, and D. Day. 1994. A Machine Learning Approach to Anaphoric Reference. In Proc. of NEMLAP'94, p. 255-261, Manchester.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Framework for Modeling the Local Coherence of Discourse",
"authors": [
{
"first": "B",
"middle": [],
"last": "Grosz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1995,
"venue": "Comp. Linguistics",
"volume": "21",
"issue": "2",
"pages": "203--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Grosz, A. Joshi, and S. Weinstein. 1995. A Framework for Modeling the Local Coherence of Discourse. Comp. Linguistics, 21(2):203-225.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Anaphora for Everyone: Pronominal Anaphora Resolution without a Parser",
"authors": [
{
"first": "C",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Boguraev",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the 16th COLING",
"volume": "",
"issue": "",
"pages": "113--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Kennedy and B. Boguraev. 1996. Anaphora for Everyone: Pronominal Anaphora Resolution without a Parser. In Proc. of the 16th COLING, p. 113-118, Copenhagen, Denmark.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Robust pronoun resolution with limited knowledge",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the 17th COLING",
"volume": "",
"issue": "",
"pages": "869--875",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proc. of the 17th COLING, p. 869-875, Montreal, Canada.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An Estimate of Referents of Pronouns in Japanese Sentences using Examples and Surface Expressions",
"authors": [
{
"first": "M",
"middle": [],
"last": "Murata",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Natural Language Processing",
"volume": "4",
"issue": "1",
"pages": "87--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Murata and M. Nagao. 1997. An Estimate of Referents of Pronouns in Japanese Sentences us- ing Examples and Surface Expressions. Journal of Natural Language Processing, 4(1):87-110.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Anaphora Resolution of Japanese Zero Pronouns with Deictic Reference",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nakaiwa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shirai",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of the 16th COLING",
"volume": "",
"issue": "",
"pages": "812--817",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Nakaiwa and S. Shirai. 1996. Anaphora Resolu- tion of Japanese Zero Pronouns with Deictic Ref- erence. In Proc. of the 16th COLING, p. 812-817, Copenhagen, Denmark.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "i993. C4.5: Programs for Machine Learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Quinlan. i993. C4.5: Programs for Machine Learning. Morgan Kaufmann.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An Architecture for Anaphora Resolution",
"authors": [
{
"first": "E",
"middle": [],
"last": "Rich",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Luperfoy",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. of the 2nd Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "18--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Rich and S. LuperFoy. 1988. An Architecture for Anaphora Resolution. In Proc. of the 2nd Con- ference on Applied Natural Language Processing, p. 18-23, Austin, TX.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Never Look Back: An Alternative to Centering",
"authors": [
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of the 17th COLING",
"volume": "",
"issue": "",
"pages": "1251--1257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Strube. 1998. Never Look Back: An Alternative to Centering. In Proc. of the 17th COLING, p. 1251-1257, Montreal, Canada.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Speech and language database for speech translation research in ATR",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Morimoto",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of Oriental CO-COSDA Workshop",
"volume": "",
"issue": "",
"pages": "148--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Takezawa, T. Morimoto, and Y. Sagisaka. 1998. Speech and language database for speech transla- tion research in ATR. In Proc. of Oriental CO- COSDA Workshop, p. 148-155.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"num": null,
"text": "Figure 1: System outline The resolution system in Figure 1 applies the coreference filter (cf. section 2.2) to all anaphorcandidate pairs (Ai + C/#) found in the discourse history. The detection of anaphoric expressions is out of the scope of this paper and just reduced to tags in our annotated corpus. Antecedent candidates are identified according to noun phrase part-of-speech tags. The reduced set (Ai + C/~) forms the input of the preference algorithm which selects the most salient candidate C~ as described in section 2.3. Preliminary experiments are conducted for the task of pronominal anaphora resolution and the performance of our system is evaluated in section 3.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Example dialogIn the example dialog between the hotel reception (r) and a customer (c) listed inFigure 2the proper noun (rl)\"5,#-# ~Y-~P [City Hotel]\" is tagged as the antecedent of the pronoun (cl)\"~-~5 ~9 [there]\" as well as the noun (cl)\"$ff-)l~[hotel]\". An example for ellipsis is the ommitted subject (c2)\"@[it]\" referring to (r2)\"Y~-x~P [spelling]\".",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "(r3)\"~-\u00a29 ~ [here]\"-*(cl)\" ~-\u00a29 ~ [there]\"--~(rl) \" 5\" ~-~\" \u2022 ~)P [City Hotel]\". Thus, the transitive closure between the anaphora and the first mention of the antecedent in the discourse history defines the set of positive examples, e.g. (~-~ ~9, 5,if-4 $~-)P), whereas the nominal candidates outside the transitive closure are considered negative examples, e.g. (~-~5 C9, ~ qu), for coreferential relationships.",
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"num": null,
"text": "Preference selection experiments",
"type_str": "figure"
},
"FIGREF6": {
"uris": null,
"num": null,
"text": "Training size versus performance the combination of both methods achieves a success rate of 80.6%.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"html": null,
"text": "Frequency data",
"type_str": "table",
"content": "<table><tr><td>anaphor</td><td colspan=\"4\">candidate freq T freq-ratio</td></tr><tr><td>~\" Ca r9</td><td>mS</td><td>o</td><td>11</td><td>-1</td></tr><tr><td>~-'~</td><td>tB</td><td>0</td><td colspan=\"2\">0 -o.1</td></tr><tr><td>=~</td><td>{shop}</td><td>33</td><td>33</td><td>0</td></tr><tr><td>{demonstratives}</td><td>{shop}</td><td>51</td><td colspan=\"2\">18 0.48</td></tr></table>"
},
"TABREF3": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">: Frequency and distance dependency</td><td/></tr><tr><td>DT-no-dist</td><td>DT-no-freq</td><td>DT+PREF</td><td>DT+PREF-no-dist</td></tr><tr><td>60.1</td><td>53.6</td><td>75.9</td><td>73.0</td></tr><tr><td>68.7</td><td>64.5</td><td>86.0</td><td>82.8</td></tr><tr><td>64.1</td><td>58.5</td><td>80.6</td><td>77.6</td></tr><tr><td>12.5</td><td>16.9</td><td>11.8</td><td>11.8</td></tr><tr><td>3.2 Feature Dependency</td><td/><td/><td/></tr></table>"
}
}
}
}