{
"paper_id": "C04-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:20:04.316169Z"
},
"title": "Classifying Ellipsis in Dialogue: A Machine Learning Approach",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "King's College London Strand",
"location": {
"postCode": "WC2R 2LS",
"settlement": "London",
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Jonathan",
"middle": [],
"last": "Ginzburg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "King's College London Strand",
"location": {
"postCode": "WC2R 2LS",
"settlement": "London",
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "King's College London Strand",
"location": {
"postCode": "WC2R 2LS",
"settlement": "London",
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a machine learning approach to bare sluice disambiguation in dialogue. We extract a set of heuristic principles from a corpus-based sample and formulate them as probabilistic Horn clauses. We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. Both learners perform well, yielding similar success rates of approx 90%. The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. (1) a. Cassie: I know someone who's a good kisser. Catherine: Who? [KP4, 512] 1 b. Sue: You were getting a real panic then.",
"pdf_parse": {
"paper_id": "C04-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a machine learning approach to bare sluice disambiguation in dialogue. We extract a set of heuristic principles from a corpus-based sample and formulate them as probabilistic Horn clauses. We then use the predicates of such clauses to create a set of domain independent features to annotate an input dataset, and run two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based system. Both learners perform well, yielding similar success rates of approx 90%. The results show that the features in terms of which we formulate our heuristic principles have significant predictive power, and that rules that closely resemble our Horn clauses can be learnt automatically from these features. (1) a. Cassie: I know someone who's a good kisser. Catherine: Who? [KP4, 512] 1 b. Sue: You were getting a real panic then.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The phenomenon of sluicing-bare wh-phrases that exhibit a sentential meaning-constitutes an empirically important construction which has been understudied from both theoretical and computational perspectives. Most theoretical analyses (e.g. (Ross, 1969; Chung et al., 1995) ), focus on embedded sluices considered out of any dialogue context. They rarely look at direct sluices-sluices used in queries to request further elucidation of quantified parameters (e.g. (1a)). With a few isolated exceptions, these analyses also ignore a class of uses we refer to (following (Ginzburg and Sag, 2001 ) (G&S)) as reprise sluices. These are used to request clarification of reference of a constituent in a partially understood utterance, as in (1b).",
"cite_spans": [
{
"start": 241,
"end": 253,
"text": "(Ross, 1969;",
"ref_id": "BIBREF12"
},
{
"start": 254,
"end": 273,
"text": "Chung et al., 1995)",
"ref_id": "BIBREF3"
},
{
"start": 569,
"end": 592,
"text": "(Ginzburg and Sag, 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our corpus investigation shows that the combined set of direct and reprise sluices constitutes more than 75% of all sluices in the British National Corpus (BNC). In fact, they make up approx. 33% of all wh-queries in the BNC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In previous work (Fern\u00e1ndez et al., to appear), we implemented G&S's analysis of direct sluices as part of an interpretation module in a dialogue system. In this paper we apply machine learning techniques to extract rules for sluice classification in dialogue.",
"cite_spans": [
{
"start": 17,
"end": 35,
"text": "(Fern\u00e1ndez et al.,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2 we present our corpus study of classifying sluices into dialogue types and discuss the methodology we used in this study. Section 3 analyses the distribution patterns we identify and considers possible explanations for these patterns. In Section 4 we identify a number of heuristic principles for classifying each sluice dialogue type and formulate these principles as probability weighted Horn clauses. In Section 5, we then use the predicates of these clauses as features to annotate our corpus samples of sluices, and run two machine learning algorithms on these data sets. The first machine learner used, SLIPPER, extracts optimised rules for identifying sluice dialogue types that closely resemble our Horn clause principles. The second, TiMBL, uses a memory-based machine learning procedure to classify a sluice by generalising over similar environments in which the sluice occurs in a training set. Both algorithms performed well, yielding similar success rates of approximately 90%. This suggests that the features in terms of which we formulated our heuristic principles for classifying sluices were well motivated, and both learning algorithms that we used are well suited to the task of dialogue act classification for fragments on the basis of these features. We finally present our conclusions and future work in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our corpus-based investigation of bare sluices has been performed using the \u223c10 million word dialogue transcripts of the BNC. The corpus of bare sluices has been constructed using SCoRE (Purver, 2001 ), a tool that allows one to search the BNC using regular expressions.",
"cite_spans": [
{
"start": 186,
"end": 199,
"text": "(Purver, 2001",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "2.1"
},
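To make the extraction step concrete, the following minimal Python sketch shows the kind of pattern a SCoRE-style regular-expression query over dialogue turns would encode: a turn consisting of nothing but a wh-word (optionally a which N phrase). The regex and the helper function are our own illustration, not part of SCoRE.

```python
import re

# A turn that is just a wh-word (optionally "which" plus a noun) and
# optional end punctuation; our stand-in for a SCoRE-style query.
SLUICE_RE = re.compile(
    r"^(what|who|when|where|why|how|which(?:\s+\w+)?)\s*[?!.]?$",
    re.IGNORECASE,
)

def find_bare_sluices(turns):
    """Return (turn index, wh-class) pairs for turns that are bare sluices."""
    hits = []
    for i, turn in enumerate(turns):
        m = SLUICE_RE.match(turn.strip())
        if m:
            hits.append((i, m.group(1).split()[0].lower()))
    return hits

turns = ["I'm leaving this school.", "When?", "You press the button.", "Which one?"]
print(find_bare_sluices(turns))  # [(1, 'when'), (3, 'which')]
```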
{
"text": "The dialogue transcripts of the BNC contain 5183 bare sluices (i.e. 5183 sentences consisting of just a wh-word). We distinguish between the following classes of bare sluices: what, who, when, where, why, how and which. Given that only 15 bare which were found, we have also considered sluices of the form which N. Including which N, the corpus contains a total of 5343 sluices, whose distribution is shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 410,
"end": 417,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "2.1"
},
{
"text": "The annotation was performed on two different samples of sluices extracted from the total found in the dialogue transcripts of the BNC. The samples were created by arbitrarily selecting 50 sluices of each class (15 in the case of which). The first sample included all instances of bare how and bare which found, making up a total of 365 sluices. The second sample contained 50 instances of the remaining classes, making up a total of 300 sluices. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "2.1"
},
{
"text": "To classify the sluices in the first sample of our sub-corpus we used the categories described below. The classification was done by 3 expert annotators (the authors) independently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Annotation Procedure",
"sec_num": "2.2"
},
{
"text": "Direct The utterer of the sluice understands the antecedent of the sluice without difficulty. The sluice queries for additional information that was explicitly or implicitly quantified away in the previous utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Annotation Procedure",
"sec_num": "2.2"
},
{
"text": "(2) Caroline: I'm leaving this school.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Annotation Procedure",
"sec_num": "2.2"
},
{
"text": "Lyne: When? [KP3, 538] Reprise The utterer of the sluice cannot understand some aspect of the previous utterance which the previous (or possibly not directly previous) speaker assumed as presupposed (typically a contextual parameter, except for why, where the relevant \"parameter\" is something like speaker intention or speaker justification).",
"cite_spans": [
{
"start": 12,
"end": 17,
"text": "[KP3,",
"ref_id": null
},
{
"start": 18,
"end": 22,
"text": "538]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Annotation Procedure",
"sec_num": "2.2"
},
{
"text": "(3) Geoffrey: What a useless fairy he was. Susan: Who? [KCT, 1753] Clarification The sluice is used to ask for clarification about the previous utterance as a whole.",
"cite_spans": [
{
"start": 55,
"end": 66,
"text": "[KCT, 1753]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Annotation Procedure",
"sec_num": "2.2"
},
{
"text": "(4) June: Only wanted a couple weeks. Ada: What? [KB1, 3312] Unclear It is difficult to understand what content the sluice conveys, possibly because the input is too poor to make a decision as to its resolution, as in the following example:",
"cite_spans": [
{
"start": 49,
"end": 54,
"text": "[KB1,",
"ref_id": null
},
{
"start": 55,
"end": 60,
"text": "3312]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Annotation Procedure",
"sec_num": "2.2"
},
{
"text": "(5) Unknown : <unclear> <pause> Josephine: Why? [KCN, 5007] After annotating the first sample, we decided to add a new category to the above set. The sluices in the second sample were classified according to a set of five categories, including the following: Wh-anaphor The antecedent of the sluice is a wh-phrase. ",
"cite_spans": [
{
"start": 48,
"end": 53,
"text": "[KCN,",
"ref_id": null
},
{
"start": 54,
"end": 59,
"text": "5007]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Annotation Procedure",
"sec_num": "2.2"
},
{
"text": "To evaluate the reliability of the annotation, we use the kappa coefficient (K) (Carletta, 1996) , which measures pairwise agreement between a set of coders making category judgements, correcting for expected chance agreement. 2 The agreement on the coding of the first sample of sluices was moderate (K = 52). 3 There were important differences amongst sluice classes: The lowest agreement was on the annotation for why (K = 29), what (K = 32) and how (K = 32), which suggests that these categories are highly ambiguous. Examination of the coincidence matrices shows that the largest confusions were between reprise and clarification in the case of what, and between direct and reprise for why and how. On the other hand, the agreement on classifying who was substantially higher (K = 71), with some disagreements between direct and reprise.",
"cite_spans": [
{
"start": 80,
"end": 96,
"text": "(Carletta, 1996)",
"ref_id": "BIBREF2"
},
{
"start": 227,
"end": 228,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability",
"sec_num": "2.3"
},
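The coefficient itself is straightforward to compute. Below is a minimal sketch for the two-coder case (the paper reports pairwise agreement across three coders, so K would be computed per coder pair, and the real samples are far larger); the toy labels are invented for illustration.

```python
from collections import Counter

def kappa(labels_a, labels_b):
    """Cohen's kappa for two coders: K = (P(A) - P(E)) / (1 - P(E))."""
    n = len(labels_a)
    p_agree = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each coder's marginal frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_agree - p_chance) / (1 - p_chance)

a = ["direct", "reprise", "reprise", "clarification", "direct"]
b = ["direct", "reprise", "direct", "clarification", "direct"]
print(round(kappa(a, b), 2))  # 0.69
```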
{
"text": "Agreement on the annotation of the 2nd sample was considerably higher although still not entirely convincing (K = 61). Overall agreement was improved in all classes, except for where and who. Agreement on what improved slightly (K = 39), and it was substantially higher on why (K = 52), when (K = 62) and which N (K = 64). Discussion Although the three coders may be considered experts, their training and familiarity with the data were not equal. This resulted in systematic differences in their annotations. Two of the coders (coder 1 and coder 2) had worked more extensively with the BNC dialogue transcripts and, crucially, with the definition of the categories to be applied. Leaving coder 3 out of the coder pool increases agreement very significantly: K = 70 in the first sample, and K = 71 in the second one. The agreement reached by the more expert pair of coders was high and stable. It provides a solid foundation for the current classification. It also indicates that it is not difficult to increase annotation agreement by relatively light training of coders.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reliability",
"sec_num": "2.3"
},
{
"text": "In this section we report the results obtained from the corpus study described in Section 2. The study shows that the distribution of readings is significantly different for each class of sluice. Subsection 3.2 outlines a possible explanation of such distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results: Distribution Patterns",
"sec_num": "3"
},
{
"text": "The distribution of interpretations for each class of sluice is shown in Table 2 . The distributions are presented as percentages of pairwise agreement (i.e. agreement between pairs of coders), leaving aside the unclear cases. This allows us to see the proportion made up by each interpretation for each sluice class, together with any correlations between sluice and interpretation. Distributions are similar over both samples, suggesting that corpus size is large enough to permit the identification of repeatable patterns. Table 2 reveals interesting correlations between sluice classes and preferred interpretations. The most common interpretation for what is clarification, making up 69% in the first sample and 66% in the second one. Why sluices have a tendency to be direct (57%, 83%). The sluices with the highest probability of being reprise are who (76%, 95%), which (96%), which N (88%, 80%) and where (75%, 69%). On the other hand, when (67%, 65%) and how (87%) have a clear preference for direct interpretations. ",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 526,
"end": 533,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Sluice/Interpretation Correlations",
"sec_num": "3.1"
},
{
"text": "In order to gain a complete perspective on sluice distribution in the BNC, it is appropriate to combine the (averaged) percentages in Table 2 with the absolute number of sluices contained in the BNC (see Table 1 ), as displayed in For instance, although more than 70% of why sluices are direct, the absolute number of why sluices that are reprise exceeds the total number of when sluices by almost 3 to 1. Explicating the distribution in Table 3 is important in order to be able to understand among other issues whether we would expect a similar distribution to occur in a Spanish or Mandarin dialogue corpus; similarly, whether one would expect this distribution to be replicated across different domains. Here we restrict ourselves to sketching an explanation of a couple of striking patterns exhibited in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 141,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 204,
"end": 211,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 438,
"end": 445,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 808,
"end": 815,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
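The claims made about Table 3 can be checked directly against its figures; the short sketch below reproduces the "almost 3 to 1" comparison between reprise why sluices and the total number of when sluices.

```python
# Figures from Table 3 (estimated absolute numbers of sluices in the BNC).
table3 = {
    ("what", "cla"): 2040, ("whichN", "rep"): 135,
    ("why", "dir"): 775,   ("when", "dir"): 90,
    ("what", "rep"): 670,  ("who", "dir"): 70,
    ("who", "rep"): 410,   ("where", "dir"): 70,
    ("why", "rep"): 345,   ("how", "dir"): 45,
    ("where", "rep"): 250, ("when", "rep"): 35,
    ("what", "dir"): 240,  ("whichN", "dir"): 24,
}

when_total = table3[("when", "dir")] + table3[("when", "rep")]  # 125
ratio = table3[("why", "rep")] / when_total
print(round(ratio, 2))  # 2.76, i.e. almost 3 to 1
```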
{
"text": "One such pattern is the low frequency of when sluices, particularly by comparison with what one might expect to be its close cousin-where; indeed the direct/reprise splits are almost mirror images for when v. where. Another very notable pattern, alluded to above, is the high frequency of why sluices. 4 The when v. where contrast provides one argument against (7), which is probably the null hypothesis w/r to the distribution of reprise sluices:",
"cite_spans": [
{
"start": 302,
"end": 303,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "(7) Frequency of antecedent hypothesis:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "The frequency of a class of reprise sluices is directly correlated with the frequency of the class of its possible antecedents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "Clearly locative expressions do not outnumber temporal ones and certainly not by the proportion the data in Table 3 would require to maintain (7). 5 (Purver, 2004) provides additional data related to this-clarification requests of all types in the BNC that pertain to nominal antecedents outnumber such CRs that relate to verbal antecedents by 40:1, which does not correlate with the relative frequency of nominal v. verbal antecedents (about 1.3:1).",
"cite_spans": [
{
"start": 149,
"end": 163,
"text": "(Purver, 2004)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "A more refined hypothesis, which at present we can only state quite informally, is (8):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "(8) Ease of grounding of antecedent hypothesis: The frequency of a class of reprise sluices is directly correlated with the ease with which the class of its possible antecedents can be grounded (in the sense of (Clark, 1996; Traum, 1994) ).",
"cite_spans": [
{
"start": 211,
"end": 224,
"text": "(Clark, 1996;",
"ref_id": "BIBREF4"
},
{
"start": 225,
"end": 237,
"text": "Traum, 1994)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "This latter hypothesis offers a route towards explaining the when v. where contrast. There are two factors at least which make grounding a temporal parameter significantly easier on the whole than grounding a locative parameter. The first factor is that conversationalists typically share a temporal ontology based on a clock and/or calendar. Although well structured locative ontologies do exist (e.g. grid points in a map), they are far less likely to be common currency. The natural ordering of clock/calendarbased ontologies reflected in grammatical devices such as sequence of tense is a second factor that favours temporal parameters over locatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "From this perspective, the high frequency of why reprises is not surprising. Such reprises query either the justification for an antecedent assertion or the goal of an antecedent query. Speakers usually do not specify these explicitly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "In fact, what requires explanation is why such 5 A rough estimate concerning the BNC can be extracted by counting the words that occur more than 1000 times. Of these approx 35k tokens are locative in nature and could serve as antecedents of where; the corresponding number for temporal expressions and when yields approx 80k tokens. These numbers are derived from a frequency list (Kilgarriff, 1998) of the demographic portion of the BNC.",
"cite_spans": [
{
"start": 381,
"end": 399,
"text": "(Kilgarriff, 1998)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "reprises do not occur even more frequently than they actually do. To account for this, one has to appeal to considerations of the importance of anchoring a contextual parameter. 6 A detailed explication of the distribution shown in Table 3 requires a detailed model of dialogue interaction. We have limited ourselves to suggesting that the distribution can be explicated on the basis of some quite general principles that regulate grounding.",
"cite_spans": [
{
"start": 178,
"end": 179,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Explaining the Frequency Hierarchy",
"sec_num": "3.2"
},
{
"text": "In this section we informally describe a set of heuristics for assigning an interpretation to bare sluices. In subsection 4.2, we show how our heuristics can be formalised as probabilistic sluice typing constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Heuristics for sluice disambiguation",
"sec_num": "4"
},
{
"text": "To maximise accuracy we have restricted ourselves to cases of three-way agreement among the three coders when considering the distribution patterns from which we obtained our heuristics. Looking at these patters we have arrived at the following general principles for resolving bare sluice types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the heuristics",
"sec_num": "4.1"
},
{
"text": "What The most likely interpretation is clarification. This seems to be the case when the antecedent utterance is a fragment, or when there is no linguistic antecedent. Reprise interpretations also provide a significant proportion (about 23%). If there is a pronoun (matching the appropriate semantic constraints) in the antecedent utterance, then the preferred interpretation is reprise:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the heuristics",
"sec_num": "4.1"
},
{
"text": "(9) Andy: I don't know how to do it. Nick: What? Garlic bread? [KPR, 1763] Why The interpretation of why sluices tends to be direct. However, if the antecedent is a non-declarative utterance, or a negative declarative, the sluice is likely to be a reprise. Who Sluices of this form show a very strong preference for reprise interpretation. In the majority of cases, the antecedent is either a proper name (11) , or a personal pronoun.",
"cite_spans": [
{
"start": 63,
"end": 74,
"text": "[KPR, 1763]",
"ref_id": null
},
{
"start": 405,
"end": 409,
"text": "(11)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the heuristics",
"sec_num": "4.1"
},
{
"text": "Which/Which N Both sorts of sluices exhibit a strong tendency to reprise. In the overwhelming majority of reprise cases for both which and which N, the antecedent is a definite description like 'the button' in (12).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the heuristics",
"sec_num": "4.1"
},
{
"text": "(12) Arthur: You press the button. June: Which one? [KSS, 144] Where The most likely interpretation of where sluices is reprise. In about 70% of the reprise cases, the antecedent of the sluice is a deictic locative pronoun like 'there' or 'here'. Direct interpretations are preferred when the antecedent utterance is declarative with no overt spatial location expression.",
"cite_spans": [
{
"start": 52,
"end": 57,
"text": "[KSS,",
"ref_id": null
},
{
"start": 58,
"end": 62,
"text": "144]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the heuristics",
"sec_num": "4.1"
},
{
"text": "(13) Pat: You may find something in there actually. Carole: Where? [KBH, 1817] When If the antecedent utterance is a declarative and there is no time-denoting expression other than tense, the sluice will be interpreted as direct, as in example 14. On the other hand, deictic temporal expressions like 'then' trigger reprise interpretations. ",
"cite_spans": [
{
"start": 67,
"end": 78,
"text": "[KBH, 1817]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Description of the heuristics",
"sec_num": "4.1"
},
{
"text": "The problem we are addressing is typing of bare sluice tokens in dialogue. This problem is analogous to part-of-speech tagging, or to dialogue act classification. We formulate our typing constraints as Horn clauses to achieve the most general and declarative expression of these conditions. The antecedent of a constraint uses predicates corresponding to dialogue relations, syntactic properties, and lexical content. The predicate of the consequent represents a sluice typing tag, which corresponds to a maximal type in the HPSG grammar that we used in implementing our dialogue system. Note that these constraints cannot be formulated at the level of the lexical entries of the wh-words since these distributions are specific to sluicing and not to non-elliptical wh-interrogatives. 7 As a first example, consider the following rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Constraints",
"sec_num": "4.2"
},
{
"text": "sluice(x), where(x), ant_utt(y,x), contains(y,'there') \u2192 reprise(x) [.78] This rule states that if x is a sluice construction with lexical head where, and its antecedent utterance (identified with the latest move in the dialogue) contains the word 'there', then x is a reprise sluice. Note that, as in a probabilistic context-free grammar (Booth, 1969), the rule is assigned a conditional probability. In the example above, .78 is the probability that the context described in the antecedent of the clause produces the interpretation specified in the consequent. 8 The following three rules are concerned with the disambiguation of why sluice readings. The structure of the rules is the same as before. In this case, however, the disambiguation is based on syntactic and semantic properties of the antecedent utterance as a whole (such as polarity or mood), instead of focusing on a particular lexical item contained in that utterance.",
"cite_spans": [
{
"start": 47,
"end": 52,
"text": "[.78]",
"ref_id": null
},
{
"start": 318,
"end": 331,
"text": "(Booth, 1969)",
"ref_id": "BIBREF1"
},
{
"start": 543,
"end": 544,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Constraints",
"sec_num": "4.2"
},
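A probability-weighted Horn clause of this kind has a direct computational reading. The sketch below gives one plausible Python encoding: the rule bodies mirror the paper's rules, but the feature-dictionary representation and the matching logic are our own assumptions, not the authors' implementation.

```python
# Each rule: if every predicate in the body holds of the sluice and its
# antecedent utterance, propose a reading with the given probability.
RULES = [
    # sluice(x), where(x), ant_utt(y,x), contains(y,'there') -> reprise(x) [.78]
    ({"sluice": "where", "contains_there": True}, "reprise", 0.78),
    # sluice(x), why(x), ant_utt(y,x), non_decl(y) -> reprise(x) [.93]
    ({"sluice": "why", "mood": "non_decl"}, "reprise", 0.93),
    # sluice(x), why(x), ant_utt(y,x), pos_decl(y) -> direct(x) [.95]
    ({"sluice": "why", "mood": "decl", "polarity": "pos"}, "direct", 0.95),
]

def classify(instance):
    """Return (reading, probability) pairs for every rule whose body matches."""
    return [(reading, p) for body, reading, p in RULES
            if all(instance.get(k) == v for k, v in body.items())]

x = {"sluice": "where", "contains_there": True, "mood": "decl", "polarity": "pos"}
print(classify(x))  # [('reprise', 0.78)]
```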
{
"text": "To evaluate our heuristics, we applied machine learning techniques to our corpus data. Our aim was to evaluate the predictive power of the features observed and to test whether the intuitive constraints formulated in the form of Horn clause rules could be learnt automatically from these features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying Machine Learning",
"sec_num": "5"
},
{
"text": "We use a rule-based learning algorithm called SLIPPER (for Simple Learner with Iterative Pruning to Produce Error Reduction). SLIP-PER (Cohen and Singer, 1999) combines the separate-and-conquer approach used by most rule learners with confidence-rated boosting to create a compact rule set.",
"cite_spans": [
{
"start": 135,
"end": 159,
"text": "(Cohen and Singer, 1999)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SLIPPER",
"sec_num": "5.1"
},
{
"text": "The output of SLIPPER is a weighted rule set, in which each rule is associated with a confidence level. The rule builder is used to find a rule set that separates each class from the remaining classes using growing and pruning techniques. To classify an instance x, one computes the sum of the confidences that cover x: if the sum is greater than zero, the positive class is predicted. For each class, the only rule with a negative confidence rating is a single default rule, which predicts membership in the remaining classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SLIPPER",
"sec_num": "5.1"
},
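The following sketch illustrates this confidence-summing scheme for a single class. The two positive weights are taken from the SLIPPER rules quoted in Section 5.3 below; the negative default-rule weight is invented for illustration.

```python
# Weighted rules for the class "reprise"; the last rule stands in for the
# default rule with negative confidence that predicts the remaining classes.
reprise_rules = [
    (lambda x: x["deictic"] == "yes", +3.32),
    (lambda x: x["mood"] == "non_decl" and x["sluice"] == "why", +1.66),
    (lambda x: True, -0.50),  # default rule weight: our assumption
]

def predict_reprise(x):
    """Predict membership if the summed confidence of covering rules is > 0."""
    return sum(w for cond, w in reprise_rules if cond(x)) > 0

x = {"deictic": "yes", "mood": "decl", "sluice": "where"}
print(predict_reprise(x))  # True: the deictic rule alone outweighs the default
```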
{
"text": "We decided to use SLIPPER for two main reasons: (1) it generates transparent, relatively compact rule sets that can provide interesting insights into the data, and (2) its if-then rules closely resemble our Horn clause constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SLIPPER",
"sec_num": "5.1"
},
{
"text": "To generate the input data we took all threeway agreement instances plus those instances where there is agreement between coder 1 and coder 2, leaving out cases classified as unclear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "We reclassified 9 instances in the first sample as wh-anaphor, and also included these data. 9 The total data set includes 351 datapoints. These were annotated according to the set of features shown in Table 4 . Table 4 : Features We use a total of 11 features. All features are nominal. Except for the sluice feature that indicates the sluice type, they are all boolean, i.e. they can take as value either yes or no. The features mood, polarity and frag refer to syntactic and semantic properties of the antecedent utterance as a whole. The remaining features, on the other hand, focus on a particular lexical item or construction contained in such utterance. They will take yes as a value if this element or construction exists and, it matches the semantic restrictions imposed by the sluice type. The feature wh will take a yes value only if there is a wh-word that is identical to the sluice type. Unknown or irrelevant values are indicated by a question mark. This allows us to express, for instance, that the presence of a proper name is irrelevant to determine the interpretation of a where sluice, while it is crucial when the sluice type is who. The feature overt takes no as value when there is no overt antecedent expression. It takes yes when there is an antecedent expression not captured by any other feature, and it is considered irrelevant (question mark value) when there is an antecedent expression defined by another feature.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
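As an illustration of this encoding, here is how a single datapoint might look; the values are our own reading of example (13) ("You may find something in there actually." / "Where?"), not the authors' gold annotation.

```python
FEATURES = ["sluice", "mood", "polarity", "frag", "quant", "deictic",
            "proper_n", "pro", "def_desc", "wh", "overt"]

# Hypothetical encoding of example (13); "?" marks unknown/irrelevant values.
datapoint = {
    "sluice": "where", "mood": "decl", "polarity": "pos", "frag": "no",
    "quant": "no", "deictic": "yes",  # 'there' in the antecedent utterance
    "proper_n": "?", "pro": "?", "def_desc": "?", "wh": "no", "overt": "?",
    "label": "reprise",
}
print([datapoint[f] for f in FEATURES])
```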
{
"text": "We performed a 10-fold cross-validation on the total data set, obtaining an average success rate of 90.32%. Using leave-one-out cross-validation we obtained an average success rate of 84.05%. For the holdout method, we held over 100 instances as a testing data, and used the reminder (251 datapoints) for training. This yielded a success rate of 90%. Recall, precision and f-measure values are reported in Table 5 Using the holdout procedure, SLIPPER generated a set of 23 rules: 4 for direct, 13 for reprise, 1 for clarification and 1 for wh-anaphor, plus 4 default rules, one for each class. All features are used except for frag, which indicates that this feature does not play a significant role in determining the correct reading. The following rules are part of the rule set generated by SLIPPER:",
"cite_spans": [],
"ref_spans": [
{
"start": 406,
"end": 413,
"text": "Table 5",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Accuracy Results",
"sec_num": "5.3"
},
{
"text": "direct not reprise|clarification|wh anaphor :-overt=no, polarity=pos (+1.06296)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Results",
"sec_num": "5.3"
},
{
"text": "reprise not direct|clarification|wh anaphor :-deictic=yes (+3.31703)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Results",
"sec_num": "5.3"
},
{
"text": "reprise not direct|clarification|wh anaphor :-mood=non decl, sluice=why (+1.66429)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Results",
"sec_num": "5.3"
},
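The three evaluation regimes above (10-fold, leave-one-out, and a 100-instance holdout) can be made explicit with standard splitters; the sketch below uses scikit-learn purely for illustration, which is not the tooling the authors used.

```python
from sklearn.model_selection import KFold, LeaveOneOut, train_test_split

data = list(range(351))  # stand-in for the 351 feature-annotated sluices

tenfold = sum(1 for _ in KFold(n_splits=10, shuffle=True,
                               random_state=0).split(data))
loo = sum(1 for _ in LeaveOneOut().split(data))
train, test = train_test_split(data, test_size=100, random_state=0)
print(tenfold, loo, len(train), len(test))  # 10 351 251 100
```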
{
"text": "Although SLIPPER seems to be especially well suited for the task at hand, we decided to run a different learning algorithm on the same training and testing data sets and compare the results obtained. For this experiment we used TiMBL, a memory-based learning algorithm developed at Tilburg University (Daelemans et al., 2003) . As with all memory-based machine learners, TiMBL stores representations of instances from the training set explicitly in memory. In the prediction phase, the similarity between a new test instance and all examples in memory is computed using some distance metric. The system will assign the most frequent category within the set of most similar examples (the k-nearest neighbours). As a distance metric we used information-gain feature weighting, which weights each feature according to the amount of information it contributes to the correct class label. The results obtained are very similar to the previous ones. TiMBL yields a success rate of 89%. Recall, precision and f-measure values are shown in Table 6 . As expected, the feature that received a lowest weighting was frag. ",
"cite_spans": [
{
"start": 301,
"end": 325,
"text": "(Daelemans et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 1032,
"end": 1039,
"text": "Table 6",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Comparing SLIPPER and TiMBL",
"sec_num": "5.4"
},
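A toy version of the memory-based scheme makes the information-gain weighting concrete: a mismatch on a high-gain feature pushes a stored example further away than a mismatch on a low-gain one such as frag. The gain values and examples below are invented.

```python
from collections import Counter

IG = {"sluice": 0.9, "deictic": 0.7, "mood": 0.4, "frag": 0.05}  # made-up weights

def distance(a, b):
    """Overlap distance: sum the IG weights of the mismatching features."""
    return sum(w for f, w in IG.items() if a[f] != b[f])

def knn_classify(x, memory, k=3):
    """Majority label among the k stored examples nearest to x."""
    neighbours = sorted(memory, key=lambda ex: distance(x, ex[0]))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

memory = [
    ({"sluice": "where", "deictic": "yes", "mood": "decl", "frag": "no"}, "reprise"),
    ({"sluice": "where", "deictic": "no",  "mood": "decl", "frag": "no"}, "direct"),
    ({"sluice": "why",   "deictic": "no",  "mood": "decl", "frag": "no"}, "direct"),
]
x = {"sluice": "where", "deictic": "yes", "mood": "decl", "frag": "yes"}
print(knn_classify(x, memory, k=1))  # 'reprise'
```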
{
"text": "In this paper we have presented a machine learning approach to bare sluice classification in dialogue using corpus-based empirical data. From these data, we have extracted a set of heuristic principles for sluice disambiguation and formulated such principles as probability weighted Horn clauses. We have then used the predicates of these clauses as features to annotate an input dataset, and ran two different machine learning algorithms: SLIPPER, a rule-based learning algorithm, and TiMBL, a memory-based learning system. SLIPPER has the advantage of generating transparent rules that closely resemble our Horn clause constraints. Both algorithms, however, perform well, yielding to similar success rates of approximately 90%. This shows that the features we used to formulate our heuristic principles were well motivated, except perhaps for the feature frag, which does not seem to have a signifi-cant predictive power. The two algorithms we used seem to be well suited to the task of sluice classification in dialogue on the basis of these features. In the future we will attempt to construct an automatic procedure for annotating a dialogue corpus with the features presented here, to which both machine learning algorithms apply.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Further Work",
"sec_num": "6"
},
{
"text": "This notation indicates the British National Corpus file (KP4) and the sluice sentence number (512).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "K = P (A) \u2212 P (E)/1 \u2212 P (E), where P(A) is the proportion of actual agreements and P(E) is the proportion of expected agreement by chance, which depends on the number of relative frequencies of the categories under test. The denominator is the total proportion less the proportion of chance expectation.3 All values are shown as percentages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As we pointed out above, sluices are a common means of asking wh-interrogatives; in the case of whyinterrogatives, this is even stronger-close to 50% of all such interrogatives in the BNC are sluices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Another factor is the existence of default strategies for resolving such parameters, e.g. assuming that the question asked transparently expresses the querier's primary goal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Thus, whereasTable 2shows that approx. 70% of who-sluices are reprise, this is clearly not the case for non-elliptical who-interrogatives. For instance, the KB7 block in the BNC has 33 non-elliptical whointerrogatives. Of these at most 3 serve as reprise utterances.8 These probabilities have been extracted manually from the three-way agreement data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We reclassified those instances that had motivated the introduction of the wh-anaphor category for the second sample. Given that there were no disagreements involving this category, such reclassification was straightforward.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "] then I realised that it was Fennite Katherine: Who?",
"authors": [
{
"first": "",
"middle": [],
"last": "Patrick",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick: [...] then I realised that it was Fennite Katherine: Who? [KCV, 4694] References",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Probabilistic representation of formal languages",
"authors": [
{
"first": "T",
"middle": [],
"last": "Booth",
"suffix": ""
}
],
"year": 1969,
"venue": "IEEE Conference Record of the 1969 Tenth Annual Symposium of Switching and Automata Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Booth. 1969. Probabilistic representation of formal languages. In IEEE Conference Record of the 1969 Tenth Annual Symposium of Switching and Automata Theory.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Assessing agreement on classification tasks: the kappa statistics",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "249--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carletta. 1996. Assessing agreement on clas- sification tasks: the kappa statistics. Compu- tational Linguistics, 2(22):249-255.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sluicing and logical form",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Ladusaw",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mccloskey",
"suffix": ""
}
],
"year": 1995,
"venue": "Natural Language Semantics",
"volume": "3",
"issue": "",
"pages": "239--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Chung, W. Ladusaw, and J. McCloskey. 1995. Sluicing and logical form. Natural Lan- guage Semantics, 3:239-282.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using Language",
"authors": [
{
"first": "H",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. H. Clark. 1996. Using Language. Cambridge University Press, Cambridge.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A simple, fast, and effective rule learner",
"authors": [
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the 16th National Conference on AI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Cohen and Y. Singer. 1999. A simple, fast, and effective rule learner. In Proc. of the 16th National Conference on AI.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "TiMBL: Tilburg Memory Based Learner, Reference Guide",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Van Der Sloot",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bosch",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Daelemans, J. Zavrel, K. van der Sloot, and A. van den Bosch. 2003. TiMBL: Tilburg Memory Based Learner, Reference Guide. Technical Report ILK-0310, U. of Tilburg.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SHARDS: Fragment resolution in dialogue",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ginzburg",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Gregory",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": null,
"venue": "Computing Meaning",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Fern\u00e1ndez, J. Ginzburg, H. Gregory, and S. Lappin. (to appear). SHARDS: Frag- ment resolution in dialogue. In H. Bunt and R. Muskens, editors, Computing Meaning, volume 3. Kluwer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Interrogative Investigations. CSLI Publications",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ginzburg",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Sag",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ginzburg and I. Sag. 2001. Interrogative Investigations. CSLI Publications, Stanford, California.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "BNC Database and Word Frequency Lists",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Kilgarriff. 1998. BNC Database and Word Frequency Lists. www.itri.bton.ac.uk/ \u223c Adam.Kilgarriff/ bnc-readme.html.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SCoRE: A tool for searching the BNC",
"authors": [
{
"first": "M",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Purver. 2001. SCoRE: A tool for searching the BNC. Technical Report TR-01-07, Dept. of Computer Science, King's College London.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Theory and Use of Clarification in Dialogue",
"authors": [
{
"first": "M",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Purver. 2004. The Theory and Use of Clari- fication in Dialogue. Ph.D. thesis, King's Col- lege, London, forthcoming.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Guess who",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ross",
"suffix": ""
}
],
"year": 1969,
"venue": "Proc. of the 5th annual Meeting of the Chicago Linguistics Society",
"volume": "",
"issue": "",
"pages": "252--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ross. 1969. Guess who. In Proc. of the 5th annual Meeting of the Chicago Linguistics So- ciety, pages 252-286, Chicago. CLS.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Computational Theory of Grounding in Natural Language Conversation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Traum",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Traum. 1994. A Computational Theory of Grounding in Natural Language Conversa- tion. Ph.D. thesis, University of Rochester, Department of Computer Science, Rochester.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "sluice(x), why(x), ant utt(y,x), non decl(y) \u2192 reprise(x) [.93] sluice(x), why(x), ant utt(y,x), pos decl(y) \u2192 direct(x) [.95] sluice(x), why(x), ant utt(y,x), neg decl(y) \u2192 reprise(x) [.40]",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "antecedent utterance polarity polarity of the antecedent utterance frag whether the antecedent utterance is a fragment quant presence of a quantified expression deictic presence of a deictic pronoun proper n presence of a proper name pro presence of a pronoun def desc presence of a definite description wh presence of a wh word overt presence of any other potential antecedent expression",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Total of sluices in the BNC"
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "Distributions as pairwise agr percentages"
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>what cla</td><td>2040</td><td colspan=\"2\">whichNrep 135</td></tr><tr><td>why dir</td><td>775</td><td>when dir</td><td>90</td></tr><tr><td>whatrep</td><td>670</td><td>who dir</td><td>70</td></tr><tr><td>whorep</td><td>410</td><td>where dir</td><td>70</td></tr><tr><td>whyrep</td><td>345</td><td>how dir</td><td>45</td></tr><tr><td>whererep</td><td>250</td><td>whenrep</td><td>35</td></tr><tr><td>what dir</td><td>240</td><td>whichN dir</td><td>24</td></tr></table>",
"html": null,
"num": null,
"text": ""
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
},
"TABREF9": {
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">category recall precision f-measure</td></tr><tr><td>direct</td><td>96.67</td><td>85.29</td><td>90.62</td></tr><tr><td>reprise</td><td>88.89</td><td>94.12</td><td>91.43</td></tr><tr><td>clarification</td><td>83.33</td><td>71.44</td><td>76.92</td></tr><tr><td>wh anaphor</td><td>80.00</td><td>100</td><td>88.89</td></tr></table>",
"html": null,
"num": null,
"text": "."
},
"TABREF10": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
},
"TABREF12": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "TiMBL -Results"
}
}
}
}