{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:49:56.010996Z"
},
"title": "Using Machine Learning for Non-Sentential Utterance Classification",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "King's College London",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Jonathan",
"middle": [],
"last": "Ginzburg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "King's College London",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Shalom",
"middle": [],
"last": "Lappin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "King's College London",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we investigate the use of machine learning techniques to classify a wide range of non-sentential utterance types in dialogue, a necessary first step in the interpretation of such fragments. We train different learners on a set of contextual features that can be extracted from PoS information. Our results achieve an 87% weighted f-score-a 25% improvement over a simple rule-based algorithm baseline.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we investigate the use of machine learning techniques to classify a wide range of non-sentential utterance types in dialogue, a necessary first step in the interpretation of such fragments. We train different learners on a set of contextual features that can be extracted from PoS information. Our results achieve an 87% weighted f-score-a 25% improvement over a simple rule-based algorithm baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Non-Sentential Utterances (NSUs)-fragmentary utterances that convey a full sentential meaningare a common phenomenon in spoken dialogue. Because of their elliptical form and their highly context-dependent meaning, NSUs are a challenging problem for both linguistic theories and implemented dialogue systems. Although perhaps the most prototypical NSU type are short answers like (1), recent corpus studies (Fern\u00e1ndez and Ginzburg, 2002; Schlangen, 2003) have shown that other less studied types of fragments-each with its own resolution constraints-are also pervasive in real conversations.",
"cite_spans": [
{
"start": 406,
"end": 436,
"text": "(Fern\u00e1ndez and Ginzburg, 2002;",
"ref_id": "BIBREF5"
},
{
"start": 437,
"end": 453,
"text": "Schlangen, 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Kevin: Which sector is the lawyer in?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unknown: Tertiary. [KSN, 1776 [KSN, -1777 Arguably the most important issue in the processing of NSUs concerns their resolution, i.e. the recovery of a full clausal meaning from a form which is incomplete. However, given their elliptical form, NSUs are very often ambiguous. Hence, a necessary first step towards this final goal is the identification of the right NSU type, which will determine the appropriate resolution procedure.",
"cite_spans": [
{
"start": 19,
"end": 29,
"text": "[KSN, 1776",
"ref_id": null
},
{
"start": 30,
"end": 41,
"text": "[KSN, -1777",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the work described in this paper we address this latter issue, namely the classification of NSUs, using a machine learning approach. 2 The techniques we use are similar to those applied by (Fern\u00e1ndez et al., 2004) to disambiguate between the different interpretations of bare wh-phrases. Our investigation, however, takes into account a much broader range of NSU types, providing a wide coverage NSU classification system.",
"cite_spans": [
{
"start": 136,
"end": 137,
"text": "2",
"ref_id": null
},
{
"start": 192,
"end": 216,
"text": "(Fern\u00e1ndez et al., 2004)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We identify a small set of features, easily extractable from PoS information, that capture the contextual properties that are relevant for NSU classification. We then use several machine learners trained on these features to predict the most likely NSU class, achieving an 87% weighted f-score. We evaluate our results against a baseline system that uses an algorithm with four rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows. First we introduce the taxonomy of NSU classes we adopt. In section 3 we explain how the empirical data has been collected and which restrictions have been adopted in selecting the data set to be used in our experiments. The features we use to characterise such data, and the generation process of the data set are presented in section 4. Next we introduce some very simple algorithms used to derive a baseline for our NSU classification task, and after that present the machine learners used in our experiments. In section 7 we report the results obtained, evaluate them against the baseline systems, and discuss the results of a second experiment performed on a data set created by dropping one of the restrictions adopted before. Finally, in Section 8, we offer conclusions and some pointers for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both plain affirmative answers and rejections are strongly indicated by lexical material, characterised by the presence of a \"yes\" word (\"yeah\", \"aye\", \"yep\"...) or the negative interjection \"no\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Typically, repeated affirmative answers are responses to polar questions. They answer affirmatively by repeating a fragment of the query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Repeated Affirmative Answer",
"sec_num": null
},
{
"text": "(13) A: Did you shout very loud? B: Very loud, yes. [JJW, [571] [572] Helpful Rejection The context of helpful rejections can be either a polar question or an assertion. In the first case, they are negative answers that provide an appropriate alternative (14). As responses to assertions, they correct some piece of information in the previous utterance (15). ",
"cite_spans": [
{
"start": 52,
"end": 57,
"text": "[JJW,",
"ref_id": null
},
{
"start": 58,
"end": 63,
"text": "[571]",
"ref_id": null
},
{
"start": 64,
"end": 69,
"text": "[572]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Repeated Affirmative Answer",
"sec_num": null
},
{
"text": "To generate the data for our experiments, we collected a corpus of NSUs extracted from the dialogue transcripts of the British National Corpus (BNC) (Burnard, 2000) .",
"cite_spans": [
{
"start": 149,
"end": 164,
"text": "(Burnard, 2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "3"
},
{
"text": "Our corpus of NSUs includes and extends the subcorpus used in (Fern\u00e1ndez and Ginzburg, 2002) . It We found a total of 1285 NSUs. Of these, 1269 were labelled according to the typology presented in the previous section. We also annotated each of these NSUs with the sentence number of its antecedent utterance. The remaining 16 instances did not fall in any of the categories of the taxonomy. They were labelled as 'Other' and were not used in the experiments. The labelling of the entire corpus of NSUs was done by one expert annotator. To assess the reliability of the taxonomy we performed a pilot study with two additional, non-expert annotators. These annotated a total of 50 randomly selected instances (containing a minimum of 2 instances of each NSU class as labelled by the expert annotator) with the classes in the taxonomy. The agreement obtained by the three annotators is reasonably good, yielding a kappa score of 0.76. The non-expert annotators were also asked to identify the antecedent sentence of each NSU. Using the expert annotation as a gold standard, they achieve 96% and 92% accuracy in this task.",
"cite_spans": [
{
"start": 62,
"end": 92,
"text": "(Fern\u00e1ndez and Ginzburg, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "3"
},
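As an illustration of the agreement figures above, a minimal sketch follows, assuming pairwise Cohen's kappa averaged over annotator pairs (the exact kappa formulation is not specified here); the class labels and antecedent numbers are invented for the example.
```python
# Sketch: inter-annotator agreement on NSU class labels, assuming pairwise
# Cohen's kappa averaged over annotator pairs; the exact formulation used
# for the reported 0.76 is not specified, and the data here are invented.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score, accuracy_score

expert  = ["ShortAns", "AffAns", "Reject", "Sluice", "ShortAns"]
annot_a = ["ShortAns", "AffAns", "Reject", "CE",     "ShortAns"]
annot_b = ["ShortAns", "AffAns", "AffAns", "Sluice", "ShortAns"]

pairs = combinations([expert, annot_a, annot_b], 2)
kappas = [cohen_kappa_score(x, y) for x, y in pairs]
print("mean pairwise kappa:", sum(kappas) / len(kappas))

# Antecedent identification scored as accuracy against the expert gold standard.
gold_antecedent    = [1776, 42, 101, 7, 530]
annot_a_antecedent = [1776, 42, 100, 7, 530]
print("antecedent accuracy:", accuracy_score(gold_antecedent, annot_a_antecedent))
```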
{
"text": "The data used in the experiments was selected from our classified corpus of NSUs (1269 instances as labelled by the expert annotator) following two simplifying restrictions. First, we restrict our experi-feature description values nsu cont content of the NSU (either prop or question) p,q wh nsu presence of a wh word in the NSU yes,no aff neg presence of a \"yes\"/\"no\" word in the NSU yes,no,e(mpty) lex presence of different lexical items in the NSU p mod,f mod,mod,conj,e(mpty)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NSU class Total",
"sec_num": null
},
{
"text": "ant mood mood of the antecedent utterance decl,n decl wh ant presence of a wh word in the antecedent yes,no finished (un)finished antecedent fin,unf repeat repeated words in NSU and antecedent 0-3 parallel repeated tag sequences in NSU and antecedent 0-3 Table 2 : Features and values ments to those NSUs whose antecedent is the immediately preceding utterance. This restriction, which makes the feature annotation task easier, does not pose a significant coverage problem, given that the immediately preceding utterance is the antecedent for the vast majority of NSUs (88%). The set of all NSUs classified according to the taxonomy, whose antecedent is the immediately preceding utterance contains a total of 1109 datapoints. Table 1 shows the frequency distribution for NSU classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 2",
"ref_id": null
},
{
"start": 727,
"end": 734,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "NSU class Total",
"sec_num": null
},
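For concreteness, the nine features of Table 2 can be pictured as a single record per NSU instance; the sketch below uses hypothetical field names and an invented example value assignment.
```python
# Sketch: one NSU instance encoded with the nine features of Table 2.
# Field names follow the table; the example values are illustrative only.
from dataclasses import dataclass

@dataclass
class NSUInstance:
    nsu_cont: str   # content of the NSU: "p" (proposition) or "q" (question)
    wh_nsu: str     # wh word in the NSU: "yes" / "no"
    aff_neg: str    # "yes"/"no" word in the NSU: "yes" / "no" / "e"
    lex: str        # lexical cue: "p_mod" / "f_mod" / "mod" / "conj" / "e"
    ant_mood: str   # mood of the antecedent: "decl" / "n_decl"
    wh_ant: str     # wh word in the antecedent: "yes" / "no"
    finished: str   # antecedent (un)finished: "fin" / "unf"
    repeat: int     # repeated words in NSU and antecedent: 0-3
    parallel: int   # repeated tag sequences in NSU and antecedent: 0-3
    nsu_class: str  # target label, e.g. "ShortAns"

example = NSUInstance("p", "no", "e", "e", "decl", "yes", "fin", 0, 0, "ShortAns")
print(example)
```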
{
"text": "The second restriction concerns the instances classified as plain acknowledgements. Taking the risk of ending up with a considerably smaller data set, we decided to leave aside this class of feedback NSUs, given that (i) they make up more than 50% of our sub-corpus leading to a data set with very skewed distributions, and (ii) a priori, they seem one of the easiest types to identify (a hypothesis that was confirmed after a second experiment-see below). We therefore exclude plain acknowledgements and concentrate on a more interesting and less skewed data set containing all remaining NSU classes. This makes up a total of 527 data points (1109 \u2212 582). In section 7.3 we will compare the results obtained using this restricted data set with those of a second experiment in which plain acknowledgements are incorporated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NSU class Total",
"sec_num": null
},
{
"text": "In this section we present the features used in our experiments and describe the automatic procedure that we employed to annotate the 527 data points with these features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We identify three types of properties that play an important role in the NSU classification task. The first one has to do with semantic, syntactic and lexical properties of the NSUs themselves. The second one refers to the properties of its antecedent utterance. The third concerns relations between the antecedent and the fragment. Table 2 shows the set of nine features used in our experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "NSU features A set of four features are related to properties of the NSUs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "These are nsu cont,wh nsu,aff neg and lex. We expect the feature nsu cont to distinguish between question-denoting and proposition-denoting NSUs. The feature wh nsu is primarily introduced to identify Sluices. The features aff neg and lex signal the presence of particular lexical items. They include a value (e)mpty which allows us to encode the absence of the relevant lexical items as well. We expect these features to be crucial to the identification of Affirmative Answers and Rejection on the one hand, and Propositional Modifiers, Factual Modifiers, Bare Modifier Phrases and Conjunction + fragment NSUs on the other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "Note that the feature lex could be split into four binary features, one for each of its non-empty values. We have experimented with this option as well, and the results obtained are virtually the same. We therefore opt for a more compact set of features. This also applies to the feature aff neg.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4.1"
},
{
"text": "We use the features ant mood,wh ant, and finished to encode properties of the antecedent utterance. The presence of a wh-phrase in the antecedent seems to be the best cue for classifying Short Answers. We expect the feature finished to help the learners identify Fillers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Antecedent features",
"sec_num": null
},
{
"text": "The last two features, repeat and parallel, encode similarity relations between the NSU and its antecedent utterance. They are the only numerical features in our feature set. The feature repeat is introduced as a clue to identify Repeated Affirmative Answers and Repeated Acknowledgements. The feature parallel is intended to capture the particular parallelism exhibited by Helpful Rejections. It signals the presence of sequences of PoS tags common to the NSU and its antecedent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity features",
"sec_num": null
},
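A minimal sketch of how the two similarity features might be computed follows; the 0-3 cap and the use of tag bigrams are assumptions, since only an informal description is given above, and the example words and tags are illustrative.
```python
# Sketch: possible implementations of the repeat and parallel features.
# The exact definitions (0-3 cap, minimum sequence length) are assumptions.
def repeat_feature(nsu_words, ant_words, cap=3):
    """Number of NSU words also present in the antecedent, capped."""
    ant = {w.lower() for w in ant_words}
    return min(sum(1 for w in nsu_words if w.lower() in ant), cap)

def parallel_feature(nsu_tags, ant_tags, n=2, cap=3):
    """Number of PoS-tag n-grams shared by NSU and antecedent, capped."""
    ngrams = lambda tags: {tuple(tags[i:i + n]) for i in range(len(tags) - n + 1)}
    return min(len(ngrams(nsu_tags) & ngrams(ant_tags)), cap)

# A: "Did you shout very loud?"  B: "Very loud, yes."  (tags are illustrative)
print(repeat_feature(["Very", "loud", "yes"], ["Did", "you", "shout", "very", "loud"]))
print(parallel_feature(["AV0", "AJ0"], ["VDD", "PNP", "VVI", "AV0", "AJ0"]))
```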
{
"text": "Our feature annotation procedure is similar to the one used in (Fern\u00e1ndez et al., 2004) , which exploits the SGML markup of the BNC. All feature values are extracted automatically using the PoS information encoded in the BNC markup. The BNC was automatically annotated with a set of 57 PoS codes (known as the C5 tagset), plus 4 codes for punctuation tags, using the CLAWS system (Garside, 1987 ).",
"cite_spans": [
{
"start": 63,
"end": 87,
"text": "(Fern\u00e1ndez et al., 2004)",
"ref_id": "BIBREF6"
},
{
"start": 380,
"end": 394,
"text": "(Garside, 1987",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data generation",
"sec_num": "4.2"
},
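A small sketch of how feature values might be read off C5 PoS tags and word forms follows; the tag codes and word lists are assumptions for illustration rather than the exact extraction rules used.
```python
# Sketch: reading feature values off PoS-tagged BNC tokens.
# The tag codes and word lists are assumptions; the exact rules are not given.
WH_TAGS = {"DTQ", "PNQ", "AVQ"}           # e.g. wh-determiner, wh-pronoun, wh-adverb
YES_WORDS = {"yes", "yeah", "aye", "yep"}
NO_WORDS = {"no", "nope"}

def wh_feature(tagged_tokens):
    """'yes' if any token carries a wh tag, else 'no'."""
    return "yes" if any(tag in WH_TAGS for _, tag in tagged_tokens) else "no"

def aff_neg_feature(tagged_tokens):
    """'yes', 'no', or 'e' depending on the presence of a yes/no word."""
    words = {w.lower() for w, _ in tagged_tokens}
    if words & YES_WORDS:
        return "yes"
    if words & NO_WORDS:
        return "no"
    return "e"

nsu = [("Very", "AV0"), ("loud", "AJ0"), (",", "PUN"), ("yes", "ITJ"), (".", "PUN")]
print(wh_feature(nsu), aff_neg_feature(nsu))   # -> no yes
```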
{
"text": "Some of our features, like nsu cont and ant mood, for instance, are high level features that do not have straightforward correlates in PoS tags. Punctuation tags (that would correspond to intonation patterns in a spoken dialogue system) help to extract the values of these features, but the correspondence is still not unique. For this reason we evaluate our automatic feature annotation procedure against a small sample of manually annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data generation",
"sec_num": "4.2"
},
{
"text": "We randomly selected 10% of our dataset (52 instances) and extracted the feature values manually. In comparison with this gold standard, our automatic feature annotation procedure achieves 89% accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data generation",
"sec_num": "4.2"
},
{
"text": "We use only automatically annotated data for the learning experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data generation",
"sec_num": "4.2"
},
{
"text": "The simplest baseline we can consider is to always predict the majority class in the data, in our case Short Answer. This yields a 6.7% weighted f-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5"
},
{
"text": "A slightly more interesting baseline can be obtained by using a one-rule classifier. It chooses the feature which produces the minimum error. This creates a single rule which generates a decision tree where the root is the chosen feature and the branches correspond to its different values. The leaves are then associated with the class that occurs most often in the data, for which that value holds. We use the implementation of a one-rule classifier provided in the Weka toolkit (Witten and Frank, 2000) .",
"cite_spans": [
{
"start": 481,
"end": 505,
"text": "(Witten and Frank, 2000)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5"
},
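The one-rule idea just described (map each value of each feature to its majority class and keep the feature with the fewest training errors) can be sketched as follows; this is a generic OneR outline, not the Weka classifier actually used, and the toy data are invented.
```python
# Sketch: a generic OneR learner in the spirit of the one-rule baseline
# described above (not the Weka implementation used in the paper).
from collections import Counter, defaultdict

def one_rule(instances, labels, features):
    """Return (best_feature, value->class rule) minimising training errors."""
    best = None
    for f in features:
        by_value = defaultdict(Counter)
        for inst, y in zip(instances, labels):
            by_value[inst[f]][y] += 1
        rule = {v: counts.most_common(1)[0][0] for v, counts in by_value.items()}
        errors = sum(sum(counts.values()) - counts[rule[v]]
                     for v, counts in by_value.items())
        if best is None or errors < best[2]:
            best = (f, rule, errors)
    return best[0], best[1]

# Toy data with the aff_neg feature only (values and labels are illustrative).
data = [{"aff_neg": "yes"}, {"aff_neg": "no"}, {"aff_neg": "e"}, {"aff_neg": "e"}]
gold = ["AffAns", "Reject", "ShortAns", "Sluice"]
print(one_rule(data, gold, ["aff_neg"]))
# -> ('aff_neg', {'yes': 'AffAns', 'no': 'Reject', 'e': 'ShortAns'})
```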
{
"text": "In our case, the feature with the minimum error is aff neg. It produces the following one-rule decision tree, which yields a 32.5% weighted f-score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5"
},
{
"text": "aff neg:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5"
},
{
"text": "yes -> AffAns no -> Reject e -> ShortAns Figure 1 : One-rule tree",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 49,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5"
},
{
"text": "Finally, we consider a more substantial baseline that uses the combination of four features that produces the best results. The four-rule tree is constructed by running the J4.8 classifier (Weka's implementation of the C4.5 system (Quinlan, 1993) ) with all features and extracting only the four first features from the root of the tree, which interestingly are all NSU features. This creates a decision tree with four rules, one for each feature used, and nine leaves corresponding to nine different NSU classes. Tables 3, 4 and 5, respectively. All results reported were obtained by performing 10fold cross-validation. The results (here and in the sequel) are presented as follows: The tables show the recall, precision and f-measure for each class. To calculate the overall performance of the algorithm, we normalise those scores according to the relative frequency of each class. This is done by multiplying each score by the total of instances of the corresponding class and then dividing by the total number of datapoints in the data set. The weighted overall recall, precision and f-measure, shown in the bottom row of the tables, is then the sum of the corresponding weighted scores.",
"cite_spans": [
{
"start": 231,
"end": 246,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 514,
"end": 525,
"text": "Tables 3, 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Baseline",
"sec_num": "5"
},
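The normalisation described above amounts to a class-frequency-weighted average of the per-class scores; a minimal sketch with invented per-class figures follows.
```python
# Sketch: class-frequency-weighted recall/precision/f1, as described above.
# The per-class scores and instance counts below are illustrative only.
def weighted_scores(per_class):
    """per_class maps class name -> (n_instances, recall, precision, f1)."""
    total = sum(n for n, *_ in per_class.values())
    w_recall = sum(n * r for n, r, _, _ in per_class.values()) / total
    w_prec = sum(n * p for n, _, p, _ in per_class.values()) / total
    w_f1 = sum(n * f for n, _, _, f in per_class.values()) / total
    return round(w_recall, 2), round(w_prec, 2), round(w_f1, 2)

example = {
    "ShortAns": (100, 94.0, 47.0, 63.0),
    "AffAns":   (80, 93.0, 82.0, 87.0),
    "Reject":   (40, 100.0, 75.0, 86.0),
}
print(weighted_scores(example))  # weighted (recall, precision, f1)
```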
{
"text": "ShortAns 100.00 20.10 33.50",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NSU class recall prec f1",
"sec_num": null
},
{
"text": "Weighted Score 19.92 4.00 6.67 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NSU class recall prec f1",
"sec_num": null
},
{
"text": "We use three different machine learners, which implement three different learning strategies: SLIP-PER, a rule induction system presented in (Cohen and Singer, 1999) ; TiMBL, a memory-based algorithm created by (Daelemans et al., 2003) ; and Max-Ent, a maximum entropy algorithm developed by Zhang Le (Le, 2003) . They are all well established, freely available systems.",
"cite_spans": [
{
"start": 141,
"end": 165,
"text": "(Cohen and Singer, 1999)",
"ref_id": "BIBREF2"
},
{
"start": 211,
"end": 235,
"text": "(Daelemans et al., 2003)",
"ref_id": "BIBREF4"
},
{
"start": 301,
"end": 311,
"text": "(Le, 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learners",
"sec_num": "6"
},
{
"text": "SLIPPER As in the case of Weka's J4.8, SLIP-PER is based on the popular C4.5 decision tree algorithm. SLIPPER improves this algorithm by using iterative pruning and confidence-rated boosting to create a weighted rule set. We use SLIPPER's option unordered, which finds a rule set that separates each class from the remaining classes, giving rules for each class. This yields slightly better results than the default setting. Unfortunately, it is not possible to access the output rule set when crossvalidation is performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learners",
"sec_num": "6"
},
{
"text": "TiMBL As with all memory-based learning algorithms, TiMBL computes the similarity between a new test instance and the training instances stored in memory using a distance metric. As a distance metric we use the modified value difference metric, which performs better than the default setting (overlap metric). In light of recent studies (Daelemans and Hoste, 2002) , it is likely that the performance of TiMBL could be improved by a more systematic optimisation of its parameters, as e.g. in the experiments presented in (Gabsil and Lemon, 2004) . Here we only optimise the distance metric parameter and keep the default settings for the number of nearest neighbours and feature weighting method.",
"cite_spans": [
{
"start": 337,
"end": 364,
"text": "(Daelemans and Hoste, 2002)",
"ref_id": "BIBREF3"
},
{
"start": 521,
"end": 545,
"text": "(Gabsil and Lemon, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learners",
"sec_num": "6"
},
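A simplified sketch of the value-difference idea behind the modified value difference metric follows: the distance between two values of a feature is estimated from how differently they distribute over the NSU classes. This illustrates the intuition only and is not TiMBL's implementation; the toy data are invented.
```python
# Sketch: a simplified value-difference distance between two values of one
# feature, estimated from class-conditional counts in training data.
from collections import Counter

def value_difference(feature_values, labels, v1, v2):
    classes = set(labels)
    count = {v: Counter() for v in (v1, v2)}
    for fv, y in zip(feature_values, labels):
        if fv in count:
            count[fv][y] += 1
    n1, n2 = sum(count[v1].values()), sum(count[v2].values())
    return sum(abs(count[v1][c] / n1 - count[v2][c] / n2) for c in classes)

aff_neg = ["yes", "yes", "no", "e", "e", "e"]
gold    = ["AffAns", "AffAns", "Reject", "ShortAns", "ShortAns", "Sluice"]
print(value_difference(aff_neg, gold, "yes", "no"))  # large: values predict different classes
print(value_difference(aff_neg, gold, "e", "e"))     # 0.0: identical distributions
```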
{
"text": "MaxEnt Finally, we experiment with a maximum entropy algorithm, which computes the model with the highest entropy of all models that satisfy the constraints provided by the features. The maximum entropy toolkit we use allows for several options. In our experiments we use 40 iterations of the default L-BFGS parameter estimation (Malouf, 2002) .",
"cite_spans": [
{
"start": 329,
"end": 343,
"text": "(Malouf, 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Learners",
"sec_num": "6"
},
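For readers without the original toolkit, an analogous maximum entropy setup can be sketched as a multinomial logistic regression over one-hot encoded categorical features fitted with an L-BFGS solver; the scikit-learn pipeline and toy data below are assumptions, not the toolkit or configuration actually used.
```python
# Sketch: an analogous maximum-entropy setup with scikit-learn
# (logistic regression over one-hot encoded features, L-BFGS solver).
# Not the toolkit or exact configuration used in the paper; toy data invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_X = [
    {"nsu_cont": "p", "wh_nsu": "no",  "aff_neg": "yes", "ant_mood": "n_decl"},
    {"nsu_cont": "p", "wh_nsu": "no",  "aff_neg": "no",  "ant_mood": "n_decl"},
    {"nsu_cont": "q", "wh_nsu": "yes", "aff_neg": "e",   "ant_mood": "decl"},
    {"nsu_cont": "p", "wh_nsu": "no",  "aff_neg": "e",   "ant_mood": "n_decl"},
]
train_y = ["AffAns", "Reject", "Sluice", "ShortAns"]

model = make_pipeline(DictVectorizer(sparse=False),
                      LogisticRegression(solver="lbfgs", max_iter=200))
model.fit(train_X, train_y)
print(model.predict([{"nsu_cont": "p", "wh_nsu": "no",
                      "aff_neg": "yes", "ant_mood": "n_decl"}]))
```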
{
"text": "Although the classification algorithms implement different machine learning techniques, they all yield very similar results: around an 87% weighted fscore. The maximum entropy model performs best, although the difference between its results and those of the other algorithms is not statistically significant. Detailed recall, precision and f-measure scores are shown in Appendix I (Tables 8, 9 and 10).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results: Evaluation and Discussion",
"sec_num": "7"
},
{
"text": "The four-rule baseline algorithm discussed in section 5 yields a 62.33% weighted f-score. Our best result, the 87.75% weighted f-score obtained with the maximal entropy model, shows a 25.42% improvement over the baseline system. A comparison of the scores obtained with the different baselines considered and all learners used is given in It is interesting to note that the four-rule baseline achieves very high f-scores with Sluices and CEaround 97% (see Table 5 ). Such results are not improved upon by the more sophisticated learners. This indicates that the features nsu cont and wh nsu used in the four-rule tree (figure 2) are sufficient indicators to classify question-denoting NSUs. The same applies to the classes Propositional Modifier and Factual Modifier. The baseline already gives f-scores of 100%. This is in fact not surprising, given that the disambiguation of these categories is tied to the presence of particular lexical items that are relatively easy to identify. Affirmative Answers and Short Answers achieve high recall scores with the baseline systems (more than 90%). In the three baselines considered, Short Answer acts as the default category. Therefore, even though the recall is high (given that Short Answer is the class with the highest probability), precision tends to be quite low. By using features that help to identify other categories with the machine learners we are able to improve the precision for Short Answers by around 36%, and the precision of the overall classification system by almost 33%-from 55.90% weighted precision obtained with the fourrule baseline, to the 88.41% achieved with the maximum entropy model.",
"cite_spans": [],
"ref_spans": [
{
"start": 456,
"end": 463,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Comparison with the baseline",
"sec_num": "7.1"
},
{
"text": "The class with the lowest scores is clearly Helpful Rejection. TiMBL achieves a 39.92% f-measure for this class. The maximal entropy model, however, yields only a 10.37% f-measure. Examination of the confusion matrices shows that \u223c27% of Help Rejections were classified as Rejections, \u223c15% as Repeated Acknowledgements, and \u223c26% as Short Answers. This indicates that the feature parallel, introduced to identify this type of NSUs, is not a good enough cue. Whether similar techniques to the ones used e.g. in (Poesio et al., 2004; Schlangen, 2005) to compute semantic similarity could be used here to derive a notion of semantic contrast that would complement this structural feature is an issue that requires further investigation.",
"cite_spans": [
{
"start": 509,
"end": 530,
"text": "(Poesio et al., 2004;",
"ref_id": "BIBREF12"
},
{
"start": 531,
"end": 547,
"text": "Schlangen, 2005)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis: some comments",
"sec_num": "7.2"
},
{
"text": "As explained in section 3, the data set used in the experiments reported in the previous sections excluded plain acknowledgements. The fact that plain acknowledgements are the category with the highest probability in the sub-corpus (making up more than 50% of our total data), and that they do not seem particularly difficult to identify could affect the performance of the learners by inflating the results. Therefore we left them out in order to work with a more balanced data set and to minimise the potential for misleading results. In a second experiment we incorporated plain acknowledgements to measure their effect on the results. In this section we discuss the results obtained and compare them with the ones achieved in the initial experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating plain acknowledgements",
"sec_num": "7.3"
},
{
"text": "To generate the annotated data set an additional value ack was added to the feature aff neg. This value is invoked to encode the presence of expressions typically used in plain acknowledgements (\"mhm\", \"aha\", \"right\", etc.). The total data set (1109 data points) was automatically annotated with the features modified in this way by means of the procedure described in section 4.2. The three machine learners were then run on the annotated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating plain acknowledgements",
"sec_num": "7.3"
},
{
"text": "As in our first experiment the results obtained are very similar across learners. All systems yield around an 89% weighted f-score, a slightly higher result than the one obtained in the previous experiment. Detailed scores for each class are shown in Appendix II (Tables 11, 12 and 13) . As expected, the class Plain Acknowledgement obtains a high fscore (95%). This, combined with its high probability, raises the overall performance a couple of points (from \u223c87% to \u223c89% weighted f-score). The improvement with respect to the baselines, however, is not as large: a simple majority class baseline already yields 36.28% weighted f-score. Table 7 : Comparison of w. f-scores -with ack.",
"cite_spans": [
{
"start": 263,
"end": 274,
"text": "(Tables 11,",
"ref_id": null
},
{
"start": 275,
"end": 285,
"text": "12 and 13)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 638,
"end": 645,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Incorporating plain acknowledgements",
"sec_num": "7.3"
},
{
"text": "The feature with the minimum error used to derived the one-rule baseline is again aff neg, this time with the new value ack as part of its possible values (see figure 3 below). The one-rule baseline yields a weighted f-score of 54.26%, while the four-rule baseline goes up to a weighted f-score of 68.38%. 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating plain acknowledgements",
"sec_num": "7.3"
},
{
"text": "aff neg: yes -> Ack ack -> Ack no -> Reject e -> ShortAns Figure 3 : One-rule tree -with ack.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 66,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Incorporating plain acknowledgements",
"sec_num": "7.3"
},
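Since Figure 3 gives the tree explicitly, the second-experiment one-rule baseline can be written as a direct lookup on the aff neg value; a trivial sketch follows.
```python
# Sketch: Figure 3's one-rule baseline for the data set including plain
# acknowledgements, written as a direct lookup on the aff_neg feature.
ONE_RULE_WITH_ACK = {"yes": "Ack", "ack": "Ack", "no": "Reject", "e": "ShortAns"}

def classify(aff_neg_value):
    return ONE_RULE_WITH_ACK[aff_neg_value]

print(classify("ack"))  # -> Ack
```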
{
"text": "In general the results obtained when plain acknowledgements are added to the data set are very similar to the ones achieved before. Note however that even though the overall performance of the algorithms is slightly higher than before (due to the reasons mentioned above), the scores for some NSU classes are actually lower. The most striking case is the class Affirmative Answer, which in TiMBL goes down more than 10 points (from 93.61% to 82.42% fscore-see Tables 9 and 12 in the appendices). The tree in figure 3 provides a clue to the reason for this. When the NSU contains a \"yes\" word (first branch of the tree) the class with the highest probability is now Plain Acknowledgement, instead of Affirmative Answer as before. This is due to the fact that, at least in English, expressions like e.g. \"yeah\" (considered here as \"yes\" words) are potentially ambiguous between acknowledgements and affirmative answers. 5 This ambiguity and the problems it entails are also noted by (Schlangen, 2005) , who addresses the problem of identifying NSUs automatically. As he points out, the ambiguity of \"yes\" words is one of the difficulties encountered when trying to distinguish between backchannels (plain acknowledgements in our taxonomy) and non-backchannel fragments. This is a tricky problem for Schlangen as his fragment identification procedure does not have access to the context. Although we do use features that capture contextual information, determining whether the antecedent utterance is declarative or interrogative (which one would expect to be the best clue to disambiguate between Plain Acknowledgement and Affirmative Answer) is not always trivial.",
"cite_spans": [
{
"start": 918,
"end": 919,
"text": "5",
"ref_id": null
},
{
"start": 981,
"end": 998,
"text": "(Schlangen, 2005)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 460,
"end": 475,
"text": "Tables 9 and 12",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Incorporating plain acknowledgements",
"sec_num": "7.3"
},
{
"text": "We have presented a machine learning approach to the problem of Non-Sentential Utterance (NSU) classification in dialogue. We have described a procedure for predicting the appropriate NSU class from a fine-grained typology of NSUs derived from a corpus study performed on the BNC, using a set of automatically annotated features. We have employed a series of simple baseline methods for classifying NSUs. The most successful of these methods uses a decision tree with four rules and gives a weighted f-score of 62.33%. We then applied three machine learning algorithms to a data set which includes all NSU classes except Plain Acknowledgement and obtained a weighted f-score of approx-imated 87% for all of them. This improvement, taken together with the fact that the three algorithms achieve very similar results suggests that our features offer a reasonable basis for machine learning acquisition of the typology adopted. However, some features like parallel, introduced to account for Help Rejections, are in need of considerable improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "In a second experiment we incorporated plain acknowledgements in the data set and ran the machine learners on it. The results are very similar to the ones achieved in the previous experiment, if slightly higher due to the high probability of this class. The experiment does show though a potential confusion between plain acknowledgements and affirmative answers that did not show up in the previous experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "In future work we will integrate our NSU classification techniques into an Information State-based dialogue system (based on SHARDS (Fern\u00e1ndez et al., to appear) and CLARIE (Purver, 2004) ), that assigns a full sentential reading to fragment phrases in dialogue. This will require a refinement of our feature extraction procedure, which will not be restricted solely to PoS input, but will also benefit from other information generated by the system, such as dialogue history and intonation.",
"cite_spans": [
{
"start": 132,
"end": 161,
"text": "(Fern\u00e1ndez et al., to appear)",
"ref_id": null
},
{
"start": 173,
"end": 187,
"text": "(Purver, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "This notation indicates the British National Corpus file, KSN, and the sentence numbers, 1776-1777.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A related task, namely that of automatically identifying NSUs and their antecedents, is investigated bySchlangen (2005).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The four-rule tree can be obtained by substituting the last node in the tree in figure 2 (section 5) for the one-rule tree in figure 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Arguably this ambiguity would not arise in French given that, according to(Beyssade and Marandin, 2005), in French the expressions used to acknowledge an assertion are different from those used in affirmative answers to polar questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank two anonymous SIGdial reviewers for their comments and suggestions. We would also like to thank Lief A. Nielsen and Matt Purver for discussion, and Zoran Macura and Yo Sato for help in assessing the NSU taxonomy. The research described here is funded by grant number RES-000-23-0065 from the Economic and Social Research Council of the United Kingdom.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Contour Meaning and Dialogue Structure. Ms presented at the workshop Dialogue Modelling and Grammar",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Beyssade",
"suffix": ""
},
{
"first": "Jean-Marie",
"middle": [],
"last": "Marandin",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claire Beyssade and Jean-Marie Marandin. 2005. Contour Meaning and Dialogue Structure. Ms presented at the work- shop Dialogue Modelling and Grammar, Paris, France.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Reference Guide for the British National Corpus (World Edition)",
"authors": [
{
"first": "L",
"middle": [],
"last": "Burnard",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Burnard. 2000. Reference Guide for the British National Corpus (World Edition). Oxford Universtity Computing Ser- vices. Accessible from: ftp://sable.ox.ac.uk/pub/ota/BNC/.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A simple, fast, and effective rule learner",
"authors": [
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of the 16th National Conference on AI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Cohen and Y. Singer. 1999. A simple, fast, and effective rule learner. In Proc. of the 16th National Conference on AI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluation of machine learning methods for natural language processing tasks",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the third International Conference on Language Resources and Evaluation (LREC-02)",
"volume": "",
"issue": "",
"pages": "755--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Daelemans and V. Hoste. 2002. Evaluation of machine learning methods for natural language processing tasks. In In Proceedings of the third International Conference on Lan- guage Resources and Evaluation (LREC-02), pages 755- 760.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "TiMBL: Tilburg Memory Based Learner, Reference Guide",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Van Der Sloot",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Van Den",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bosch",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Daelemans, J. Zavrel, K. van der Sloot, and A. van den Bosch. 2003. TiMBL: Tilburg Memory Based Learner, Ref- erence Guide. Technical Report ILK-0310, U. of Tilburg.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Nonsentential utterances: A corpus study. Traitement automatique des languages",
"authors": [
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Ginzburg",
"suffix": ""
}
],
"year": 2002,
"venue": "Dialogue",
"volume": "43",
"issue": "2",
"pages": "13--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raquel Fern\u00e1ndez and Jonathan Ginzburg. 2002. Non- sentential utterances: A corpus study. Traitement automa- tique des languages. Dialogue, 43(2):13-42.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Classifying Ellipsis in Dialogue: A Machine Learning Approach",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ginzburg",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "240--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Fern\u00e1ndez, J. Ginzburg, and S. Lappin. 2004. Classify- ing Ellipsis in Dialogue: A Machine Learning Approach. In Proceedings of the 20th International Conference on Computational Linguistics, COLING 2004, pages 240-246, Geneva, Switzerland.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SHARDS: Fragment resolution in dialogue",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ginzburg",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Gregory",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lappin",
"suffix": ""
}
],
"year": null,
"venue": "Computing Meaning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Fern\u00e1ndez, J. Ginzburg, H. Gregory, and S. Lappin. (to appear). SHARDS: Fragment resolution in dialogue. In H. Bunt and R. Muskens, editors, Computing Meaning, vol- ume 3. Kluwer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Combining acoustic and pragmatic features to predict recognition performance in spoken dialogue systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gabsil",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Lemon",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Gabsil and O. Lemon. 2004. Combining acoustic and prag- matic features to predict recognition performance in spoken dialogue systems. In In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), Barcelona, Spain.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The claws word-tagging system",
"authors": [
{
"first": "R",
"middle": [],
"last": "Garside",
"suffix": ""
}
],
"year": 1987,
"venue": "The computational analysis of English: a corpus-based approach",
"volume": "",
"issue": "",
"pages": "30--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Garside. 1987. The claws word-tagging system. In Roger Garside, Geoffrey Leech, and Geoffrey Sampson, editors, The computational analysis of English: a corpus-based ap- proach, pages 30-41. Longman, Harlow.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Maximum Entropy Modeling Toolkit for Python and C++",
"authors": [
{
"first": "Zhang",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang Le. 2003. Maximum Entropy Modeling Toolkit for Python and C++. http://homepages.inf.ed.ac.uk/ s0450736/maxent toolkit.php.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A comparision of algorithm for maximum entropy parameter estimation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Sixth Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Malouf. 2002. A comparision of algorithm for maximum entropy parameter estimation. In Proceedings of the Sixth Conference on Natural Language Learning, pages 49-55.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning to resolve bridging references",
"authors": [
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Maroudas",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hitzeman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the ACL (ACL 2004)",
"volume": "",
"issue": "",
"pages": "144--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Poesio, R. Mehta, A. Maroudas, and J. Hitzeman. 2004. Learning to resolve bridging references. In Proceedings of the 42nd Annual Meeting of the ACL (ACL 2004), pages 144-151, Barcelona, Spain.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Theory and Use of Clarification in Dialogue",
"authors": [
{
"first": "M",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Purver. 2004. The Theory and Use of Clarification in Dia- logue. Ph.D. thesis, King's College, London, forthcoming.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "C4.5: Programs for Machine Learning",
"authors": [
{
"first": "R",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Coherence-Based Approach to the Interpretation of Non-Sentential Utterances in Dialogue",
"authors": [
{
"first": "D",
"middle": [],
"last": "Schlangen",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Schlangen. 2003. A Coherence-Based Approach to the Interpretation of Non-Sentential Utterances in Dialogue. Ph.D. thesis, University of Edinburgh, Scotland.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Towards finding and fixing fragments: Using ML to identify non-sentential utterances and their antecedents in multi-party dialogue",
"authors": [
{
"first": "D",
"middle": [],
"last": "Schlangen",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the ACL (ACL 2005",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Schlangen. 2005. Towards finding and fixing fragments: Using ML to identify non-sentential utterances and their an- tecedents in multi-party dialogue. In Proceedings of the 43rd Annual Meeting of the ACL (ACL 2005), USA. Ann Arbor.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Data Mining: Practical machine learning tools with Java implementations",
"authors": [
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. H. Witten and E. Frank. 2000. Data Min- ing: Practical machine learning tools with Java im- plementations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Four-rule treeThis four-rule baseline yields a 62.33% weighted f-score. Detailed results for the three baselines considered are shown in",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>: NSU sub-corpus</td></tr><tr><td>was created by manual examination of a randomly</td></tr><tr><td>selected section of 200-speaker-turns from 54 BNC</td></tr><tr><td>files. The examined sub-corpus contains 14,315 sen-</td></tr><tr><td>tences.</td></tr></table>",
"text": "",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>NSU class</td><td>recall prec</td><td>f1</td></tr><tr><td>ShortAns</td><td colspan=\"2\">95.30 30.10 45.80</td></tr><tr><td>AffAns</td><td colspan=\"2\">93.00 75.60 83.40</td></tr><tr><td>Reject</td><td colspan=\"2\">100.00 69.60 82.10</td></tr><tr><td>Weighted Score</td><td colspan=\"2\">45.93 26.73 32.50</td></tr></table>",
"text": "Majority class baseline",
"type_str": "table",
"html": null
},
"TABREF5": {
"num": null,
"content": "<table><tr><td>NSU class</td><td>recall</td><td>prec</td><td>f1</td></tr><tr><td>CE</td><td>96.97</td><td>96.97</td><td>96.97</td></tr><tr><td>Sluice</td><td>100.00</td><td>95.24</td><td>97.56</td></tr><tr><td>ShortAns</td><td>94.34</td><td>47.39</td><td>63.09</td></tr><tr><td>AffAns</td><td>93.00</td><td>81.58</td><td>86.92</td></tr><tr><td>Reject</td><td>100.00</td><td>75.00</td><td>85.71</td></tr><tr><td>PropMod</td><td colspan=\"3\">100.00 100.00 100.00</td></tr><tr><td>FactMod</td><td colspan=\"3\">100.00 100.00 100.00</td></tr><tr><td>BareModPh</td><td>80.00</td><td>72.73</td><td>76.19</td></tr><tr><td>ConjFrag</td><td>100.00</td><td>71.43</td><td>83.33</td></tr><tr><td>Weighted Score</td><td>70.40</td><td>55.92</td><td>62.33</td></tr></table>",
"text": "One-rule baseline",
"type_str": "table",
"html": null
},
"TABREF6": {
"num": null,
"content": "<table/>",
"text": "Four-rule baseline",
"type_str": "table",
"html": null
},
"TABREF7": {
"num": null,
"content": "<table><tr><td>System</td><td>w. f-score</td></tr><tr><td>Majority class baseline</td><td>6.67</td></tr><tr><td>One rule baseline</td><td>32.50</td></tr><tr><td>Four rule baseline</td><td>62.33</td></tr><tr><td>SLIPPER</td><td>86.35</td></tr><tr><td>TiMBL</td><td>86.66</td></tr><tr><td>MaxEnt</td><td>87.75</td></tr></table>",
"text": "",
"type_str": "table",
"html": null
},
"TABREF8": {
"num": null,
"content": "<table/>",
"text": "",
"type_str": "table",
"html": null
},
"TABREF9": {
"num": null,
"content": "<table><tr><td/><td>shows a</td></tr><tr><td colspan=\"2\">comparison of all weighted f-scores obtained in this</td></tr><tr><td>second experiment.</td><td/></tr><tr><td>System</td><td>w. f-score</td></tr><tr><td>Majority class baseline</td><td>36.28</td></tr><tr><td>One rule baseline</td><td>54.26</td></tr><tr><td>Four rule baseline</td><td>68.38</td></tr><tr><td>SLIPPER</td><td>89.51</td></tr><tr><td>TiMBL</td><td>89.65</td></tr><tr><td>MaxEnt</td><td>89.88</td></tr></table>",
"text": "",
"type_str": "table",
"html": null
}
}
}
}