|
{ |
|
"paper_id": "2007", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:49:16.401005Z" |
|
}, |
|
"title": "A Multidimensional Approach to Utterance Segmentation and Dialogue Act Classif cation", |
|
"authors": [ |
|
{ |
|
"first": "Jeroen", |
|
"middle": [], |
|
"last": "Geertzen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tilburg University", |
|
"location": { |
|
"addrLine": "The Netherlands" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Volha", |
|
"middle": [], |
|
"last": "Petukhova", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tilburg University", |
|
"location": { |
|
"addrLine": "The Netherlands" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Tilburg University", |
|
"location": { |
|
"addrLine": "The Netherlands" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we present a multidimensional approach to utterance segmentation and automatic dialogue act classif cation. We show that the use of multiple dimensions in distinguishing and annotating units not only supports a more accurate analysis of human communication, but can also help to solve some notorious problems concerning the segmentation of dialogue into functional units. We introduce the use of per-dimension segmentation for dialogue act taxonomies that feature multi-functionality and show that better classif cation results are obtained when using a separate segmentation for each dimension than when using one segmentation that f ts all dimensions. Three machine learning techniques are applied and compared on the task of automatic classifcation of multiple communicative functions of utterances. The results are encouraging and indicate that communicative functions in important dimensions are easy machinelearnable.", |
|
"pdf_parse": { |
|
"paper_id": "2007", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we present a multidimensional approach to utterance segmentation and automatic dialogue act classif cation. We show that the use of multiple dimensions in distinguishing and annotating units not only supports a more accurate analysis of human communication, but can also help to solve some notorious problems concerning the segmentation of dialogue into functional units. We introduce the use of per-dimension segmentation for dialogue act taxonomies that feature multi-functionality and show that better classif cation results are obtained when using a separate segmentation for each dimension than when using one segmentation that f ts all dimensions. Three machine learning techniques are applied and compared on the task of automatic classifcation of multiple communicative functions of utterances. The results are encouraging and indicate that communicative functions in important dimensions are easy machinelearnable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Computer-based interpretation and generation of human dialogue is of growing relevance for today's information society. As natural language based dialogue is increasingly becoming an attractive and technically feasible human-machine interface, so the analysis of human-human interaction (for example in interviews or meetings) is becoming important for archival and retrieval purposes, as well as for knowledge management purposes and for the study of social interaction dynamics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Since people involved in communication constantly perceive, understand, evaluate, and react to each other's intentions as encoded in statements, questions, requests, offers, and so on, a natural approach to the analysis of human dialogue behaviour is to assign meaning to dialogue units in terms of dialogue acts. The identif cation and automatic recognition of the dialogue acts or communicative functions 1 of utterances is therefore an important task for dialogue analysis and the design of applications such as computer dialogue systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The assignment of appropriate meanings to 'dialogue units' presupposes a way to segment a dialogue into meaningful units. This turns out to be a complex task in itself. Many previous studies in the area of the automatic dialogue act assignment were typically carried out at the level of 'utterances' or that of 'turns'. A turn can be def ned as a stretch of communicative behaviour produced by one speaker, bounded by periods of inactivity of that speaker or by activity of another speaker (Allwood, 2000) . While turn boundaries can be recognised relatively easily, for some analysis segmentation into turns is often unsatisfactory because a turn may contain several smaller meaningful parts. Utterances, on the other hand, are linguistically def ned stretches of communicative behaviour that have one or multiple communicative functions. Utterances may coincide with turns but are usually smaller.", |
|
"cite_spans": [ |
|
{ |
|
"start": 490, |
|
"end": 505, |
|
"text": "(Allwood, 2000)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The detection of utterance boundaries is a highly nontrivial task. Syntactic features (e.g. part-ofspeech, verb frame boundaries of f nite verbs) and prosodic features (e.g. boundary tones, phrase f nal lengthening, silences, etc.) are often used as indicators of utterance endings (Shriberg et al., 1998; Stolcke et al., 2000; N\u00f6th et al., 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 282, |
|
"end": 305, |
|
"text": "(Shriberg et al., 1998;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 327, |
|
"text": "Stolcke et al., 2000;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 346, |
|
"text": "N\u00f6th et al., 2002)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One of the problems with dialogue segmentation into utterances is that utterances may be discontinuous. Spontaneous speech in dialogue usually includes f lled and unf lled pauses, self-corrections and restarts; for example, the speaker of the utterance in (1) corrects himself two times.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) About half ... about a quar-... th-...third of the way down", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Dialogue utterances may be interrupted by even more substantial segments than repairs and stallings. For example, the speaker of the utterance in (2) interrupts his Inform with a WH-Question:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I have some hills", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) Because twenty f ve Euros for a remote... how much is that locally in pounds? is too much money to buy an extra remote or a replacement remote Examples such as (1) and 2show that the segmentation of dialogue into utterances that have a communicative function requires these units to be potentially discontinuous. In some cases a dialogue act may be performed by an utterance formed by parts of more than one turn. This often happens in polylogues where participants may interrupt each other or talk simultaneously. For example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I have some hills", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) A: Well we can chat away for . .. um... for f ve minutes or so I think at... B: Mm-hmm ... ", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 94, |
|
"text": ".. um... for f ve minutes or so I think at... B: Mm-hmm ...", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I have some hills", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Another case of a dialogue act that is spread over multiple turns occurs when the speaker is providing complex information and divides it up into parts in order not to overload the addressee, as is shown in (4). The f rst part of the discontinuous segment that expresses S's answer also has a feedback function (making clear to U what S understood). The material in the three turns contributed by S together constitute the 'utterance' expressing S's answer to U 's question. Examples such as these show that the units in dialogue that carry communicative functions are often very different from the traditional linguistically def ned notion of an utterance. We therefore prefer to give these units a different name, that of functional segment, and we def ne these units as \"(possibly discontinuous) stretches of communicative behaviour that have one or more communicative functions\" (Bunt and Schiffrin, 2007) . In many cases a functional segment corresponds to an 'utterance' as def ned by certain linguistic properties, but in other cases it does not; and so the question arises how functional segments can be recognised. This is one of the main issues that this paper addresses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 883, |
|
"end": 909, |
|
"text": "(Bunt and Schiffrin, 2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "at most", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When we want to segment a dialogue into functional segments, one complication is that of discontinuous segments, either within a turn or spread over several turns as we have already discussed. An even greater challenge is posed by those cases where different functional segments overlap, as in the example shown in 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "at most", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(5) U : What time is the f rst train to the airport on Sunday?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "at most", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "S: The f rst train to the airport on Sunday is at ...ehm...", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "at most", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The f rst part of S's turn repeats most of the preceding question, displaying what the system has heard, and as such has a feedback function. The turn as a whole minus the part ...ehm... has the communicative function of a WH-Answer, and that part has a stalling function. So the segments corresponding to the WH-Answer and the feedback function share the part \"The f rst train to the airport on Sunday\". This means that in this turn we have two functional segments starting at the same position but ending at different ones; in other words, no single segmentation of this turn exists that gives us all the relevant functional segments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.17.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To resolve this problem adequately, we propose not to maintain a single segmentation, but to use multiple segmentations in order to allow multiple functional segments that are associated to a specif c utterance to be identif ed more accurately. This approach is compatible with dialogue act taxonomies that address several aspects ('dimensions') of the interactive process simultaneously (e.g. DAMSL (Core and Allen, 1997) or DIT (Bunt, 2006) ), such as the task or activity that motivates the dialogue, the management of taking turns, or timing and attention. This multidimensional view of dialogue naturally leads to the suggestion of approaching dialogue segmentation in a similarly multidimensional way, and to allow the segmentation of a dialogue per dimension rather than in one f xed way. In the case of example (5), this means that S's turn is segmented in the three dimensions addressed by the functional segments in this turn:", |
|
"cite_spans": [ |
|
{ |
|
"start": 430, |
|
"end": 442, |
|
"text": "(Bunt, 2006)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.17.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Dimension Task/Activity: segment the turn as consisting of the discontinuous segment \"The f rst train to the airport on Sunday is at / 6.17\", which has a communicative function in this dimension, and the contiguous segment ...ehm..., which does not; \u2022 Dimension Feedback: segment the turn as consisting of the contiguous segment The f rst train to the airport on Sunday, which has a function in this dimension, and the contiguous segment is at ...ehm... 6.17, which does not; \u2022 Dimension Time Management: segment the turn as consisting of the contiguous segment ...ehm..., which has a communicative function in this dimension, and the discontinuous segment: The f rst train to the airport on Sunday is at 6.17, which does not. In recent work the benef ts of multidimensional approaches of dialogue act annotation have been discussed and it has been argued that such approaches allow a more accurate modelling of human dialogue behaviour (Petukhova and . In this paper we report the results of two studies: one on segmentation and one on classif cation of dialogue acts in multiple dimensions using various machine learning techniques. In Section 2 we will outline the two series of experiments describing the data, features, and algorithms that have been used. Section 3 and 4 report on the experimental results on segmentation and classif cation, respectively. Consequently, conclusions are drawn in Section 5.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "6.17.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The f rst study is motivated by the question of whether a different segmentation for each of the DIT dimensions (per-dimension segmentation) rather than a single segmentation for all dimensions will allow more accurate labelling of the communicative functions. In the second study we present the results of a series of experiments carried out in order to assess the automatic recognition and classif cation of communicative functions. For this purpose we apply machine-learning techniques. Such techniques have already successfully been used in the area of automatic dialogue processing 2 . Our approach is to train classif ers to learn communicative functions in multiple dimensions, taking functional segments as units.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Studies outline", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our experiments we used two data sets, namely, human-human dialogues in Dutch (the DIAMOND corpus (Geertzen et al., 2004) ) for both the segmentation study, and the classif cation study and humanhuman multi-party interactions in English (AMImeetings) 3 for the classif cation study.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 124, |
|
"text": "(Geertzen et al., 2004)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The DIAMOND corpus contains human-machine and human-human Dutch dialogues that have an assistance-seeking nature. The dialogues were video-recorded in a setting where the subject could communicate with a help desk employee using an acoustic channel and ask for explanations on how to conf gure and operate a fax machine. The dialogues were orthographically transcribed and 952 utterances representing 1,408 functional segments from the human-human subset of the corpus have been selected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The AMI corpus contains manually produced orthographic transcriptions for each individual speaker, including word-level timings that have been derived using a speech recogniser in forced alignment mode. The meetings are video-recorded and each dialogue is also provided with sound f les (for our analysis we used recordings made with short range microphones to eliminate noise). Three scenario-based 4 meetings were selected to constitute a training set of 3,676 functional segment instances. For the DIAMOND training set, the order for the most frequently addressed dimensions is similar with Task dimension (45.6%), followed by Auto-Feedback (19.2%), and Turn Management (16.8%). For the AMI training set, the majority of the dialogue units address the Task dimension (33%), followed by Auto-Feedback (21.7%), Time Management (20.3%) and Turn Management (12.5%).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus data", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Both data sets were annotated with the DIT ++ tagset 5 . The DIT taxonomy distinguishes 11 dimensions, addressing information about: the domain or task (Task), feedback on communicative behaviour of the speaker (Auto-feedback) or other interlocutors (Allo-feedback), managing diff culties in the speaker's contributions (Own-Communication Management) or those of other interlocutors (Partner Communication Management), the speaker's need for time to continue the dialogue (Time Management), establishing and maintaining contact (Contact Management), about who should have the next turn (Turn Management), the way the speaker is planning to structure the dialogue (Dialogue Structuring), introducing, changing or closing the topic (Topic Management), and the information motivated by social conventions (Social Obligations Management).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagset", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "For each dimension, at most one communicative function can be assigned, which can either occur in this dimension alone (the function is dimension specif c) or occur in all dimensions (the function is general purpose). For example, the utterance in 1 has a dimension-specif c function SELF CORREC-TION assigned to it that can only be assigned in the Own Communication Management dimension. Utterance A in example 3 has the communicative function of INFORM in the Dialogue Structuring dimension. Being a general purpose function, INFORM could possibly also be assigned to any other dimension (such as e.g. Task).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagset", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The tagset used in the studies contains 38 domainspecif c functions and 44 general purpose functions. For both data sets the annotation is f rst carried out on a single segmentation and then additionally on dialogue segmented in each of the dimensions separately.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tagset", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Every communicative function is required to have some ref ection in observable features of communicative behaviour, i.e. for every communicative function there are devices which a speaker can use in order to allow its successful recognition by the addressee such as linguistic cues, intonation properties, dialogue history, etc. State-of-the-art automatic dialogue understanding uses all available sources to interpret a spoken utterance. Features and their selection play a very important role in supporting accurate recognition and classif cation of functional segments and their computational modelling may be expected to contribute to improved automatic dia-logue processing. The features included in the data sets are those relating to dialogue history, prosody, and word occurrence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "For the AMI meetings and the DIAMOND dialogues, history consists of the tags of the 10 and 4 previous turns, respectively 6 . Additionally, the tags of utterances to which the utterance in focus was a direct response to, as well as timing, are included as features. For the data which is segmented per dimension, some segments are located inside other segments. This occurs for instance with backchannels and interruptions that do not cause turn shifting; the occurrence of these events is encoded as a feature.", |
|
"cite_spans": [ |
|
{ |
|
"start": 122, |
|
"end": 123, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Prosodic features that are included are minimum, maximum, mean, and standard deviation of pitch (F0 in Hz), energy (RMS), voicing (fraction of locally unvoiced frames and number of voice breaks), and duration. Word occurrence is represented by a bag-of-words vector 7 indicating the presence or absence of words in the segment. In total, 1,668 features are used for AMI data and 947 for DIAMOND data. For AMI data we additionally indicated the speaker (A, B, C, D) and the addressee (other participants individually or the group as a whole).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "A wide variety of machine-learning techniques has been used for NLP tasks with various instantiations of feature-sets and target class encodings, and for dialogue processing, it is still an open issue which techniques are the most suitable for which task. We used three different types of classif ers to test their performance on our dialogue data: a probabilistic one, a rule inducer and memory-based learner.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classif ers", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "For a probabilistic classif er we used Naive Bayes. This classif er assumes class-conditional independence, which does not always respect the characteristics of the features used. However, Naive Bayes classif ers often work quite well for complex realworld situations and are particularly suitable for situations in which the dimensionality of the input is high. Moreover, this classif er requires relatively lit- 6 We use more preceding tags for the AMI data than for the DIAMOND data since there is often more distance between related utterances in multi-party interaction than in dialogue.", |
|
"cite_spans": [ |
|
{ |
|
"start": 414, |
|
"end": 415, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classif ers", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "7 With a size of 1,640 entries for AMI data and 923 for DIA-MOND data. tle computation and can be eff ciently trained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classif ers", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "For rule induction algorithm, we chose Ripper (Cohen, 1995) . The advantage of such an algorithm is that the regularities discovered in the data are represented as human-readable rules.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 59, |
|
"text": "(Cohen, 1995)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classif ers", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The third classif er is IB1, which is a memorybased learner that is a successor of the k-nearest neighbour (k-NN) classif er. The algorithm f rst stores a representation of all training examples in memory. When classifying new instances, it searches for the k most similar examples (nearest neighbours) in memory according to a similarity metric, and extrapolates the target class from this set to the new instances. The algorithm may yield more precise results given suff cient training data, because it does not abstract away low-frequent phenomena during the learning (Daelemans et al., 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 571, |
|
"end": 595, |
|
"text": "(Daelemans et al., 1999)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classif ers", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The results of all experiments were obtained using 10-fold cross-validation 8 . When setting a baseline it is common practice to predict the majority class tag, but for our data sets such a baseline is not very useful because of the relative low frequencies of the tags in most dimensions. Instead, we use a baseline that is based on a single feature, namely, the tag of the previous dialogue utterance (see (Lendvai et al., 2003) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 408, |
|
"end": 430, |
|
"text": "(Lendvai et al., 2003)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classif ers", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Any segmentation of dialogue (or multi-party interaction) into meaningful units, such as functional segments, is motivated by the meaning that is conveyed. As a result, the segmentation strongly depends on the def nition of the dialogue acts in the taxonomy that is used. The multidimensional tagset used in this paper allows several aspects of communicative behaviour for a single functional segment to be addressed. However, the functions of a segment do not necessarily address the same span in the communicative channels. Hence it could be argued that separate segmentation for each dimension should al-low for a more accurate identif cation of spans associated to specif c communicative functions. When we assume that this is the case, it would follow that classif cation of communicative functions based on per-dimension segments should be more successful than classif cation based on a single segmentation for all dimensions. For testing the above-mentioned hypothesis, Ripper -the classif er that provides the highest accuracy scores in our experiments-was used on the DIAMOND dialogues annotated with the DIT ++ tagset. Two classif cation tasks on exactly the same dialogues with exactly the same kind of features and annotated communicative functions were performed. The only difference being that in one task one segmentation that f ts all dimensions (OSFAD) was used, whereas in the other task per-dimension segmentation (PDS) was used. Because DIT allows the assignment of at most one function in a specif c dimension, a segment in the PDS task has one tag whereas a segment in the OSFAD setting might have a combination of tags 9 . Running Ripper (with default parameters) for both tasks resulted in the scores presented in From the results in Table 2 we can observe that for most important dimensions, PDS results in better classif cation performance: the functions related to the dimensions Task, Auto Feedback, and Time Management show signif cant improvement. 
For some dimensions, classif cation does not take advantage of PDS, mainly because of two reasons: in the dataset some dimensions are rarely addressed (e.g. Partner Communication Management) and some dimensions are addressed without any other dimension being addressed around the same time (e.g. Contact Management). These observations are motivated by the kinds and characteristics of interaction and in some extend by the limited size of the dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1758, |
|
"end": 1765, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multidimensional dialogue act segmentation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Although not all dimensions benef t signif cantly, it is clear that multidimensional segmentation helps to classify communicative functions more accurately. However, it should be noted that the gain of more accurately identif ed functions comes at the cost of a slightly more complex segmentation procedure.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multidimensional dialogue act segmentation", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Since a segment is often multi-functional, it is not only interesting to identify the dimension, the communicative function, and the tag separately, but also to test whether or not and to what extent it is possible to learn the combination of tags (e.g. T imeM, ST ALLIN G , T urnM, KEEP ). We carried out a set of experiments studying the performance of the three classif ers described in Section 2 on the following classif cation tasks:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act Classif cation in Multiple Dimensions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 each addressed dimension separately or multiple addressed dimensions in combination, e.g. a single dimension like Task, Auto-Feedback, Turn Management, or a combination like Turn Management and Time Management;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act Classif cation in Multiple Dimensions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 communicative function per dimension in isolation, e.g. INFORM, CORRECTION, WH-QUESTION, etc. in the Auto-Feedback dimension;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act Classif cation in Multiple Dimensions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 tag or combination of tags, e.g. Table 3 gives an overview of classif cation scores expressed as the percentage of correctly predicted classes in all training experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 42, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dialogue Act Classif cation in Multiple Dimensions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "either D, GP or D, DS , or D, GP , D, DS or D, DS , D, DS .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Act Classif cation in Multiple Dimensions", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For the prediction of a dimension addressed by a functional segment (upper data row in the The scores are the same (e.g. with turn initial functions) or higher then those of the baseline. Some of the dimensions distinguished in DIT are not included in Table 3 since the segments which were tagged as having communicative functions in the dimensions Allo-feedback, Contact management, Topic management, Dialogue Structuring, Partner Communication management, and Social Obligation Management are rare in the AMI training data. The instances from these dimensions were almost perfectly classif ed by all classif ers, reaching an accuracy higher than 99%, but not better than those of the baseline. In Appendix A of this paper we present a selection of the RIPPER induced rules illustrated with examples from the corpus. As was to be expected, for the prediction of the Task dimension, the bagof-words feature representing word occurrence in the segment was important. For example, the presence of 'because' in a segment was a good indicator for identifying INFORM JUSTIFY; the occurrence of 'like', or 'for example', or 'maybe' and 'might' for SUGGESTION. Also the duration of the segment was usually longer than for example segments which addressed the Time or Turn Management dimensions. For the prediction of questions, word occurrence (e.g. occurrence of wh-words in WH-Questions, and 'or' for Alternative Questions) and prosodic features like standard deviation in pitch were essential. For the segments which are identi-f ed as having Information-Providing functions, important features were detected in the dialogue history, e.g. CONFIRM about the task was a response to a previous CHECK question about the task. The segments addressing the Auto-Feedback dimension were classif ed successfully on the basis of their word occurrence and dialogue history. The occurrence of words like 'alright', 'right', 'okay', 'uhhuh' are important clues for their recognition.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 259, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "4.1" |
|
}, |
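The cue-word indicators described above (e.g. 'because' for INFORM JUSTIFY, 'maybe'/'might' for SUGGESTION) can be sketched as simple binary word-occurrence features. This is an illustrative reconstruction, not the authors' actual feature-extraction code; the tag names and cue-word sets are drawn loosely from the examples in this section.

```python
# Sketch of binary word-occurrence features plus cue-word indicators,
# as described for the Task and Auto-Feedback dimensions (illustrative
# only; not the feature extraction actually used in the paper).

CUE_WORDS = {
    "task:inf.just": {"because"},
    "task:suggest": {"like", "maybe", "might"},
    "auto-feedback": {"alright", "right", "okay", "uhhuh"},
}

def bag_of_words(segment, vocabulary):
    """Map a token list to {word: 'p'|'np'} over a fixed vocabulary."""
    tokens = set(segment)
    return {w: ("p" if w in tokens else "np") for w in vocabulary}

def cue_word_candidates(segment):
    """Return the tags whose cue words occur in the segment."""
    tokens = set(segment)
    return [tag for tag, cues in CUE_WORDS.items() if tokens & cues]

segment = "maybe we could use a rubber case because it is cheap".split()
features = bag_of_words(segment, ["because", "maybe", "or"])
print(features)                       # {'because': 'p', 'maybe': 'p', 'or': 'np'}
print(cue_word_candidates(segment))   # ['task:inf.just', 'task:suggest']
```

A real system would compute such features over the full training vocabulary and combine them with the prosodic and dialogue-history features mentioned in the text.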
|
{ |
|
"text": "As for the dimensions Turn and Time Management, the duration of the segment was a key feature, because the duration of these segments tends to be shorter than that of others. Moreover, these utterances were pronounced more softly (e.g. <49dB) and are less voiced (e.g. about 47% of unvoiced frames). They usually occur inside 'larger' segments, mostly in the beginning or in the middle. If they appear in clause-initial position, they usually have turn initial functions (TAKE, ACCEPT, GRAB) and the function STALLING in the Time Management dimension; if they occur in the middle of the 'main' segment they are used to signal that the speaker has some diff culties in completing his/her utterance, needs some time and wants to keep the turn (see examples 3 and 5). Of course, usage of words like 'um', 'well', but also lengthening the words indicates the speaker's hesitation and/or diffculties in utterance completion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Segments having communicative functions in the dimension Dialogue Structuring often have linguistic cues like 'meeting', 'f nish', 'wrap up', etc. Important cues for RETRACTs (in the dimension Own Communication Management) are their relation to what is actually retracted ('reply to' feature), and the energy with which they are spoken (i.e. they are pronounced louder than the retracted 'reparandum', i.e. >55dB).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Looking further at the results we can observe that tag labels were diff cult to classify (see bottom data row of the table). They eventually reach an accuracy of 50.2% (baseline: 25.7%). These scores should be evaluated in the light of the relatively high degree of granularity of these tags (97 unique tags and 132 unique combinations of tags) and relatively lower frequency of each of those in the training sets. We have however reason to expect that by increasing the size of the training set higher accuracy could be reached.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental results", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In this paper a multidimensional approach to utterance segmentation and automatic dialogue act classif cation has been presented in which some problematic issues with the segmentation of dialogue into functional units are addressed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Whereas it is common practice to assign dialogue acts to a single segmentation, we conclude that for dialogue act taxonomies that allow assignment of multiple functions to dialogue units we can describe human communication more accurately by using per-dimension segmentation instead.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We have shown that machine learning techniques can be prof tably used on a complex task such as the automatic recognition of multiple communicative functions of dialogue segments. All three classif ers that have been tested performed well on all classif cation tasks. For the majority of tasks, the scores we obtained are signif cantly higher than those of the baseline. However, the datasets that we used were not very rich with respect to all the communicative functions distinguished in the various dimensions: some classes were underrepresented.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For future work, we intend to extended the studies into two directions. First, we plan to increase the size of our dataset to obtain a suff cient number of instances for each class by manually segmenting and annotating more dialogue data with both segmentations. This would allow us to get a fair indication of the classif cation performance of general purpose functions in dimensions other than Task and Feedback. Furthermore, we plan to consider multi-party interactions (the AMI sessions for instance) and use other modalities besides speech audio in comparing both segmentations. We expect that for such data, dialogue act classif cation may benef t more from using per-dimension segmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we use the terms 'dialogue act' and 'communicative function' synonymously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "See e.g.(Clark, 2003) for an overview.3 \u0100ugmented M ulti-party \u012anteraction (http://www. amiproject.org/).4 Meeting participants play different roles in a f ctitious design team that takes a new project from kick-off to completion over the course of a day.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For more information about the tagset and the dimensions that are identif ed, please visit: http://dit.uvt.nl/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to reduce the effect of imbalances in the data, it is partitioned ten times. Each time a different 10% of the data is used as test set and the remaining 90% as training set. The procedure is repeated ten times so that in the end, every instance has been used exactly once for testing(Witten and Frank, 2000) and the scores are averaged. The cross-validation was stratif ed, i.e. the 10 folds contained approximately the same proportions of instances with relevant tags as in the entire dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
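The stratified 10-fold procedure described in this footnote can be sketched in a few lines. This is a minimal illustration of the idea, not the toolkit actually used for the experiments: instances of each class are dealt round-robin over the folds so that every instance lands in exactly one test fold and every fold roughly mirrors the class proportions of the full dataset.

```python
# Minimal sketch of stratified k-fold splitting: every instance is used
# exactly once for testing, and each fold approximately preserves the
# class distribution of the whole dataset (illustrative only).
from collections import defaultdict

def stratified_folds(labels, k=10):
    """Return a list of k folds, each a list of instance indices."""
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        # Deal each class's instances round-robin over the folds so that
        # every fold gets approximately the same class distribution.
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return folds

labels = ["stall"] * 70 + ["inform"] * 30   # a skewed toy label set
folds = stratified_folds(labels, k=10)

# Each instance appears in exactly one test fold ...
assert sorted(i for fold in folds for i in fold) == list(range(100))
# ... and each fold mirrors the 70/30 class balance (7 'stall' per fold).
assert all(sum(labels[i] == "stall" for i in fold) == 7 for fold in folds)
```

Each of the k folds then serves once as the test set, with the remaining k-1 folds used for training, and the k accuracy scores are averaged.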
|
{ |
|
"text": "In our data, at most four functions occurred simultaneously.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The structure of a rule is: if (feature = x) and (feature= x, etc.) =\u21d2 class (n/m), where x is a nominal feature value, an element of a set feature, or a range of a numeric feature; n indicates the number of instances a rule covers and m the number of false predictions. We illustrate the induced rules with some interesting examples from the training set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Appendix A: Selected RIPPER rules illustrated with corpus examples", |
|
"sec_num": null |
|
}, |
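The rule structure described above can be made concrete with a small sketch of how such condition lists are evaluated against a segment's features. This is an illustrative reconstruction, not RIPPER itself; the feature names follow the rules listed in this appendix, the feature values are invented, and 'p' marks word presence as in the rules below.

```python
# Sketch of evaluating RIPPER-style rules of the form
#   if (feature = x) and (feature <= y) ... ==> class (n/m)
# against a segment's feature dictionary (illustrative only).

def matches(rule, features):
    """A rule fires when all of its conditions hold for the segment."""
    for name, op, value in rule["conditions"]:
        actual = features.get(name)
        if op == "=" and actual != value:
            return False
        if op == "<=" and not (actual is not None and actual <= value):
            return False
        if op == ">=" and not (actual is not None and actual >= value):
            return False
    return True

# One of the induced rules below, in this representation:
rule = {
    "conditions": [("right", "=", "p"), ("max.pitch", "<=", 203.87)],
    "class": "task:check",
    "coverage": (8, 2),   # n instances covered, m false predictions
}

segment = {"right": "p", "max.pitch": 198.4, "duration": 1.2}
if matches(rule, segment):
    print(rule["class"])   # task:check
```

A rule list is then applied in order, assigning the class of the first rule that fires and falling back to a default class otherwise.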
|
{ |
|
"text": "(it = p) and (wouldnt = p) =\u21d2 da=task:check (5.0/1.0) (right = p) and (max.pitch <= 203.87) =\u21d2 da=task:check (8.0/2.0) Example: (1052:88-1057:12) D: We were given sort of an example of a coffee machine or something, right? (dimension: Task, GP:CHECK; FT: task:check) (reply to = task;ynq) =\u21d2 da=task:yna (60.0/22.0) (reply to = task;ynq;t give) =\u21d2 da=task:yna (2.0/0.0) (reply to = task;ynq;t grab) =\u21d2 da=task:yna (2.0/0.0) (reply to = task;ynq;t release) =\u21d2 da=task:yna (3.0/1.0) Example: (1407:56-1413:72) B: Do you think maybe we need like further advances in that kind of area until it's worthwhile incorporating it though (dimension:Task; GP: YN-QUESTION; FT: task:ynq) (1412:96-1415:6) C: I , think , it'd , probably , quite , expensive , to , put , in (dimension:Task; GP: YN-ANSWER; FT: task:yna) (yeah = p) and (dss reply <= -3.920044) and (duration >= 0.56) and (min.pitch >= 95.007) =\u21d2 da=task:inf.agree (27.0/8.0) (yeah = p) and (fraction:voiced/unvoiced >= 0.36634) and (dss reply \u00a1= -0.52002) and (fraction:voiced/unvoiced <= 0.46875) =\u21d2 da=task:inf.agree (8.0/1.0) (yeah = p) and (energy >= 56.862651) and (mean.pitch <= 144.971) =\u21d2 da=task:inf.agree (9.0/2.0) (dss reply <= -0.359985) and (sure = p) and (max.pitch <= 187.065) =\u21d2 da=task:inf.agree (8.0/0.0) (yeah = p) and (U3 = turn:t keep;time:stal) =\u21d2 da=task:inf.agree (14.0/6.0) Example: (1277:88-1286:28) D: but people who are about forty-ish and above now would not be so dependent and reliant on a computer or mobile phone (dimension:Task; GP:INFORM; FT:task;inf ) (1284:32-1286:16) D: Yeah, sure (dimension: Task; GP:INFORM AGREEMENT; FT: task:inf.agree)inf.just (26.0/9.0) (dss reply <= -1.52002) and (voice breaks >= 4) and (energy >= 54.435098) and (mean.pitch <= 173.572) =\u21d2 da=task:inf.ela (51.0/21.0) Example: (1396:84-1403:76) C: One problem with speech recognition is the technology that was in that one wasn't particularly amazing 
(dimension: Task; GP: INFORM WARNING; FT: task:inf.warn) (maybe = p) and (dss reply >= 0) =\u21d2 da=task:suggest (38.0/11.0) (duration >= 2.12) and (reply to = ) and (might = p) =\u21d2 da=task:suggest (12.0/4.0) Example: (1694:6-1703:48) B: It might be a good idea just to restrict our creative inf uence on this and not worry so much about how we transmit it (dimension:Task; GP: SUGGESTION; FT:task;suggest) (1704:4-1708:44) B: because I mean it tried and tested intra-red (dimension:Task; GP: INFORM JUSTIFY; FT:task:inf.just) Auto-Feedback: (dss reply <= -0.039978) and (break <= 1) =\u21d2 da=au f:au f p ex (168.0/24.0) (dss reply <= -0.039917) and (duration <= 1.08) and (okay = p) =\u21d2 da=au f:au f p ex (84.0/8.0) (dss reply <= -0.039978) and (break <= 1) and (mmhmm = p) =\u21d2 da=au f:au f p ex (34.0/1.0) (dss reply <= -0.039978) and (break <= 3) and (voclaugh = p) =\u21d2 da=au f:au f p ex (25.0/2.0) (okay = p) and (energy <= 56.617891) and (duration >= (um = p) and (dss reply <= -1.199997) =\u21d2 da=turn:t acc;t keep;time:stal (13.0/6.0) (well = p) and (dss within <= -0.159912) and (duration <= 0.72) =\u21d2 da=turn:t grab;t keep (9.0/3.0) (um = p) and (dse within >= 0.040039) and (dse within <= 1.040039) and (min.pitch >= 107.875) =\u21d2 da=turn:t grab;t keep;time:stal (18.0/4.0) (well = p) and (dss within <= -1.119995) =\u21d2 da=turn:t grab;t keep;time:stal (6.0/2.0) (um = p) and (dse within <= 0) and (energy <= 49.86226) and (mean.pitch >= 114.669) =\u21d2 da=turn:t take;t keep;time:stal ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Management:", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "An activity-based approach to pragmatics", |
|
"authors": [ |
|
{ |
|
"first": "Jens", |
|
"middle": [ |
|
"Allwood" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Studies in Computational Pragmatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "47--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jens Allwood. 2000. An activity-based approach to pragmat- ics. In Harry Bunt and William Black, editors, Abduction, Belief and Context in Dialogue; Studies in Computational Pragmatics, pages 47-80. John Benjamins, Amsterdam, The Netherlands.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Def ning interoperable concepts for dialogue act annotation", |
|
"authors": [ |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanda", |
|
"middle": [], |
|
"last": "Schiffrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Seventh International Workshop on Computational Semantics (IWCS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--27", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harry Bunt and Amanda Schiffrin. 2007. Def ning interopera- ble concepts for dialogue act annotation. In Proceedings of the Seventh International Workshop on Computational Se- mantics (IWCS), pages 16-27.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Dimensions in dialogue annotation", |
|
"authors": [ |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harry Bunt. 2006. Dimensions in dialogue annotation. In Pro- ceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Machine learning approaches to shallow discourse parsing: A literature review", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "IM2.MDM Project Deliverable", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Clark. 2003. Machine learning approaches to shal- low discourse parsing: A literature review. IM2.MDM Project Deliverable, March.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Fast effective rule induction", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of the 12th International Conference on Machine Learning (ICML'95)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "115--123", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William W. Cohen. 1995. Fast effective rule induction. In Pro- ceedings of the 12th International Conference on Machine Learning (ICML'95), pages 115-123.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Coding dialogues with the DAMSL annotation scheme", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Mark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Core", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Working Notes: AAAI Fall Symposium on Communicative Action in Humans and Machines", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "28--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark G. Core and James F. Allen. 1997. Coding dialogues with the DAMSL annotation scheme. In Working Notes: AAAI Fall Symposium on Communicative Action in Humans and Machines, pages 28-35.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Forgetting exceptions is harmful in language learning", |
|
"authors": [ |
|
{ |
|
"first": "Walter", |
|
"middle": [], |
|
"last": "Daelemans", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Machine Learning", |
|
"volume": "34", |
|
"issue": "", |
|
"pages": "11--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Walter Daelemans, Antal van den Bosch, and Jakub Zavrel. 1999. Forgetting exceptions is harmful in language learn- ing. Machine Learning, 34(1/3):11-43.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The diamond project. Poster at the 8th Workshop on the Semantics and Pragmatics of Dialogue", |
|
"authors": [ |
|
{ |
|
"first": "Jeroen", |
|
"middle": [], |
|
"last": "Geertzen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Girard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roser", |
|
"middle": [], |
|
"last": "Morante", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeroen Geertzen, Yann Girard, and Roser Morante. 2004. The diamond project. Poster at the 8th Workshop on the Seman- tics and Pragmatics of Dialogue (CATALOG 2004)", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Machine learning for shallow interpretation of user utterances in spoken dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "Piroska", |
|
"middle": [], |
|
"last": "Lendvai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Van Den", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Bosch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of EACL-03 Workshop on Dialogue Systems: interaction, adaptation and styles of management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "69--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piroska Lendvai, Antal van den Bosch, and Emiel Krahmer. 2003. Machine learning for shallow interpretation of user utterances in spoken dialogue systems. In Proceedings of EACL-03 Workshop on Dialogue Systems: interaction, adaptation and styles of management, pages 69-78.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "On the use of prosody in automatic dialogue understanding", |
|
"authors": [ |
|
{ |
|
"first": "Elmar", |
|
"middle": [], |
|
"last": "N\u00f6th", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anton", |
|
"middle": [], |
|
"last": "Batliner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johannes-Peter", |
|
"middle": [], |
|
"last": "Volker Warnke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manuela", |
|
"middle": [], |
|
"last": "Haas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Boros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Buckow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Huber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthias", |
|
"middle": [], |
|
"last": "Gallwitz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heinrich", |
|
"middle": [], |
|
"last": "Nutt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Niemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Speech Communication", |
|
"volume": "36", |
|
"issue": "1-2", |
|
"pages": "45--62", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elmar N\u00f6th, Anton Batliner, Volker Warnke, Johannes- Peter Haas, Manuela Boros, Jan Buckow, Richard Huber, Florian Gallwitz, Matthias Nutt, and Heinrich Niemann. 2002. On the use of prosody in automatic dialogue under- standing. Speech Communication, 36(1-2):45-62.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A multidimensional approach to multimodal dialogue act annotation", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Volha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harry", |
|
"middle": [], |
|
"last": "Petukhova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bunt", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Seventh International Workshop on Computational Semantics (IWCS)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--153", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Volha V. Petukhova and Harry Bunt. 2007. A multidimen- sional approach to multimodal dialogue act annotation. In Proceedings of the Seventh International Workshop on Com- putational Semantics (IWCS), pages 142-153.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Can prosody aid the automatic classif cation of dialog acts in conversational speech?", |
|
"authors": [ |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Shriberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Bates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Ries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Coccaro", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Language and Speech (Special Issue on Prosody and Conversation)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "439--487", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elizabeth Shriberg, Rebecca Bates, Andreas Stolcke, Paul Tay- lor, Daniel Jurafsky, Klaus Ries, Noah Coccaro, Rachel Mar- tin, Marie Meteer, and Carol Van Ess-Dykema. 1998. Can prosody aid the automatic classif cation of dialog acts in con- versational speech? Language and Speech (Special Issue on Prosody and Conversation), 41(3-4):439-487.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech", |
|
"authors": [ |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Stolcke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Ries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Coccaro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elizabeth", |
|
"middle": [], |
|
"last": "Shriberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Bates", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rachel", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carol", |
|
"middle": [], |
|
"last": "Van Ess-Dykema", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie", |
|
"middle": [], |
|
"last": "Meteer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computational Linguistics", |
|
"volume": "26", |
|
"issue": "3", |
|
"pages": "339--373", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Lin- guistics, 26(3):339-373.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Data mining: practical machine learning tools and techniques with Java implementations", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eibe", |
|
"middle": [], |
|
"last": "Witten", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian H. Witten and Eibe Frank. 2000. Data mining: practical machine learning tools and techniques with Java implemen- tations. Morgan Kaufmann Publishers, San Francisco:CA, USA.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "(4) U : Could you tell me what time there are f ights to Kuala Lumpur on Monday? S: There are two early KLM f ights, at 7.30 and at 8:25,.. U : Yes,... S: ... and a midday f ight by Garoeda at 12.10,... U : Yes,... S: And there's late afternoon f ight by Malaysian Airways at 17.55.", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "As a result of difference in function type, a tag consists either of a pair of the addressed dimension (D) and general purpose function (GP ) or the addressed dimension and dimension specif c function (DS). Some functional segments can address several dimensions simultaneously. For example, utterances like uhm.., ehm.. have the communicative function of STALLING in the dimension Time Management, but also have the TURN KEEPING function in the Turn Management dimension. These utterances typically have two D, DS tags assigned: T imeM, ST ALLIN G and T urnM, KEEP IN G .", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"content": "<table><tr><td>AMI data</td><td/><td colspan=\"2\">DIAMOND data</td></tr><tr><td>Tag</td><td>Perc.</td><td>Tag</td><td>Perc.</td></tr><tr><td>Time;STALLING</td><td>20.7</td><td>Task;INSTRUCT</td><td>14.8</td></tr><tr><td>Auto-FB;POS.OVERALL</td><td>18.7</td><td>Task;INFORM</td><td>7.7</td></tr><tr><td>Turn;Turn Keeping</td><td>7.5</td><td>Time;stall</td><td>6.5</td></tr><tr><td>Task;INFORM</td><td>6.8</td><td>Task;INFORM elaborate</td><td>6.3</td></tr><tr><td>Task;INFORM Elaborate</td><td>3.5</td><td>Auto-FB;POS.OVERALL</td><td>6.2</td></tr><tr><td>Task;INF.Agreement</td><td>2.5</td><td>Task;WH-Question</td><td>4.5</td></tr><tr><td>Task;YN-Question</td><td>2.3</td><td>Auto-FB;POS.INT</td><td>3.1</td></tr><tr><td>Task;SUGGEST</td><td>2.0</td><td>Task;YN-Question</td><td>2.9</td></tr><tr><td>Task;INFORM Justify</td><td>2.0</td><td>Task;CHECK</td><td>2.6</td></tr><tr><td>Task;CHECK</td><td>1.6</td><td>Task:INFORM Clarify</td><td>2.1</td></tr></table>", |
|
"num": null, |
|
"text": "gives percentages of occurrence of the ten most frequently observed tags in both training sets.", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Percentage of instances for most frequent tags in the AMI and DIAMOND training sets.", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table><tr><td>Dimension</td><td colspan=\"2\">OSFAD PDS</td><td/></tr><tr><td>Task</td><td>66.1</td><td>72.8</td><td>*</td></tr><tr><td>Auto Feedback</td><td>80.4</td><td>86.3</td><td>*</td></tr><tr><td>Allo Feedback</td><td>98.4</td><td>99.6</td><td/></tr><tr><td>Turn M.</td><td>88.3</td><td>90.0</td><td/></tr><tr><td>Time M.</td><td>72.6</td><td>82.1</td><td>*</td></tr><tr><td>Contact M.</td><td>97.3</td><td>97.3</td><td/></tr><tr><td>Topic M.</td><td>55.2</td><td>55.2</td><td/></tr><tr><td>Own Communication M.</td><td>85.9</td><td>87.1</td><td/></tr><tr><td>Partner Communication M.</td><td>64.5</td><td>64.5</td><td/></tr><tr><td>Dialogue Structuring</td><td>74.3</td><td>74.3</td><td/></tr><tr><td>Social Obligations M.</td><td>93.2</td><td>93.3</td><td/></tr></table>", |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table><tr><td>one segmentation that f ts all dimensions (OSFAD) and per-</td></tr><tr><td>dimension segmentation (PDS).</td></tr><tr><td/></tr></table>", |
|
"num": null, |
|
"text": "Accuracy scores for communicative functions with", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"content": "<table><tr><td>Classif cation task</td><td>BL</td><td colspan=\"3\">NBayes Ripper IB1</td></tr><tr><td>Dimension tag</td><td colspan=\"2\">38.0 69.5</td><td>72.8</td><td>50.4</td></tr><tr><td>Task management</td><td colspan=\"2\">66.8 71.2</td><td>72.3</td><td>53.6</td></tr><tr><td>Auto-Feedback</td><td colspan=\"2\">77.9 86.0</td><td>89.7</td><td>85.9</td></tr><tr><td>Turn initial</td><td colspan=\"2\">93.2 92.9</td><td>93.2</td><td>88.0</td></tr><tr><td>Turn closing</td><td colspan=\"2\">58.9 85.1</td><td>91.1</td><td>69.6</td></tr><tr><td>Time management</td><td colspan=\"2\">69.7 99.2</td><td>99.4</td><td>99.5</td></tr><tr><td>OCM</td><td colspan=\"2\">89.6 90.0</td><td>94.1</td><td>85.6</td></tr><tr><td>Functional tag</td><td colspan=\"2\">25.7 48.0</td><td>50.2</td><td>38.9</td></tr></table>", |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"content": "<table><tr><td>: Overview of accuracy on the baseline (BL) and the</td></tr><tr><td>classif ers on all classif cation tasks</td></tr><tr><td>all algorithms outperform the baseline by a broad</td></tr><tr><td>margin. Ripper clearly outperforms the other two</td></tr><tr><td>learners. The middle part of the table gives an</td></tr><tr><td>overview of the performance of the tested classif ers</td></tr><tr><td>on communicative functions per dimension. Rip-</td></tr><tr><td>per again outperforms Naive Bayes and IB1.</td></tr></table>", |
|
"num": null, |
|
"text": "", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |