{
"paper_id": "N03-2012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:14.650216Z"
},
"title": "DETECTION OF AGREEMENT vs. DISAGREEMENT IN MEETINGS: TRAINING WITH UNLABELED DATA",
"authors": [
{
"first": "Dustin",
"middle": [],
"last": "Hillard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"region": "EE"
}
},
"email": "[email protected]"
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"region": "EE"
}
},
"email": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {
"region": "EE"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To support summarization of automatically transcribed meetings, we introduce a classifier to recognize agreement or disagreement utterances, utilizing both word-based and prosodic cues. We show that hand-labeling efforts can be minimized by using unsupervised training on a large unlabeled data set combined with supervised training on a small amount of data. For ASR transcripts with over 45% WER, the system recovers nearly 80% of agree/disagree utterances with a confusion rate of only 3%.",
"pdf_parse": {
"paper_id": "N03-2012",
"_pdf_hash": "",
"abstract": [
{
"text": "To support summarization of automatically transcribed meetings, we introduce a classifier to recognize agreement or disagreement utterances, utilizing both word-based and prosodic cues. We show that hand-labeling efforts can be minimized by using unsupervised training on a large unlabeled data set combined with supervised training on a small amount of data. For ASR transcripts with over 45% WER, the system recovers nearly 80% of agree/disagree utterances with a confusion rate of only 3%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Meetings are an integral component of life in most organizations, and records of meetings are important for helping people recall (or learn for the first time) what took place in a meeting. Audio (or audio-visual) recordings of meetings offer a complete record of the interactions, but listening to the complete recording is impractical. To facilitate browsing and summarization of meeting recordings, it is useful to automatically annotate topic and participant interaction characteristics. Here, we focus on interactions, specifically identifying agreement and disagreement. These categories are particularly important for identifying decisions in meetings and inferring whether the decisions are controversial, which can be useful for automatic summarization. In addition, detecting agreement is important for associating action items with meeting participants and for understanding social dynamics. In this study, we focus on detection using both prosodic and language cues, contrasting results for handtranscribed and automatically transcribed data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The agreement/disagreement labels can be thought of as a sort of speech act categorization. Automatic classification of speech acts has been the subject of several studies. Our work builds on (Shriberg et al., 1998) , which showed that prosodic features are useful for classifying speech acts and lead to increased accuracy when combined with word based cues. Other studies look at prediction of speech acts primarily from word-based cues, using language models or syntactic structure and discourse history (Chu-Carroll, 1998; Reithinger and Klesen, 1997) . Our work is informed by these studies, but departs significantly by exploring unsupervised training techniques.",
"cite_spans": [
{
"start": 192,
"end": 215,
"text": "(Shriberg et al., 1998)",
"ref_id": "BIBREF6"
},
{
"start": 507,
"end": 526,
"text": "(Chu-Carroll, 1998;",
"ref_id": "BIBREF3"
},
{
"start": 527,
"end": 555,
"text": "Reithinger and Klesen, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments are based on a subset of meeting recordings collected and transcribed by ICSI (Morgan et al., 2001) . Seven meetings were segmented (automatically, but with human adjustment) into 9854 total spurts. We define a 'spurt' as a period of speech by one speaker that has no pauses of greater than one half second (Shriberg et al., 2001) . Spurts are used here, rather than sentences, because our goal is to use ASR outputs and unsupervised training paradigms, where hand-labeled sentence segmentations are not available.",
"cite_spans": [
{
"start": 94,
"end": 115,
"text": "(Morgan et al., 2001)",
"ref_id": "BIBREF4"
},
{
"start": 323,
"end": 346,
"text": "(Shriberg et al., 2001)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
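The spurt definition above amounts to a simple grouping rule over time-aligned words. A minimal sketch (the `(token, start, end)` input format and the `segment_spurts` helper are illustrative assumptions, not the ICSI tooling):

```python
# Group one speaker's time-aligned words into spurts, starting a new
# spurt whenever the silence gap between words exceeds 0.5 seconds.

def segment_spurts(words, max_pause=0.5):
    """words: list of (token, start_sec, end_sec) for a single speaker."""
    spurts, current, prev_end = [], [], None
    for token, start, end in words:
        if prev_end is not None and start - prev_end > max_pause:
            spurts.append(current)
            current = []
        current.append(token)
        prev_end = end
    if current:
        spurts.append(current)
    return spurts

words = [("yeah", 0.0, 0.3), ("that", 0.4, 0.6), ("works", 0.6, 1.0),
         ("so", 2.0, 2.2), ("moving", 2.3, 2.7), ("on", 2.7, 2.9)]
print(segment_spurts(words))  # [['yeah', 'that', 'works'], ['so', 'moving', 'on']]
```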
{
"text": "We define four categories: positive, backchannel, negative, and other. Frequent single-word spurts (specifically, yeah, right, yep, uh-huh, and ok) are separated out from the 'positive' category as backchannels because of the trivial nature of their detection and because they may reflect encouragement for the speaker to continue more than actual agreement. Examples include:",
"cite_spans": [
{
"start": 99,
"end": 147,
"text": "(specifically, yeah, right, yep, uh-huh, and ok)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "Neg: (6%) \"This doesn't answer the question.\" Pos: (9%) \"Yeah, that sounds great.\" Back: (23%) \"Uh-huh.\" Other: (62%) \"Let's move on to the next topic.\" The first 450 spurts in each of four meetings were hand-labeled with these four categories based on listening to speech while viewing transcripts (so a sarcastic \"yeah, right\" is labeled as a disagreement despite the positive wording). Comparing tags on 250 spurts from two labelers produced a kappa coefficient (Siegel and Castellan, 1988) of .6, which is generally considered acceptable. Additionally, unlabeled spurts from six hand-transcribed training meetings are used in unsupervised training experiments, as described later. The total number of automatically labeled spurts (8094) is about five times the amount of hand-labeled data.",
"cite_spans": [
{
"start": 465,
"end": 493,
"text": "(Siegel and Castellan, 1988)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
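For reference, the kappa coefficient is observed labeler agreement corrected for the agreement two independent labelers would reach by chance. A small sketch of Cohen's kappa (the label sequences are invented):

```python
from collections import Counter

def kappa(labels_a, labels_b):
    """Cohen's kappa: (P_o - P_e) / (1 - P_e), where P_o is observed
    agreement and P_e is chance agreement from each labeler's marginals."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["pos", "back", "other", "neg", "other", "pos"]
b = ["pos", "back", "other", "other", "other", "back"]
print(round(kappa(a, b), 2))  # ~0.54 on this toy pair
```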
{
"text": "For system development and as a control, we use handtranscripts in learning word-based cues and in training. We then evaluate the model with both hand-transcribed words and ASR output. The category labels from the hand transcriptions are mapped to the ASR transcripts, assigning an ASR spurt to a hand-labeled reference if more than half (time wise) of the ASR spurt overlaps the reference spurt. Feature Extraction. The features used in classification include heuristic word types and counts, word-based features derived from n-gram scores, and prosodic features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
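The mapping rule lends itself to a direct sketch (the tuple formats and the `map_labels` helper are hypothetical):

```python
# Assign each ASR spurt the label of a reference spurt iff more than half
# of the ASR spurt's duration (time-wise) overlaps that reference spurt.

def map_labels(asr_spurts, ref_spurts):
    """asr_spurts: list of (start, end); ref_spurts: list of (start, end, label)."""
    labels = []
    for a_start, a_end in asr_spurts:
        duration = a_end - a_start
        label = None
        for r_start, r_end, r_label in ref_spurts:
            overlap = min(a_end, r_end) - max(a_start, r_start)
            if overlap > 0.5 * duration:
                label = r_label
                break
        labels.append(label)  # None: no reference covers a majority of the spurt
    return labels

ref = [(0.0, 2.0, "pos"), (2.5, 5.0, "other")]
asr = [(0.1, 1.8), (1.9, 4.8), (5.2, 6.0)]
print(map_labels(asr, ref))  # ['pos', 'other', None]
```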
{
"text": "Simple word-based features include: the total number of words in a spurt, the number of \"positive\" and \"negative\" keywords, and the class (positive, negative, backchannel, discourse marker, other) of the first word based on the keywords. The keywords were chosen based on an \"effectiveness ratio,\" defined as the frequency of a word (or word pair) in the desired class divided by the frequency over all dissimilar classes combined. A minimum of five occurrences was required and then all instances with a ratio greater than .6 were selected as keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
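A sketch of this selection procedure with invented counts; it reads "frequency" as raw counts, which is one plausible interpretation:

```python
from collections import Counter

def select_keywords(spurts, target, min_count=5, min_ratio=0.6):
    """Pick keywords for class `target` from (tokens, label) pairs:
    a word needs at least `min_count` total occurrences, and its count in
    the target class divided by its combined count in all other classes
    (the 'effectiveness ratio') must exceed `min_ratio`."""
    in_class, out_class = Counter(), Counter()
    for tokens, label in spurts:
        (in_class if label == target else out_class).update(tokens)
    keywords = []
    for word, n_in in in_class.items():
        if (n_in + out_class[word] >= min_count
                and n_in / max(out_class[word], 1) > min_ratio):  # guard zero case
            keywords.append(word)
    return keywords

data = [(["no", "that", "fails"], "neg"), (["no", "way"], "neg"),
        (["yeah", "sure"], "pos")] * 3
print(select_keywords(data, "neg"))  # ['no']
```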
{
"text": "Other word-based features are found by computing the perplexity (average log probability) of the sequence of words in a spurt using a bigram language model (LM) for each of the four classes. The perplexity indicates the goodness of fit of a spurt to each class. We used both word and class LMs (with part-of-speech classes for all words except keywords). In addition, the word-based LM is used to score the first two words of the spurt, which often contain the most information about agreement and disagreement. The label of the most likely class for each type of LM is a categorical feature, and we also compute the posterior probability for each class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
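A toy illustration of the per-class scoring (the add-one-smoothed bigram model and uniform class priors are simplifying assumptions; the real LMs and training data are far richer):

```python
import math
from collections import Counter

class BigramLM:
    """Toy add-one-smoothed bigram LM standing in for a per-class model."""
    def __init__(self, spurts):
        self.bigrams, self.contexts, self.vocab = Counter(), Counter(), set()
        for tokens in spurts:
            padded = ["<s>"] + tokens + ["</s>"]
            self.vocab.update(padded)
            self.contexts.update(padded[:-1])
            self.bigrams.update(zip(padded, padded[1:]))

    def logprob(self, tokens):
        padded = ["<s>"] + tokens + ["</s>"]
        v = len(self.vocab) + 1  # +1 for unseen words
        return sum(math.log((self.bigrams[(p, w)] + 1) / (self.contexts[p] + v))
                   for p, w in zip(padded, padded[1:]))

# One LM per class; the best-scoring class and the per-class posteriors
# both become classifier features.
lms = {"pos": BigramLM([["yeah", "that", "sounds", "great"]]),
       "neg": BigramLM([["this", "doesn't", "answer", "the", "question"]])}
spurt = ["yeah", "great"]
scores = {c: lm.logprob(spurt) for c, lm in lms.items()}
total = sum(math.exp(s) for s in scores.values())
print(max(scores, key=scores.get),            # most likely class: 'pos'
      {c: math.exp(s) / total for c, s in scores.items()})
```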
{
"text": "Prosodic features include pause, fundamental frequency (F0), and duration (Baron et al., 2002) . Features are derived for the first word alone and for the entire spurt. Average, maximum and initial pause duration features are used. The F0 average and maximum features are computed using different methods for normalizing F0 relative to a speaker-dependent baseline, mean and max. For duration, the average and maximum vowel duration from a forced alignment are used, both unnormalized and normalized for vowel identity and phone context. Spurt length in terms of number of words is also used.",
"cite_spans": [
{
"start": 74,
"end": 94,
"text": "(Baron et al., 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
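As one concrete example of the F0 normalization idea (the frame values, the unvoiced-frame convention, and the baseline estimate are all invented for illustration):

```python
import statistics

def f0_features(f0_frames, speaker_baseline):
    """Spurt-level F0 features normalized by a speaker-dependent baseline.
    Frames with value 0 are treated as unvoiced and skipped."""
    voiced = [f for f in f0_frames if f > 0]
    if not voiced:
        return {"f0_mean_norm": None, "f0_max_norm": None}
    return {"f0_mean_norm": statistics.mean(voiced) / speaker_baseline,
            "f0_max_norm": max(voiced) / speaker_baseline}

print(f0_features([0.0, 110.0, 130.0, 0.0, 150.0], speaker_baseline=100.0))
# {'f0_mean_norm': 1.3, 'f0_max_norm': 1.5}
```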
{
"text": "Classifier design and feature selection. The overall approach to classifying spurts uses a decision tree classifier (Breiman et al., 1984) to combine the word based and prosodic cues. In order to facilitate learning of cues for the less frequent classes, the data was upsampled (duplicated) so that there were the same number of training points per class. The decision tree size was determined using error-based cost-complexity pruning with 4-fold cross validation. To reduce our initial candidate feature set, we used an iterative feature selection algorithm that involved running multiple decision trees (Shriberg et al., 2000) . The algorithm combines elements of brute-force search (in a leave-one-out paradigm) with previously de-termined heuristics for narrowing the search space. We used entropy reduction of the tree after cross-validation as a criterion for selecting the best subtree.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Breiman et al., 1984)",
"ref_id": "BIBREF2"
},
{
"start": 606,
"end": 629,
"text": "(Shriberg et al., 2000)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
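A minimal sketch of this training setup using scikit-learn as a stand-in for the CART-style tree (the toy features and the fixed `ccp_alpha` are assumptions; the paper chose the tree size by 4-fold cross-validation):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

def upsample(X, y, rng=0):
    """Duplicate each class's rows so all classes have equal counts."""
    classes, counts = np.unique(y, return_counts=True)
    parts = [resample(X[y == c], y[y == c], n_samples=counts.max(),
                      replace=True, random_state=rng) for c in classes]
    return np.vstack([p[0] for p in parts]), np.concatenate([p[1] for p in parts])

# Toy feature rows: [n_words, n_pos_keywords, n_neg_keywords]
X = np.array([[2, 1, 0], [5, 0, 1], [1, 0, 0], [8, 0, 0], [7, 0, 0], [6, 0, 0]])
y = np.array(["pos", "neg", "back", "other", "other", "other"])
Xb, yb = upsample(X, y)
# ccp_alpha enables cost-complexity pruning; its value would come from
# cross-validation in a real run.
tree = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(Xb, yb)
print(tree.predict([[3, 1, 0]]))
```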
{
"text": "Unsupervised training. In order to train the models with as much data as possible, we used an unsupervised clustering strategy for incorporating unlabeled data. Four bigram models, one for each class, were initialized by dividing the hand transcribed training data into the four classes based upon keywords. First, all spurts which contain the negative keywords are assigned to the negative class. Backchannels are then pulled out when a spurt contains only one word and it falls in the backchannel word list. Next, spurts are selected as agreements if they contain positive keywords. Finally, the remaining spurts are associated with the \"other\" class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
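The ordering of these rules matters (negatives claim a spurt before backchannels or positives can). A sketch; the backchannel word list is from the paper, while the positive and negative keyword lists here are placeholders for the ones learned via the effectiveness ratio:

```python
NEG_WORDS = {"no", "not", "doesn't"}                       # hypothetical
POS_WORDS = {"yeah", "right", "sure", "great", "ok"}       # hypothetical
BACKCHANNELS = {"yeah", "right", "yep", "uh-huh", "ok"}    # from the paper

def initial_class(tokens):
    """Seed labeling for unsupervised training, applied in the stated order."""
    if any(w in NEG_WORDS for w in tokens):
        return "neg"
    if len(tokens) == 1 and tokens[0] in BACKCHANNELS:
        return "back"
    if any(w in POS_WORDS for w in tokens):
        return "pos"
    return "other"

for spurt in (["no", "way"], ["uh-huh"], ["yeah", "sounds", "great"],
              ["next", "topic"]):
    print(spurt, "->", initial_class(spurt))
```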
{
"text": "The keyword separation gives an initial grouping; further regrouping involves unsupervised clustering using a maximum likelihood criterion. A preliminary language model is trained for each of the initial groups. Then, by evaluating each spurt in the corpus against each of the four language models, new groups are formed by associating spurts with the language model that produces the lowest perplexity. New language models are then trained for the reorganized groups and the process is iterated until there is no movement between groups. The final class assignments are used as \"truth\" for unsupervised training of language and prosodic models, as well as contributing features to decision trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
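A self-contained sketch of the reassignment loop (a toy add-one unigram scorer replaces the bigram LMs, and the iteration cap is a safety net; note that minimizing perplexity equals maximizing log probability for a fixed-length spurt):

```python
import math
from collections import Counter

def train_lm(spurts):
    """Toy add-one unigram scorer (the paper used bigram LMs)."""
    counts = Counter(w for s in spurts for w in s)
    total, vocab = sum(counts.values()), len(counts) + 1
    if total == 0:                       # empty class: score as very unlikely
        return lambda s: float("-inf")
    return lambda s: sum(math.log((counts[w] + 1) / (total + vocab)) for w in s)

def cluster(spurts, seed_labels, classes, max_iters=20):
    """Retrain one LM per class and reassign each spurt to its best-scoring
    class, iterating until no spurt moves between groups."""
    labels = list(seed_labels)
    for _ in range(max_iters):
        lms = {c: train_lm([s for s, l in zip(spurts, labels) if l == c])
               for c in classes}
        new = [max(classes, key=lambda c: lms[c](s)) for s in spurts]
        if new == labels:
            break
        labels = new
    return labels

spurts = [["no", "way"], ["no", "thanks"], ["yeah", "sure"], ["yeah", "ok"],
          ["next", "topic"], ["next", "item"]]
seeds = ["neg", "neg", "pos", "pos", "other", "other"]
print(cluster(spurts, seeds, classes=("neg", "pos", "other")))
```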
{
"text": "Hand-labeled data from one meeting is held out for test data, and the hand-labeled subset of three other meetings are used for training decision trees. Unlabeled spurts taken from six meetings, different from the test meeting, are used for unsupervised training. Performance is measured in terms of overall 3-way classification accuracy, merging the backchannel and agreement classes. The overall accuracy results can be compared to the \"chance\" rate of 50%, since testing is on 4-way upsampled data. In addition, we report the confusion rate between agreements and disagreements and their recovery (recall) rate, since these two classes are most important for our application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3"
},
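For concreteness, a sketch of the three reported quantities (label sequences invented; "confusion" here is read as agreements tagged as disagreements or vice versa, which is one plausible interpretation):

```python
def report(ref, hyp):
    """3-way accuracy (backchannels merged into agreements), agree/disagree
    recovery (recall over reference pos/neg spurts), and agree/disagree
    confusion (reference pos/neg spurts tagged with the opposite polarity)."""
    merge = lambda c: "pos" if c == "back" else c
    ref3 = [merge(c) for c in ref]
    hyp3 = [merge(c) for c in hyp]
    acc = sum(r == h for r, h in zip(ref3, hyp3)) / len(ref3)
    ad = [(r, h) for r, h in zip(ref3, hyp3) if r in ("pos", "neg")]
    recovery = sum(r == h for r, h in ad) / len(ad)
    confusion = sum((r, h) in {("pos", "neg"), ("neg", "pos")} for r, h in ad) / len(ad)
    return acc, recovery, confusion

ref = ["pos", "neg", "back", "other", "pos", "neg"]
hyp = ["pos", "other", "pos", "other", "neg", "neg"]
print(report(ref, hyp))  # (~0.667, 0.6, 0.2) on this toy pair
```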
{
"text": "Results are presented in Table 1 for models using only word-based cues. The simple keyword indicators used in a decision tree give the best performance on handtranscribed speech, but performance degrades dramatically on ASR output (with WER > 45%). For all other training conditions, the degradation in performance for the system based on ASR transcripts is not as large, though still significant. The system using unsupervised training clearly outperforms the system trained only on a small amount of hand-labeled data. Interestingly, when the keywords are used in combination with the language model, they do provide some benefit in the case where the system uses ASR transcripts. The results in Table 2 correspond to models using only prosodic cues. When these models are trained on only a small amount of hand-labeled data, the overall accuracy is similar to the system using keywords when operating on the ASR transcript. Performance is somewhat better than chance, and use of hand vs. ASR transcripts (and associated word alignments) has little impact. There is a small gain in accuracy but a large gain in agree/disagree recovery from using the data that was labeled via the unsupervised language model clustering technique. Unfortunately, when the prosody features are combined with the word-based features, there is no performance gain, even for the case of errorful ASR transcripts. ",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 1",
"ref_id": null
},
{
"start": 698,
"end": 705,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3"
},
{
"text": "In summary, we have described an approach for automatic recognition of agreement and disagreement in meeting data, using both prosodic and word-based features. The methods can be implemented with a small amount of hand-labeled data by using unsupervised LM clustering to label additional data, which leads to significant gains in both word-based and prosody-based classifiers. The approach is extensible to other types of speech acts, and is especially important for domains in which very little annotated data exists. Even operating on ASR transcripts with high WERs (45%), we obtain a 78% rate of recovery of agreements and disagreements, with a very low rate of confusion between these classes. Prosodic features alone provide results almost as good as the wordbased models on ASR transcripts, but no additional benefit when used with word-based features. However, the good performance from prosody alone offers hope for performance gains given a richer set of speech acts with more lexically ambiguous cases (Bhagat et al., 2003) .",
"cite_spans": [
{
"start": 1012,
"end": 1033,
"text": "(Bhagat et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "This work is supported in part by the NSF under grants 0121396 and 0619921, DARPA grant N660019928924, and NASA grant NCC 2-1256. Any opinions, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of these agencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic punctuation and disfluency detection in multi-party meetings using prosodic and lexical cues",
"authors": [
{
"first": "D",
"middle": [],
"last": "Baron",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. ICSLP",
"volume": "",
"issue": "",
"pages": "949--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Baron et al. 2002. Automatic punctuation and disfluency detection in multi-party meetings using prosodic and lexical cues. In Proc. ICSLP, pages 949-952.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatically generated prosodic cues to lexically ambiguous dialog acts in multi-party meetings",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Carvey",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2003,
"venue": "ICPhS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bhagat, H. Carvey, and E. Shriberg. 2003. Automatically generated prosodic cues to lexically ambiguous dialog acts in multi-party meetings. In ICPhS.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Classification And Regression Trees",
"authors": [
{
"first": "L",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Breiman et al. 1984. Classification And Regression Trees. Wadsworth International Group, Belmont, CA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A statistical model for discourse act recognition in dialogue interactions",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
}
],
"year": 1998,
"venue": "Applying Machine Learning to Discourse Processing. Papers from the 1998 AAAI Spring Symposium",
"volume": "",
"issue": "",
"pages": "12--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chu-Carroll. 1998. A statistical model for discourse act recognition in dialogue interactions. In Applying Machine Learning to Discourse Processing. Papers from the 1998 AAAI Spring Symposium, pages 12-17.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The meeting project at ICSI",
"authors": [
{
"first": "N",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. Conf. on Human Language Technology",
"volume": "",
"issue": "",
"pages": "246--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Morgan et al. 2001. The meeting project at ICSI. In Proc. Conf. on Human Language Technology, pages 246- 252, March.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dialogue act classification using language models",
"authors": [
{
"first": "N",
"middle": [],
"last": "Reithinger",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Klesen",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. Eurospeech",
"volume": "",
"issue": "",
"pages": "2235--2238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Reithinger and M. Klesen. 1997. Dialogue act classification using language models. In Proc. Eurospeech, pages 2235- 2238, September.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Can prosody aid the automatic classification of dialog acts in conversational speech?",
"authors": [
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 1998,
"venue": "Language and Speech",
"volume": "41",
"issue": "3-4",
"pages": "439--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Shriberg et al. 1998. Can prosody aid the automatic classi- fication of dialog acts in conversational speech? Language and Speech, 41(3-4), pages 439-487.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Prosody-based automatic segmentation of speech into sentences and topics",
"authors": [
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2000,
"venue": "Speech Communication",
"volume": "32",
"issue": "1-2",
"pages": "127--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Shriberg et al. 2000. Prosody-based automatic segmentation of speech into sentences and topics. Speech Communication, 32(1-2):127-154, September.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Observations on overlap: Findings and implications for automatic processing of multi-party conversation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. Eurospeech",
"volume": "",
"issue": "",
"pages": "1359--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Shriberg et al. 2001. Observations on overlap: Findings and implications for automatic processing of multi-party conver- sation. In Proc. Eurospeech, pages 1359-1362.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Nonparametric Statistics For the Behavioral Sciences",
"authors": [
{
"first": "S",
"middle": [],
"last": "Siegel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Castellan",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Siegel and J. Castellan. 1988. Nonparametric Statistics For the Behavioral Sciences. McGraw-Hill Inc., New York, NY, second edition edition.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"text": "Results for classifiers using prosodic features.",
"content": "<table/>"
}
}
}
}