{
"paper_id": "N03-2007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:00.198606Z"
},
"title": "Active Learning for Classifying Phone Sequences from Unsupervised Phonotactic Models",
"authors": [
{
"first": "Shona",
"middle": [],
"last": "Douglas",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Labs -Research Florham Park",
"institution": "",
"location": {
"postCode": "07932",
"region": "NJ",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes an application of active learning methods to the classification of phone strings recognized using unsupervised phonotactic models. The only training data required for classification using these recognition methods is assigning class labels to the audio files. The work described here demonstrates that substantial savings in this effort can be obtained by actively selecting examples to be labeled using confidence scores from the Boos-Texter classifier. The saving in class labeling effort is evaluated on two different spoken language system domains in terms both of the number of utterances to be labeled and the length of the labeled utterances in phones. We show that savings in labeling effort of around 30% can be obtained using active selection of examples.",
"pdf_parse": {
"paper_id": "N03-2007",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes an application of active learning methods to the classification of phone strings recognized using unsupervised phonotactic models. The only training data required for classification using these recognition methods is assigning class labels to the audio files. The work described here demonstrates that substantial savings in this effort can be obtained by actively selecting examples to be labeled using confidence scores from the Boos-Texter classifier. The saving in class labeling effort is evaluated on two different spoken language system domains in terms both of the number of utterances to be labeled and the length of the labeled utterances in phones. We show that savings in labeling effort of around 30% can be obtained using active selection of examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A major barrier to the rapid and cost-effective development of spoken language processing applications is the need for time-consuming and expensive human transcription and annotation of collected data. Extensive transcription of audio is generally undertaken to provide wordlevel labeling to train recognition models. Applications that use statistically trained classification as a component of an understanding system also require this transcribed text to train on, plus an assignment of class labels to each utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent work by Alshawi (2003) reported in this conference, new methods for unsupervised training of phone string recognizers have been developed, removing the need for word-level transcription. The phone-string output of such recognizers has been used in classification tasks using the BoosTexter text classification algorithm, giving utterance classfication accuracy that is surprisingly close to that obtained using conventionally trained word trigram models requiring transcription. The only training data required for classification using these recognition methods is assigning class labels to the audio files. The aim of the work described in this paper is to amplify this advantage by reducing the amount of effort required to train classifiers for phone-based systems by actively selecting which utterances to assign class labels. Active learning has been applied to classification problems before (McCallum and Nigam, 1998; Tur et al., 2003) , but not to classifiying phone strings.",
"cite_spans": [
{
"start": 18,
"end": 32,
"text": "Alshawi (2003)",
"ref_id": "BIBREF0"
},
{
"start": 908,
"end": 934,
"text": "(McCallum and Nigam, 1998;",
"ref_id": "BIBREF1"
},
{
"start": 935,
"end": 952,
"text": "Tur et al., 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised recognition of phone sequences is carried out according to the method described by Alshawi (2003) . In this method, the training inputs to recognition model training are simply the set of audio files that have been recorded from the application.",
"cite_spans": [
{
"start": 96,
"end": 110,
"text": "Alshawi (2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Phone Recognition",
"sec_num": "2"
},
{
"text": "The recognition training phase is an iterative procedure in which a phone n-gram model is refined successively: The phone strings resulting from the current pass over the speech files are used to construct the phone n-gram model for the next iteration. We currently only re-estimate the ngram model, so the same general-purpose HMM acoustic model is used for ASR decoding in all iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Phone Recognition",
"sec_num": "2"
},
{
"text": "Recognition training can be briefly described as follows. First, set the phone sequence model to an initial phone string model. This initial model used can be an unweighted phone loop or a general purpose phonotactic model for the language being recognized. Then, for successively larger n-grams, produce the output set of phone sequences from recognizing the training speech files with the current phone sequence model, and train the next larger n-gram phone sequence model on this output corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Phone Recognition",
"sec_num": "2"
},
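{
"text": "A minimal sketch of this loop in Python (added for illustration; decode, train_ngram, and initial_lm are injected stand-ins, since the paper does not tie the method to any particular toolkit):\n\ndef train_phonotactic_model(audio_files, acoustic_model, decode, train_ngram, initial_lm, max_order=5):\n    # decode(file, acoustic_model, lm) -> phone string for one recording;\n    # train_ngram(corpus, order) -> phone n-gram model of the given order.\n    phone_lm = initial_lm  # e.g. an unweighted phone loop\n    for order in range(2, max_order + 1):\n        # Decode every training file with the current phone sequence model;\n        # the general-purpose HMM acoustic model is fixed across iterations.\n        hypotheses = [decode(f, acoustic_model, phone_lm) for f in audio_files]\n        # Re-estimate the next larger n-gram model on the recognizer's own output.\n        phone_lm = train_ngram(hypotheses, order=order)\n    return phone_lm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Phone Recognition",
"sec_num": "2"
},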
{
"text": "The method we use for training the phone sequence classifier is as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training phone sequence classifiers with active selection of examples",
"sec_num": "3"
},
{
"text": "1. Choose an initial subset S of training recordings at random; assign class label(s) to each example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training phone sequence classifiers with active selection of examples",
"sec_num": "3"
},
{
"text": "2. Recognize these recordings using the phone recognizer described in section 2. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training phone sequence classifiers with active selection of examples",
"sec_num": "3"
},
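{
"text": "A rough Python sketch of the whole procedure (illustrative only; train_clf and confidence stand in for BoosTexter training and its confidence score, and label_fn for a human labeler):\n\nimport random\n\ndef active_train(pool, label_fn, train_clf, confidence, budget, batch=1000):\n    # pool: dict mapping utterance id -> recognized phone string.\n    unlabeled = set(pool)\n    seed = random.sample(sorted(unlabeled), batch)  # step 1: random seed set\n    labeled = {u: label_fn(u) for u in seed}\n    unlabeled -= set(seed)\n    clf = train_clf([(pool[u], labeled[u]) for u in labeled])  # step 3\n    while len(labeled) < budget and unlabeled:\n        # steps 4/5a: rescore the pool; least-confident examples come first.\n        ranked = sorted(unlabeled, key=lambda u: confidence(clf, pool[u]))\n        pick = ranked[:min(batch, budget - len(labeled))]\n        labeled.update({u: label_fn(u) for u in pick})  # step 5b: label them\n        unlabeled -= set(pick)\n        # steps 5c/5d: retrain on all data labeled so far, then rescore.\n        clf = train_clf([(pool[u], labeled[u]) for u in labeled])\n    return clf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training phone sequence classifiers with active selection of examples",
"sec_num": "3"
},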
{
"text": "The datasets tested on and the classifier used are the same as those in the experiments on phone sequence classification reported by Alshawi (2003) . The details are briefly restated here.",
"cite_spans": [
{
"start": 133,
"end": 147,
"text": "Alshawi (2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Two collections of utterances from two domains were used in the experiments:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "1. Customer care utterances (HMIHY). These utterances are the customer side of live English conversations between AT&T residential customers and an automated customer care system. This system is open to the public so the number of speakers is large (several thousand).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "The total number of training utterances was 40,106. All tests use 9724 test utterances. Average utterance length was 11.19 words; there were 56 classes, with an average of 1.09 classes per utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "2. Text-to-Speech Help Desk utterances (TTSHD). This is a smaller database of utterances in which customers called an automated information system primarily to find out about AT&T Natural Voices text-to-speech synthesis products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "The total number of possible training utterances was 10,470. All tests use 5005 test utterances. Average utterance length was 3.95 words; there were 54 classes, with an average of 1.23 classes per utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "The phone sequences used for testing and training are those obtained using the phone recognizer described in section 2. Since the phone recognizer is trained without labeling of any sort, we can use all available training utterances to train it, that is, 40,106 in the HMIHY domain and 10,470 in the TTSHD domain. The initial model used to start the iteration is, as in (Alshawi, 2003) , an unweighted phone loop.",
"cite_spans": [
{
"start": 370,
"end": 385,
"text": "(Alshawi, 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phone sequences",
"sec_num": "4.2"
},
{
"text": "For the experiments reported here we use the BoosTexter classifier (Schapire and Singer, 2000) . The features used were identifiers corresponding to prompts, and phone n-grams up to length 4. Following Schapire and Singer (2000) , the confidence level for a given prediction is taken to be the difference between the scores assigned by BoosTexter to the highest ranked action (the predicted action) and the next highest ranked action.",
"cite_spans": [
{
"start": 67,
"end": 94,
"text": "(Schapire and Singer, 2000)",
"ref_id": "BIBREF2"
},
{
"start": 202,
"end": 228,
"text": "Schapire and Singer (2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier",
"sec_num": "4.3"
},
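{
"text": "A minimal sketch of this margin-based confidence in Python, assuming scores is a mapping from class label to the BoosTexter score for one utterance (the interface is an assumption made for illustration):\n\ndef prediction_confidence(scores):\n    # scores: dict mapping each class label to the classifier's score\n    # for a single utterance.\n    ranked = sorted(scores.values(), reverse=True)\n    # Margin between the top-ranked (predicted) class and the runner-up.\n    return ranked[0] - ranked[1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier",
"sec_num": "4.3"
},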
{
"text": "Subsets of the recognized phone sequences were selected to be assigned class labels and used in training the classifiers. Examples were selected in order of BoosTexter confidence score, least confident first. Further selection by utterance length was also used in some experiments such that only recognized utterances with less than a given number of phones were selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection criteria",
"sec_num": "4.4"
},
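{
"text": "A small Python sketch of this selection step (the tuple layout is illustrative; the paper does not specify an implementation):\n\ndef select_for_labeling(candidates, n, max_phones=None):\n    # candidates: list of (confidence, utt_id, num_phones) triples.\n    if max_phones is not None:\n        # Optional length limit, e.g. max_phones=50 as used for HMIHY below.\n        candidates = [c for c in candidates if c[2] < max_phones]\n    # Least-confident examples are selected for labeling first.\n    return sorted(candidates)[:n]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection criteria",
"sec_num": "4.4"
},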
{
"text": "We are interested in comparing the performance for a given amount of labeling effort of classifiers trained on random selection of examples with that of classifiers trained on examples chosen according to the confidencebased method described in section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "The basic measurements are: A(e): the classification accuracy at a given labeling effort level e of the classifier trained on actively selected labeling examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "R(e): the classification accuracy at a given labeling effort level e of the classifier trained on randomly selected labeling examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "A \u22121 (R(e)): the effort required to achieve the performance of random selection at effort e, using active learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "Derived from these is the main comparison we are interested in: EffortRatio(e) = A \u22121 (R(e))/e: the proportion of the effort that would be required to achieve the performance of random selection at effort e, actually required using active learning: that is, low is good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
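{
"text": "These quantities can be read off the two accuracy-versus-effort curves. A Python sketch, assuming each curve is a list of (effort, accuracy) points sorted by effort; the linear interpolation between measured points is our assumption, since the paper reports tabulated values:\n\ndef inverse_effort(curve, target_acc):\n    # Effort at which the active-learning curve reaches target_acc,\n    # i.e. A^{-1}(target_acc), interpolating between measured points.\n    for (e0, a0), (e1, a1) in zip(curve, curve[1:]):\n        if a0 <= target_acc <= a1:\n            frac = (target_acc - a0) / (a1 - a0) if a1 != a0 else 0.0\n            return e0 + frac * (e1 - e0)\n    raise ValueError('target accuracy not reached on this curve')\n\ndef effort_ratio(active_curve, random_curve, e):\n    r_e = dict(random_curve)[e]  # R(e), with e a measured effort level\n    # EffortRatio(e) = A^{-1}(R(e)) / e; values below 1 mean active learning\n    # needed less labeling effort than random selection.\n    return inverse_effort(active_curve, r_e) / e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},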
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Effort A R A \u22121 (R) Effort (utt) (%) (%)",
"eq_num": "("
}
],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
{
"text": "We use two metrics for labeling effort: the number of utterances to be labeled and the number of phones in those utterances. The number of phones is indicative of the length of the audio file that must be listened to in order to make the class label assignment, so this is relevant to assessing just how much real effort is saved by any active learning technique. Table 1 gives the results for selected levels of labeling effort in the HMIHY domain, calculated in terms of number of utterances labeled.",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 371,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},
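{
"text": "Both effort metrics can be computed directly from a labeled batch; a trivial sketch (the batch representation is assumed):\n\ndef labeling_effort(batch):\n    # batch: list of recognized phone sequences for the labeled utterances.\n    utterances = len(batch)  # effort as number of utterances labeled\n    phones = sum(len(p) for p in batch)  # effort as number of phones heard\n    return utterances, phones",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics",
"sec_num": "5.1"
},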
{
"text": "These results suggest that we can achieve the same accuracy as random labeling with around 60% of the effort by active selection of examples according to the confidence-based method described in section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "However, a closer inspection of the chosen examples reveals that, on average, the actively selected utterances are nearly 1.5 times longer than the random selection in terms of number of phones. (This is not suprising given that the classification method performs much worse on longer utterances, and the confidence levels reflect this.) In order to overcome this we introduce as part of the selection criteria a length limit of 50 phones. This allows us to retain appreciable effort savings as shown in table 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "The TTSHD application is considerably less complex than HMIHY, and this may be reflected in the greater savings obtained using active learning. Tables 3 and 4 show the corresponding results for this domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 158,
"text": "Tables 3 and 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "There is also a smaller variation in utterance length between actively and randomly selected training examples (more like 110% than the 150% for HMIHY); table 4 shows that defining effort in terms of number of phones still results in appreciable savings for active learning. (In- ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Effort A R A \u22121 (R) Effort (utt) (%) (%)",
"eq_num": "("
}
],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "By actively choosing the examples with the lowest confidence scores first, we can get the same classification results with around 60-70% of the utterances labeled in HMIHY and TTSHD. But we want to optimize labeling effort, which is presumably some combination of a fixed amount of effort per utterance plus a \"listening effort\" proportional to utterance length. We therefore augmented our active learning selection to include a constraint on the length of the utterances, measured in recognized phones. If we simply take effort to be proportional to the number of phones in the utterances selected (likely to result in a conservative estimate of savings), the effort reduction at 4,000 utterances is around 30% even for the more complex HMIHY domain. Further investigation is needed into the best way to measure overall labeling effort, and into refinements of the active learning process to optimize that labeling effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
}
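,
{
"text": "One plausible form for such a combined cost, stated as an assumption rather than something the paper specifies, is E(u) = c_0 + c_1 * |u|, where |u| is the utterance length in recognized phones, c_0 is the fixed per-utterance labeling cost, and c_1 is the per-phone listening cost. Taking c_0 = 0 recovers the pure phone-count effort metric used above, and c_1 = 0 the pure utterance-count metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
}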
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Effective utterance classification with unsupervised phonotactic models",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
}
],
"year": 2003,
"venue": "HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Alshawi. 2003. Effective utterance classification with unsupervised phonotactic models. In HLT-NAACL 2003, Edmonton, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Employing EM in pool-based active learning for text classification",
"authors": [
{
"first": "A",
"middle": [
"K"
],
"last": "Mccallum",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 15th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "350--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. K. McCallum and K. Nigam. 1998. Employing EM in pool-based active learning for text classification. In Proceedings of the 15th International Conference on Machine Learning, pages 350-358.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BoosTexter: A boosting-based system for text categorization. Machine Learning",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "39",
"issue": "",
"pages": "135--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. E. Schapire and Y. Singer. 2000. BoosTexter: A boosting-based system for text categorization. Ma- chine Learning, 39(2/3):135-168.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Active learning for spoken language understanding",
"authors": [
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of International Conference on Acoustics, Speech and Signal Processing (ICASSP'03)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gokhan Tur, Robert E. Schapire, , and Dilek Hakkani- Tur. 2003. Active learning for spoken language un- derstanding. In Proceedings of International Con- ference on Acoustics, Speech and Signal Processing (ICASSP'03), Hong Kong, April. (to appear).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Train an initial classifier C on the pairs (phone string, class label) of S. 4. Run the classifier on the recognized phone strings of the training corpus, obtaining confidence scores for each classification. 5. While labeling effort is available, or until performance on a development corpus reaches some threshold, (a) Choose the next subset S of examples from of the training corpus, on the basis of the confidence scores or other indicators. (Selection criteria are discussed later.) (b) Assign class label(s) to each selected example. (c) Train classifier C on all the data labeled so far. (d) Run C on the whole training corpus, obtaining confidence scores for each classification. (e) Optionally test C on a separate test corpus."
},
"TABREF1": {
"content": "<table/>",
"num": null,
"text": "HMIHY, length limited, effort is number of phones",
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table><tr><td>Effort</td><td>A</td><td>R</td><td colspan=\"2\">A \u22121 (R) Effort</td></tr><tr><td colspan=\"3\">(phn) (%) (%)</td><td>(phn)</td><td>Ratio</td></tr><tr><td colspan=\"3\">35877 78.9 77.9</td><td>27019</td><td>0.75</td></tr><tr><td colspan=\"3\">71338 80.3 79.1</td><td>48267</td><td>0.68</td></tr></table>",
"num": null,
"text": "TTSHD, effort is number of utterances",
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>: TTSHD, effort is number of phones</td></tr><tr><td>corporating a length limit gave little additional benefit</td></tr><tr><td>here.)</td></tr></table>",
"num": null,
"text": "",
"type_str": "table",
"html": null
}
}
}
}