{
"paper_id": "N03-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:20.997144Z"
},
"title": "Effective Utterance Classification with Unsupervised Phonotactic Models",
"authors": [
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": "",
"affiliation": {
"laboratory": "AT&T Labs -Research Florham Park",
"institution": "",
"location": {
"postCode": "07932",
"region": "NJ",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a method for utterance classification that does not require manual transcription of training data. The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. The classification accuracy of the method is evaluated on three different spoken language system domains.",
"pdf_parse": {
"paper_id": "N03-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a method for utterance classification that does not require manual transcription of training data. The method combines domain independent acoustic models with off-the-shelf classifiers to give utterance classification performance that is surprisingly close to what can be achieved using conventional word-trigram recognition requiring manual transcription. In our method, unsupervised training is first used to train a phone n-gram model for a particular domain; the output of recognition with this model is then passed to a phone-string classifier. The classification accuracy of the method is evaluated on three different spoken language system domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A major bottleneck in building data-driven speech processing applications is the need to manually transcribe training utterances into words. The resulting corpus of transcribed word strings is then used to train applicationspecific language models for speech recognition, and in some cases also to train the natural language components of the application. Some of these speech processing applications make use of utterance classification, for example when assigning a call destination to naturally spoken user utterances Carpenter and Chu-Carroll, 1998) , or as an initial step in converting speech to actions in spoken interfaces (Alshawi and Douglas, 2001) .",
"cite_spans": [
{
"start": 521,
"end": 553,
"text": "Carpenter and Chu-Carroll, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 631,
"end": 658,
"text": "(Alshawi and Douglas, 2001)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present an approach to utterance classification that avoids the manual effort of transcribing training utterances into word strings. Instead, only the desired utterance class needs to be associated with each sample utterance. The method combines automatic training of application-specific phonotactic models together with token sequence classifiers. The accuracy of this phone-string utterance classification method turns out to be surprisingly close to what can be achieved by conventional methods involving word-trigram language models that require manual transcription. To quantify this, we present empirical accuracy results from three different call-routing applications comparing our method with conventional utterance classification using word-trigram recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work at AT&T on utterance classification without words used information theoretic metrics to discover \"acoustic morphemes\" from untranscribed utterances paired with routing destinations (Gorin et al., 1999; Levit et al., 2001; Petrovska-Delacretaz et al., 2000) . However, that approach has so far proved impractical: the major obstacle to practical utility was the low runtime detection rate of acoustic morphemes discovered during training. This led to a high false rejection rate (between 40% and 50% for 1-best recognition output) when a word-based classification algorithm (the one described by ) was applied to the detected sequence of acoustic morphemes.",
"cite_spans": [
{
"start": 195,
"end": 215,
"text": "(Gorin et al., 1999;",
"ref_id": "BIBREF6"
},
{
"start": 216,
"end": 235,
"text": "Levit et al., 2001;",
"ref_id": "BIBREF8"
},
{
"start": 236,
"end": 270,
"text": "Petrovska-Delacretaz et al., 2000)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More generally, previous work using phone string (or phone-lattice) recognition has concentrated on tasks involving retrieval of audio or video (Jones et al., 1996; Foote et al., 1997; Ng and Zue, 1998; Choi et al., 1999) . In those tasks, performance of phone-based systems was not comparable to the accuracy obtainable from wordbased systems, but rather the rationale was avoiding the difficulty of building wide coverage statistical language models for handling the wide range of subject matter that a typical retrieval system, such as a system for retrieving news clips, needs to cover. In the work presented here, the task is somewhat different: the system can automatically learn to identify and act on relatively short phone subsequences that are specific to the speech in a limited domain of discourse, resulting in task accuracy that is comparable to word-based methods.",
"cite_spans": [
{
"start": 144,
"end": 164,
"text": "(Jones et al., 1996;",
"ref_id": "BIBREF7"
},
{
"start": 165,
"end": 184,
"text": "Foote et al., 1997;",
"ref_id": "BIBREF3"
},
{
"start": 185,
"end": 202,
"text": "Ng and Zue, 1998;",
"ref_id": "BIBREF10"
},
{
"start": 203,
"end": 221,
"text": "Choi et al., 1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In section 2 we describe the utterance classification method. Section 3 describes the experimental setup and the data sets used in the experiments. Section 4 presents the main comparison of the performance of the method against a \"conventional\" approach using manual transcription and word-based models. Section 5 gives some concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The runtime operation of our utterance classification method is simple. It involves applying two models (which are trained as described in the next subsection): A statistical n-gram phonotactic model and a phone string classification model. At runtime, the phonotactic model is used by an automatic speech recognition system to convert a new input utterance into a phone string which is mapped to an output class by applying the classification model. (We will often refer to an output class as an \"action\", for example transfer to a specific call-routing destination). The configuration at runtime is as shown in Figure 1 . More details about the specific recognizer and classifier components used in our experiments are given in the Section 3. The classifier can optionally make use of more information about the context of an utterance to improve the accuracy of mapping to actions. As noted in Figure 1 , in the experiments presented here, we use a single additional feature as a proxy for the utterance context, specifically, the identity of the spoken prompt that elicited the utterance. It should be noted, however, that inclusion of such additional information is not central to the method: Whether, and how much, context information to include to improve classification accuracy will depend on the application. Other candidate aspects of context may include the dialog state, the day of week, the role of the speaker, and so on.",
"cite_spans": [],
"ref_spans": [
{
"start": 613,
"end": 621,
"text": "Figure 1",
"ref_id": null
},
{
"start": 897,
"end": 905,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Runtime Operation",
"sec_num": "2.1"
},
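{
"text": "To make this runtime pipeline concrete, the following minimal Python sketch (a hypothetical illustration rather than the system's actual code; recognize_phones and classifier are stand-ins for the real recognizer and BoosTexter interfaces) maps a single utterance to an action:\n\ndef classify_utterance(audio, recognize_phones, classifier, prompt_id):\n    # recognize_phones: callable backed by the acoustic model plus the\n    # domain-trained phone n-gram model; returns a phone string.\n    # classifier: trained classification model with a score(features)\n    # method returning one score per action (stand-in for BoosTexter).\n    phone_string = recognize_phones(audio)\n    # Classifier input: the recognized phone string plus the utterance\n    # context, here just the identity of the prompt that elicited it.\n    features = {'phones': phone_string, 'prompt': prompt_id}\n    scores = classifier.score(features)\n    # The highest-scoring class is the predicted action, e.g. a\n    # call-routing destination.\n    return max(scores, key=scores.get)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Runtime Operation",
"sec_num": "2.1"
},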
{
"text": "Training is divided into two phases. First, train a phone n-gram model using only the training utterance speech files and a domain-independent acoustic model. Second, train a classification model mapping phone strings and prompts (the classifier inputs) to actions (the classifier outputs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "The recognition training phase is an iterative procedure in which a phone n-gram model is refined successively: The phone strings resulting from the current pass over the speech files are used to construct the phone ngram model for the next iteration. In other words, this is a \"Viterbi re-estimation\" or \"1-best re-estimation\" process. We currently only re-estimate the n-gram model, so the same general-purpose HMM acoustic model is used for ASR decoding in all iterations. Other more expensive n-gram re-estimation methods can be used instead, including ones in which successive n-gram models are re-estimated from n-best or lattice ASR output. Candidates for the initial model used in this procedure are an unweighted phone loop or a general purpose phonotactic model for the language being recognized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "The steps of the training process are as follows. (The procedure is depicted in Figure 2 .) Figure 2 : Utterance classifier training procedure 1. Set the phone string model G to an initial phone string model. Initialize the n-gram order N to 1. (Here 'order' means the size of the n-grams, so for example 2 means bi-grams.)",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 2",
"ref_id": null
},
{
"start": 92,
"end": 100,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "2. Set S to the set of phone strings resulting from recognizing the training speech files with G (after possibly adjusting the insertion penalty, as explained below).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "3. Estimate an n-gram model G of order N from the set of strings S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "4. If N < N max , set N \u2190 N + 1 and G \u2190 G and go to step 2, otherwise continue with step 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "5. For each recognized string s \u2208 S, construct a classifier input pair (s, r) where r is the prompt that elicited the utterance recognized as s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "6. Train a classification model M to generalize the training function f : (s, r) \u2192 a, where a is the action associated with the utterance recognized as s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "7. Return the classifier model M and the final n-gram model G as the results of the training procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
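{
"text": "To make the loop in steps 1-7 concrete, the following Python sketch (hypothetical; recognize, estimate_ngram and train_classifier stand in for the actual recognizer, n-gram estimation and BoosTexter training tools) follows the procedure directly, including the detail that the classifier is trained on the strings S produced by the last recognition pass:\n\ndef train_utterance_classifier(train_audio, prompts, actions,\n                               recognize, estimate_ngram, train_classifier,\n                               n_max=5):\n    # Step 1: initial model (unweighted phone loop) and n-gram order 1.\n    model, order = 'phone-loop', 1\n    while True:\n        # Step 2: recognize the training speech files with the current model.\n        strings = recognize(train_audio, model)\n        # Step 3: estimate an n-gram model of order N from the strings.\n        new_model = estimate_ngram(strings, order)\n        # Step 4: raise the order and iterate, or stop once N reaches N_max.\n        if order < n_max:\n            order, model = order + 1, new_model\n        else:\n            break\n    # Step 5: classifier inputs (s, r) pair each recognized string with the\n    # prompt that elicited the utterance.\n    pairs = list(zip(strings, prompts))\n    # Step 6: train the classification model generalizing f: (s, r) -> a.\n    classifier = train_classifier(pairs, actions)\n    # Step 7: return the classifier and the final n-gram model.\n    return classifier, new_model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},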
{
"text": "Instead of increasing the order N of the phone n-gram model during re-estimation, an alternative would be to iterate N max times with a fixed n-gram order, possibly with successively increased weight being given to the language model vs. the acoustic model in ASR decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
{
"text": "One issue that arises in the context of unsupervised recognition without transcription is how to adjust recognition parameters that affect the length of recognized strings. In conventional training of recognizers from word transcriptions, a \"word insertion penalty\" is typically tuned after comparing recognizer output against transcriptions. To address this issue, we estimate the expected speaking rate (in phones per second) for the relevant type of speech (human-computer interaction in these experiments). The token insertion penalty of the recognizer is then adjusted so that the speaking rate for automatically detected speech in a small sample of training data approximates the expected speaking rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},
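{
"text": "As a rough illustration of this calibration step, the sketch below (hypothetical; decode stands in for running the recognizer on a small training sample with a given token insertion penalty, and the candidate penalty values are arbitrary placeholders) picks the penalty whose decoded speaking rate best matches the expected rate in phones per second:\n\ndef calibrate_insertion_penalty(sample_audio, total_duration_sec, decode,\n                                target_phones_per_sec,\n                                candidate_penalties=(-10, -5, 0, 5, 10)):\n    best_penalty, best_gap = None, float('inf')\n    for penalty in candidate_penalties:\n        strings = decode(sample_audio, penalty)\n        # Decoded speaking rate: total phones over total speech duration.\n        n_phones = sum(len(s.split()) for s in strings)\n        rate = n_phones / total_duration_sec\n        gap = abs(rate - target_phones_per_sec)\n        if gap < best_gap:\n            best_penalty, best_gap = penalty, gap\n    return best_penalty",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Procedure",
"sec_num": "2.2"
},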
{
"text": "Three collections of utterances from different domains were used in the experiments. Domain A is the one studied in previously cited experiments (Gorin et al., 1999; Levit et al., 2001; Petrovska-Delacretaz et al., 2000) . Utterances for domains B and C are from similar interactive spoken natural language systems. Domain A. The utterances being classified are the customer side of live English conversations between AT&T residential customers and an automated customer care system. This system is open to the public so the number of speakers is large (several thousand). There were 40106 training utterances and 9724 test utterances. The average length of an utterance was 11.29 words. The split between training and test utterances was such that the utterances from a particular call were either all in the training set or all in the test set. There were 56 actions in this domain. Some utterances had more than one action associated with them, the average number of actions associated with an utterance being 1.09.",
"cite_spans": [
{
"start": 145,
"end": 165,
"text": "(Gorin et al., 1999;",
"ref_id": "BIBREF6"
},
{
"start": 166,
"end": 185,
"text": "Levit et al., 2001;",
"ref_id": "BIBREF8"
},
{
"start": 186,
"end": 220,
"text": "Petrovska-Delacretaz et al., 2000)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Domain B. This is a database of utterances from an interactive spoken language application relating to product line information. There were 10470 training utterances and 5005 test utterances. The average length of an utterance was 3.95 words. There were 54 actions in this domain. Some utterances had more than one action associated with them, the average number of actions associated with an utterance being 1.23.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Domain C. This is a database of utterances from an interactive spoken language application relating to consumer order transactions (reviewing order status, etc.) in a limited domain. There were 14355 training utterances and 5000 test utterances. The average length of an utterance was 8.88 words. There were 93 actions in this domain. Some utterances had more than one action associated with them, the average number of actions associated with an utterance being 1.07.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The same acoustic model was used in all the experiments reported here, i.e. for experiments with both the phonebased and word-based utterance classifiers. This model has 42 phones and uses discriminatively trained 3-state HMMs with 10 Gaussians per state. It uses feature space transformations to reduce the feature space to 60 features prior to discriminative maximum mutual information training. This acoustic model was trained by Andrej Ljolje and is similar to the baseline acoustic model used for experiments with the Switchboard corpus, an earlier version of which is described by Ljolje et al. (2000) . (Like the model used here, the baseline model in those experiments does not involve speaker and environment normalizations.)",
"cite_spans": [
{
"start": 587,
"end": 607,
"text": "Ljolje et al. (2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizer",
"sec_num": "3.2"
},
{
"text": "The n-gram phonotactic models used were represented as weighted finite state automata. These automata (with the exception of the initial unweighted phone loop) were constructed using the stochastic language modeling technique described by Riccardi et al. (1996) . This modeling technique, which includes a scheme for backing off to probability estimates for shorter n-grams, was originally designed for language modeling at the word level.",
"cite_spans": [
{
"start": 239,
"end": 261,
"text": "Riccardi et al. (1996)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizer",
"sec_num": "3.2"
},
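{
"text": "For illustration, the sketch below shows one generic way to back off from bigram to unigram estimates (interpolated absolute discounting); it is only a stand-in for intuition and is not the specific estimation scheme of Riccardi et al. (1996), which operates over weighted automata:\n\ndef bigram_prob(w, prev, bigram_counts, unigram_counts, discount=0.5):\n    # Discounted bigram relative frequency, with the held-out probability\n    # mass redistributed according to the unigram distribution.\n    total = sum(unigram_counts.values())\n    p_unigram = unigram_counts.get(w, 0) / total\n    h_count = unigram_counts.get(prev, 0)\n    if h_count == 0:\n        return p_unigram  # unseen history: back off completely\n    c = bigram_counts.get((prev, w), 0)\n    n_types = sum(1 for (p, _) in bigram_counts if p == prev)\n    backoff_mass = discount * n_types / h_count\n    return max(c - discount, 0.0) / h_count + backoff_mass * p_unigram",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recognizer",
"sec_num": "3.2"
},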
{
"text": "Different possible classification algorithms can be used in our utterance classification method. For the experiments reported here we use the BoosTexter classifier (Schapire and Singer, 2000) . Among the alternatives are decision trees (Quinlan, 1993) and support vector machines (Vapnik, 1995) . BoosTexter was originally designed for text categorization. It uses the AdaBoost algorithm (Freund and Schapire, 1997; Schapire, 1999) , a wide margin machine learning algorithm. At training time, AdaBoost selects features from a specified space of possible features and associates weights with them. A distinguishing characteristic of the AdaBoost algorithm is that it places more emphasis on training examples that are difficult to classify. The algorithm does this by iterating through a number of rounds: at each round, it imposes a distribution on the training data that gives more probability mass to examples that were difficult to classify in the previous round. In our experiments, 500 rounds of boosting were used; each round allows the selection of a new feature and the adjustment of weights associated with existing features. In the experiments, the possible features are identifiers corresponding to prompts, and phone n-grams or word n-grams (for the phone and word-based methods respectively) up to length 4.",
"cite_spans": [
{
"start": 164,
"end": 191,
"text": "(Schapire and Singer, 2000)",
"ref_id": "BIBREF14"
},
{
"start": 236,
"end": 251,
"text": "(Quinlan, 1993)",
"ref_id": "BIBREF12"
},
{
"start": 280,
"end": 294,
"text": "(Vapnik, 1995)",
"ref_id": "BIBREF16"
},
{
"start": 388,
"end": 415,
"text": "(Freund and Schapire, 1997;",
"ref_id": "BIBREF4"
},
{
"start": 416,
"end": 431,
"text": "Schapire, 1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier",
"sec_num": "3.3"
},
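{
"text": "As an illustration of the feature space just described, the sketch below (hypothetical Python; the phone string and prompt identifier in the example call are made up) builds the binary features available to the boosting algorithm: an identifier for the eliciting prompt plus all phone n-grams up to length 4, with word n-grams taking their place in the word-based conditions. Each of the 500 boosting rounds may then select one such feature and adjust the associated weights:\n\ndef ngram_features(phone_string, prompt_id, max_n=4):\n    phones = phone_string.split()\n    features = {'prompt=' + prompt_id}\n    # Phone n-grams of lengths 1 to max_n, each a candidate weak feature.\n    for n in range(1, max_n + 1):\n        for i in range(len(phones) - n + 1):\n            features.add('ngram=' + ' '.join(phones[i:i + n]))\n    return features\n\n# Example call with a made-up recognized fragment and prompt identifier.\nexample_features = ngram_features('k ax l eh k t', 'greeting_prompt')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classifier",
"sec_num": "3.3"
},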
{
"text": "Three experimental conditions are considered. The suffixes (M and H) in the condition names refer to whether the two training phases (i.e. training for recognition and classification respectively) use inputs produced by machine (M) or human (H) processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": "3.4"
},
{
"text": "PhonesMM This experimental condition is the method described in this paper, so no human transcriptions are used. Unsupervised training from the training speech files is used to build a phone recognition model. The classifier is trained on the phone strings resulting from recognizing the training speech files with this model. At runtime, the classifier is applied to the results of recognizing the test files with this model. The initial recogition model for the unsupervised recognition training process was an unweighted phone loop. The final n-gram order used in the recognition training procedure (N max in section 2) was 5. For all three conditions, median recognition and classification time for test data was less than real time (i.e. the duration of test speech files) on current micro-processors. As noted earlier, the acoustic model, the number of boosting rounds, and the use of prompts as an additional classification feature, are the same for all experimental conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Conditions",
"sec_num": "3.4"
},
{
"text": "To give an impression of the kind of phone sequences resulting from the automatic training procedure and applied by the classifier at runtime, see ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example learned phone sequences",
"sec_num": "3.5"
},
{
"text": "In this section we compare the accuracy of our phonestring utterance classification method (PhonesMM) with methods (WordsHM and WordsHH) using manual transcription and word string models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Accuracy",
"sec_num": "4"
},
{
"text": "The results are presented as utterance classification rates, specifically the percentage of utterances in the test set for which the predicted action is valid. Here a valid prediction means that the predicted action is the same as one of the actions associated with the test utterance by a human labeler. (As noted in section 3, the average number of actions associated with an utterance was 1.09, 1.23, and 1.07 for domains A, B, and C, respectively.) In this metric we only take into account a single action predicted by the classifier, i.e. this is \"rank 1\" classification accuracy, rather than the laxer \"rank 2\" classification accuracy (where the classifier is allowed to make two predictions) reported by Gorin et. al (1999) and Petrovska et. al (2000) . In practical applications of utterance classification, user inputs are rejected if the confidence of the classifier in making a prediction falls below a threshold appropriate to the application. After rejection, the system may, for example, route the call to a human or reprompt the user. We therefore show the accuracy of classifying accepted utterances at different rejection rates, specifically 0% (all utterances accepted), 10%, 20%, 30%, 40%, and 50%. Following Schapire and Singer (2000) , the confidence level, for rejection purposes, assigned to a prediction is taken to be the difference between the scores assigned by BoosTexter to the highest ranked action (the predicted action) and the next highest ranked action.",
"cite_spans": [
{
"start": 711,
"end": 730,
"text": "Gorin et. al (1999)",
"ref_id": "BIBREF6"
},
{
"start": 735,
"end": 758,
"text": "Petrovska et. al (2000)",
"ref_id": "BIBREF11"
},
{
"start": 1228,
"end": 1254,
"text": "Schapire and Singer (2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Metric",
"sec_num": null
},
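{
"text": "A minimal sketch of how this metric can be computed is given below (hypothetical Python; the per-utterance classifier scores and labeled action sets are assumed to be available in the form shown). Confidence is the margin between the two highest-scoring actions, the lowest-confidence fraction of utterances is rejected, and an accepted utterance counts as correct if its top-ranked action is among those assigned by a human labeler:\n\ndef accuracy_at_rejection(scored_utterances, reject_fraction):\n    # scored_utterances: list of (scores, gold_actions) pairs, where scores\n    # maps each action to a classifier score and gold_actions is the set of\n    # actions a human labeler associated with the utterance.\n    def margin(scores):\n        top = sorted(scores.values(), reverse=True)[:2]\n        return top[0] - top[1] if len(top) > 1 else top[0]\n\n    ranked = sorted(scored_utterances, key=lambda u: margin(u[0]), reverse=True)\n    n_keep = int(round(len(ranked) * (1 - reject_fraction)))\n    accepted = ranked[:n_keep]\n    correct = sum(1 for scores, gold in accepted\n                  if max(scores, key=scores.get) in gold)\n    return 100.0 * correct / max(len(accepted), 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Metric",
"sec_num": null
},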
{
"text": "Utterance classification accuracy rates, at various rejection rates, for domain A are shown in Table 2 for the three experimental conditions described in section 3.4. The corresponding results for domains B and C are shown in Tables 3 and 4 Table 4 : Phone-based and word-based utterance classification accuracy for domain C",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 2",
"ref_id": null
},
{
"start": 226,
"end": 240,
"text": "Tables 3 and 4",
"ref_id": "TABREF4"
},
{
"start": 241,
"end": 248,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy Results",
"sec_num": null
},
{
"text": "The utterances in domain A are on average longer and more complex than in domain B; this may partly explain the higher classification rates for domain B. The generally lower classification accuracy rates for domain C may reflect the larger set of actions for this domain (92 actions, compared with 56 and 54 actions for domains A and B). Another difference between the domains was that the recording quality for domain B was not as high as for domains A and C. Despite these differences between the domains, there is a consistent pattern for the comparison of most interest to this paper, i.e. the relative performance of utterance classification methods requiring or not requiring transcription.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Results",
"sec_num": null
},
{
"text": "Perhaps the most surprising outcome of these experiments is that the phone-based method with short \"phrasal\" contexts (up to four phones) has classification accuracy that is so close to that provided by the longer phrasal contexts of trigram word recognition and word-string classification. Of course, the re-estimation of phone n-grams employed in the phone-based method means that two-word units are implicitly modeled since the phone 5-grams modeled in recognition, and 4-grams in classification, can straddle word boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Results",
"sec_num": null
},
{
"text": "The experiments suggest that if transcriptions are available (i.e. the effort to produce them has already been expended), then they can be used to slightly improve performance over the phone-based method (PhonesMM) not requiring transcriptions. For domains A and C, this would give an absolute performance difference of about 2%, while for domain B the difference is around 1%. Whether it is optimal to train the word-based classifier on the transcriptions (WordsHH) or the output of the recognizer (WordsHM) seems to depend on the particular data set. When the operational setting of utterance classification demands very high confidence, and a high degree of rejection is acceptable (e.g. if sufficient human backup operators are available), then the small advantage of the word-based methods is reduced further to less than 1%. This can be seen from the high rejection rate rows of the accuracy tables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Accuracy Results",
"sec_num": null
},
{
"text": "Tables 5, 6, and 7, show the effect of increasing N max (the final iteration number in the unsupervised phone recognition model) for domains A, B and C, respectively. The row with N max = 0 corresponds to the initial unweighted phone loop recognition. The classification accuracies shown in this table are all at 0% rejection. Phone recognition accuracy is the standard ASR error rate accuracy in terms of the percentage of phone insertions, deletions, and substitutions, determined by aligning the ASR output against reference phone transcriptions produced by the pronounciation component of our speech synthesizer. (Since these reference phone transcriptions are not perfect, the actual phone recognition accuracy is probably slightly higher.) Clearly, for all three domains, unsupervised recognition model training improves both recognition and classification accuracy compared with a simple phone loop. Unsupervised training of the recognition model is particularly important for domain B where the quality of recordings is not as high as for domains A and C, so the system needs to depend more on the reestimated n-gram models to achieve the final classification accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Unsupervised Training",
"sec_num": null
},
{
"text": "In this paper we have presented an utterance classification method that does not require manual transcription of training data. The method combines unsupervised reestimation of phone n-ngram recognition models together with a phone-string classifier. The utterance classification accuracy of the method is surprisingly close to a more traditional method involving manual transcription of training utterances into word strings and recognition with word trigrams. The measured absolute difference in classification accuracy (with no rejection) between our method and the word-based method was only 1% for one test domain and 2% for two other test domains. The performance difference is even smaller (less than 1%) if high rejection thresholds are acceptable. This performance level was achieved despite the large reduction in effort required to develop new applications with the presented utterance classification method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding Remarks",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Variant transduction: A method for rapid development of interactive spoken interfaces",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Douglas",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the SIGDial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Alshawi and S. Douglas. 2001. Variant transduction: A method for rapid development of interactive spoken interfaces. In Proceedings of the SIGDial Workshop on Discourse and Dialogue, Aalborg, Denmark, Septem- ber.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural language call routing: a robust, self-organizing approach",
"authors": [
{
"first": "R",
"middle": [],
"last": "Carpenter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chu-Carroll",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the International Conference on Speech and Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Carpenter and J. Chu-Carroll. 1998. Natural language call routing: a robust, self-organizing approach. In Proceedings of the International Conference on Speech and Language Processing, Sydney, Australia.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Spoken content-based audio navigation (scan)",
"authors": [
{
"first": "J",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Whittaker",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of ICPhS-99 (International Congress of Phonetics Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Choi, D. Hindle, J. Hirschberg, F. Pereira, A. Singhal, and S. Whittaker. 1999. Spoken content-based audio navigation (scan). In Proceedings of ICPhS-99 (In- ternational Congress of Phonetics Sciences, San Fran- cisco, California, August.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unconstrained keyword spotting using phone lattices with application to spoken document retrieval",
"authors": [
{
"first": "J",
"middle": [
"T"
],
"last": "Foote",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
},
{
"first": "G",
"middle": [
"J"
],
"last": "Jones",
"suffix": ""
},
{
"first": "K. Sparck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1997,
"venue": "Computer Speech and Language",
"volume": "11",
"issue": "2",
"pages": "207--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. T. Foote, S. J. Young, G. J. F Jones, and K. Sparck Jones. 1997. Unconstrained keyword spotting using phone lattices with application to spoken document re- trieval. Computer Speech and Language, 11(2):207- 224.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A decision-theoretic generalization of on-line learning and an application to boosting",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Computer and System Sciences",
"volume": "55",
"issue": "1",
"pages": "119--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Freund and R. E. Schapire. 1997. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "How may I help you? Speech Communication",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Gorin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "23",
"issue": "",
"pages": "113--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Gorin, G. Riccardi, and J. H. Wright. 1997. How may I help you? Speech Communication, 23(1- 2):113-127.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning Spoken Language without Transcription",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Gorin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Petrovska-Delacretaz",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the ASRU Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Gorin, D. Petrovska-Delacretaz, G. Riccardi, and J. H. Wright. 1999. Learning Spoken Language with- out Transcription. In Proceedings of the ASRU Work- shop, Keystone, Colorado, December.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Experiments in spoken document retrieval",
"authors": [
{
"first": "K",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "G",
"middle": [
"J F"
],
"last": "Jones",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "Foote",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1996,
"venue": "Information Processing and Management",
"volume": "32",
"issue": "4",
"pages": "399--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Sparck Jones, G. J. F. Jones, J. T. Foote, and S. J. Young. 1996. Experiments in spoken document retrieval. Information Processing and Management, 32(4):399-417.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Multipass Algorithm for Acquisition of Salient Acoustic Morphemes",
"authors": [
{
"first": "M",
"middle": [],
"last": "Levit",
"suffix": ""
},
{
"first": "A",
"middle": [
"L"
],
"last": "Gorin",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of Eurospeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Levit, A. L. Gorin, and J. H. Wright. 2001. Mul- tipass Algorithm for Acquisition of Salient Acoustic Morphemes. In Proceedings of Eurospeech 2001, Aal- borg, Denmark, September.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The AT&T LVCSR-2000 System",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ljolje",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Hindle",
"suffix": ""
},
{
"first": "M",
"middle": [
"D"
],
"last": "Riley",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "Sproat",
"suffix": ""
}
],
"year": 2000,
"venue": "Speech Transcription Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ljolje, D. M. Hindle, M. D. Riley, and R. W. Sproat. 2000. The AT&T LVCSR-2000 System. In Speech Transcription Workshop, Univ. of Maryland, May.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Phonetic recognition for spoken document retrieval",
"authors": [
{
"first": "K",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Zue",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of ICASSP 98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Ng and V. Zue. 1998. Phonetic recognition for spo- ken document retrieval. In Proceedings of ICASSP 98, Seattle, Washington, May.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Detecting Acoustic Morphemes in Lattices for Spoken Language Understanding",
"authors": [
{
"first": "D",
"middle": [],
"last": "Petrovska-Delacretaz",
"suffix": ""
},
{
"first": "A",
"middle": [
"L"
],
"last": "Gorin",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Interanational Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Petrovska-Delacretaz, A. L. Gorin, J. H. Wright, and G. Riccardi. 2000. Detecting Acoustic Morphemes in Lattices for Spoken Language Understanding. In Pro- ceedings of the Interanational Conference on Spoken Language Processing, Beijing, China, October.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "C4.5: Programs for Machine Learning",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Stochastic automata for language modeling",
"authors": [
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Pieraccini",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Bocchieri",
"suffix": ""
}
],
"year": 1996,
"venue": "Computer Speech and Language",
"volume": "10",
"issue": "",
"pages": "265--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Riccardi, R. Pieraccini, and E. Bocchieri. 1996. Stochastic automata for language modeling. Computer Speech and Language, 10:265-293.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BoosTexter: A boosting-based system for text categorization. Machine Learning",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "39",
"issue": "",
"pages": "135--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. E. Schapire and Y. Singer. 2000. BoosTexter: A boosting-based system for text categorization. Ma- chine Learning, 39(2/3):135-168.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A brief introduction to boosting",
"authors": [
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. E. Schapire. 1999. A brief introduction to boost- ing. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Nature of Statistical Learning Theory",
"authors": [
{
"first": "V",
"middle": [
"N"
],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer, New York.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic acquisition of salient grammar fragments for call-type classification",
"authors": [
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": ""
},
{
"first": "A",
"middle": [
"L"
],
"last": "Gorin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "1419--1422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. H. Wright, A. L. Gorin, and G. Riccardi. 1997. Au- tomatic acquisition of salient grammar fragments for call-type classification. In Proceedings of European Conference on Speech Communication and Technol- ogy, pages 1419-1422, Rhodes, Greece, September.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Figure 1: Utterance classifier runtime operation",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"text": "Example phone sequences learned by the training procedure from domain A training speech files.",
"content": "<table><tr><td>WordsHM Human transcriptions of the training speech</td></tr><tr><td>files are used to build a word trigram model. The</td></tr><tr><td>classifier is trained on the word strings resulting</td></tr><tr><td>from recognizing the training speech files with this</td></tr><tr><td>word trigram model. At runtime, the classifier is ap-</td></tr><tr><td>plied to the results of recognizing the test files with</td></tr><tr><td>the word trigram model.</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"num": null,
"text": "The table lists some examples of such phone strings learned from domain A training speech files, together with English words, or parts of words (shown in bold type), they may correspond to. (Of course, the words play no part in the method and are only included for expository purposes.) The phone strings are shown in the DARPA phone alphabet.",
"content": "<table><tr><td colspan=\"4\">Rejection PhoneMM WordHM WordHH</td></tr><tr><td>rate (%)</td><td>accuracy</td><td colspan=\"2\">accuracy accuracy</td></tr><tr><td>0</td><td>74.6</td><td>76.2</td><td>77.0</td></tr><tr><td>10</td><td>79.5</td><td>81.1</td><td>81.5</td></tr><tr><td>20</td><td>84.4</td><td>85.8</td><td>86.2</td></tr><tr><td>30</td><td>89.4</td><td>90.5</td><td>90.9</td></tr><tr><td>40</td><td>94.1</td><td>94.7</td><td>94.4</td></tr><tr><td>50</td><td>97.2</td><td>97.3</td><td>96.7</td></tr><tr><td colspan=\"4\">Table 2: Phone-based and word-based utterance classifi-</td></tr><tr><td colspan=\"3\">cation accuracy for domain A</td><td/></tr></table>",
"type_str": "table",
"html": null
},
"TABREF4": {
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"4\">: Phone-based and word-based utterance classifi-</td></tr><tr><td colspan=\"3\">cation accuracy for domain B</td><td/></tr><tr><td colspan=\"4\">Rejection PhoneMM WordHM WordHH</td></tr><tr><td>rate (%)</td><td>accuracy</td><td colspan=\"2\">accuracy accuracy</td></tr><tr><td>0</td><td>68.2</td><td>68.9</td><td>69.9</td></tr><tr><td>10</td><td>73.3</td><td>73.7</td><td>74.9</td></tr><tr><td>20</td><td>78.9</td><td>79.2</td><td>80.2</td></tr><tr><td>30</td><td>84.8</td><td>84.7</td><td>85.5</td></tr><tr><td>40</td><td>89.7</td><td>89.3</td><td>90.2</td></tr><tr><td>50</td><td>94.1</td><td>93.3</td><td>94.5</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF6": {
"num": null,
"text": "Phone recognition accuracy and phone string classification accuracy (PhoneMM with no rejection) for increasing values of N max for domain B.",
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}