{
"paper_id": "H89-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:32:34.868661Z"
},
"title": "SPEAKER INDEPENDENT PHONETIC TRANSCRIPTION OF FLUENT SPEECH FOR LARGE VOCABULARY SPEECH RECOGNITION",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Levinson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Bell Laboratories Murray Hill",
"location": {
"postCode": "07974",
"country": "New Jersey"
}
},
"email": ""
},
{
"first": "M",
"middle": [
"Y"
],
"last": "Liberman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Bell Laboratories Murray Hill",
"location": {
"postCode": "07974",
"country": "New Jersey"
}
},
"email": ""
},
{
"first": "A",
"middle": [],
"last": "Ljolje",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Bell Laboratories Murray Hill",
"location": {
"postCode": "07974",
"country": "New Jersey"
}
},
"email": ""
},
{
"first": "L",
"middle": [
"G"
],
"last": "Miller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Bell Laboratories Murray Hill",
"location": {
"postCode": "07974",
"country": "New Jersey"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Speaker independent phonetic Iranscription of fluent speech is performed using an ergodic continuously variable duration hidden Markov model (CVDHMM) to represent the acoustic, phonetic and phonotactic structure of speech. An important property of the model is that each of its fifty-one states is uniquely identified with a single phonetic unit. Thus, for any spoken utterance, a phonetic transcription is obtained from a dynamic programming (DP) procedure for finding the state sequence of maximum likelihood. A model has been constructed based on 4020 sentences from the TIMIT database. When tested on 180 different sentences from this database, phonetic accuracy was observed to be 56% with 9% insertions. A speaker dependent version of the model was also constructed. The transcription algorithm was then combined with lexical access and parsing routines to form a complete recognition system. When tested on sentences from the DARPA resource management task spoken over the local switched telephone network, phonetic accuracy of 64% with 8% insertions and word accuracy of 87% with 3% insertions was measured. This system is presently operating in an on-line mode over the local switched telephone network in less than ten times real time on an Alliant FX-80.",
"pdf_parse": {
"paper_id": "H89-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "Speaker independent phonetic Iranscription of fluent speech is performed using an ergodic continuously variable duration hidden Markov model (CVDHMM) to represent the acoustic, phonetic and phonotactic structure of speech. An important property of the model is that each of its fifty-one states is uniquely identified with a single phonetic unit. Thus, for any spoken utterance, a phonetic transcription is obtained from a dynamic programming (DP) procedure for finding the state sequence of maximum likelihood. A model has been constructed based on 4020 sentences from the TIMIT database. When tested on 180 different sentences from this database, phonetic accuracy was observed to be 56% with 9% insertions. A speaker dependent version of the model was also constructed. The transcription algorithm was then combined with lexical access and parsing routines to form a complete recognition system. When tested on sentences from the DARPA resource management task spoken over the local switched telephone network, phonetic accuracy of 64% with 8% insertions and word accuracy of 87% with 3% insertions was measured. This system is presently operating in an on-line mode over the local switched telephone network in less than ten times real time on an Alliant FX-80.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Though rarely explicitly stated, a fundamental assumption on which many speech recognition systems are implicitly based is that speech is literate. That is, it is a code for communication having a small number of discrete phonetic symbols in its alphabet. These symbols are, however, merely mental constructs and, as such, are not directly accessible but are, instead, observable only in their highly variable acoustic manifestation. It is also well-known but equally seldom expressed that a hidden Markov model comprises a finite set of discrete inaccessible states observable only via a set of random processes, one associated with each hidden state. When these two simple ideas are juxtaposed, it seems to us inescapable that the most natural representation of speech by a hidden Markov model is one in which the hypothetical phonetic symbols are identified with the hidden states of the Markov chain and the variability of the measurable acoustic signal is captured by the observable, state-dependent random processes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "The mathematical details of just such a model are given in [6] . Its application to a smallvocabulary continuous speech recognition system and a large-vocabulary isolated word recognition system are described in [7] and [8] , respectively. Here we present a brief overview of the use of this approach in large vocabulary continuous speech recognition and some preliminary results of two experiments performed with it on the TIMIT [4] and DARPA [9] databases.",
"cite_spans": [
{
"start": 59,
"end": 62,
"text": "[6]",
"ref_id": null
},
{
"start": 212,
"end": 223,
"text": "[7] and [8]",
"ref_id": null
},
{
"start": 444,
"end": 447,
"text": "[9]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "We have constructed two models, a 51 state model on which the speaker-independent phonetic transcription results are based, and a 43 state model on which the speaker-dependent recognition of sentences from the DARPA resource management task are founded. The 51 states in the first model correspond to 51 of the phonetic symbols used in the standard transcriptions of the TIMIT sentences. , and the last two, log energy and its time-derivative, respectively. The temporal structure of the acoustic signal is reflected in the durational densities which are of the two-parameter gamma family. Because of the presence of the durational densities, selftransitions are forbidden.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE MODEL",
"sec_num": null
},
{
"text": "The parameters for both the 51 and 43 state models were estimated in the same way although on different training data. In both cases, the state transition matrix was computed from bigram statistics extracted from the Collins dictionary. No attempt was made to count bigrams resulting from word junctures. Also, in both cases, the respective databases were segmented by hand and labeled with respect to the appropriate phonetic alphabet. Acoustic observations were sorted into sets corresponding to the phonetic symbols. The necessary parameters, spectral means and covariances and durational means and standard deviations, were then calculated for each set independently. No parameter optimization was applied to these estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARAMETER ESTIMATION",
"sec_num": null
},
{
"text": "The 51 state speaker-independent model was trained on 4200 sentences of TIMIT data. Ten different sentences were selected from each of 402 different speakers. The state speakerdependent model was trained on one reading of the 450 sentences in the TIMIT phonetically balanced list by a single male speaker. These utterances were recorded over the local switched telephone network with a conventional telephone handset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARAMETER ESTIMATION",
"sec_num": null
},
{
"text": "At this writing, we have yet to train a speaker-independent model using the DARPA training material. Although we expect to do so, we are concerned about its utility since the phonetic contexts in this database are rather restrictive compared with those of the TIMIT sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARAMETER ESTIMATION",
"sec_num": null
},
{
"text": "Phonetic transcription is accomplished by means of a DP technique for finding the state sequence that maximizes the joint likelihood of state, duration and observation sequences. The details of this algorithm are given in [7] . Note that this procedure makes no use of lexical or syntactic structure. The algorithm runs in approximately twice real time on an Alliant FX-80.",
"cite_spans": [
{
"start": 222,
"end": 225,
"text": "[7]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PHONETIC TRANSCRIPTION",
"sec_num": null
},
{
"text": "The transcription algorithm was tested on 180 sentences from the TIMIT database. Neither the sentences nor the speakers were used in ihe training. Transcription accuracy was determined by computing the Levenshtein distance between the derived transcription and the standard transcription supplied with the database. By this measure, the 51 state model yielded a phonetic recognition rate of 56% with a 9% insertion rate. The 43 state model resulted in a 64% recognition rate with an 8% insertion rate on 48 sentences from the DARPA task collected from the male speaker on whose speech the model had been trained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXPERIMENTAL RESULTS ON TRANSCRIPTION",
"sec_num": null
},
{
"text": "The reader should bear in mind that these are the very first experiments performed with this system. We fully expect that the performance will improve greatly as a result of refinements we are presently making to the model. These include accounting for coarticulation, making the durational densities more faithful and using parameter reestimation techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXPERIMENTAL RESULTS ON TRANSCRIPTION",
"sec_num": null
},
{
"text": "The phonetic transcription algorithm described above is the first stage of a complete speech recognition system. The architecture of the system is unchanged from that described in [8] but the details of the lexical access procedure and the parser are utterly different from those given in the reference.",
"cite_spans": [
{
"start": 180,
"end": 183,
"text": "[8]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH RECOGNITION SYSTEM",
"sec_num": null
},
{
"text": "The lexical access procedure is simply that of computing the likelihood of every word in the lexicon over every sub-interval of the observation sequence. We define the likelihood of a word on an observation sub-sequence to be the joint likelihood of the standard phonetic transcription for that word as given in the lexicon and the phonetic transcription of that subsequence provided by the transcription algorithm. Because the standard transcription need not have the same length as the one computed for an arbitrary observation sub-sequence, the calculation is carried out by means of a DP algorithm. Note that this procedure is synchronized at the segment rate, not the frame rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH RECOGNITION SYSTEM",
"sec_num": null
},
{
"text": "The parser takes as input, the word lattice constructed by the lexical access procedure and finds the well-formed sentence of maximum likelihood. Here, well-formed means with respect to the strict DARPA resource management task grammar. This is a finite state grammar having 4767 states, 60433 state transitions, 90 final states and a maximum entropy of 4.4 bits/word. The parser itself is yet another DP algorithm. The search it effects is not pruned in any way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH RECOGNITION SYSTEM",
"sec_num": null
},
{
"text": "The system has been tested in an on-line mode over the switched local telephone network. Under these conditions, we obtained an 87% correct word recognition rate and a 3% insertion rate. On an Alliant FX-80, a sentence is recognized in less than ten times real time. A sample of the recognizer output is shown in figure 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH RECOGNITION SYSTEM",
"sec_num": null
},
{
"text": "PHONETIC TRANSCRIPTION: h@riRriEZUzpl>grUDENw^ndTWz&ndSEdED&nd>rTtUsiZ &kObS&n DURATIONS: 5 5 7 4 8 8 7 51017 912 4 613 8 8 7 6 9 7 6 6 7 4 9 1910 3 6 3 11 17 5 12 4 4 5 3 9 6 7 6 7 14 5 8 3 10 12 31375 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH RECOGNITION SYSTEM",
"sec_num": null
},
{
"text": "We have presented some very early results of experiments on phonetic transcription and recognition of fluent speech based on a novel use of a hidden Markov model. While our error rates are substantially higher than those achieved by more conventional systems [5, 3, 10] , we believe that by improving the acoustic/phonetic model -the only adjustable part of the systemresults comparable to those obtained by other investigators can be realized.",
"cite_spans": [
{
"start": 259,
"end": 262,
"text": "[5,",
"ref_id": null
},
{
"start": 263,
"end": 265,
"text": "3,",
"ref_id": null
},
{
"start": 266,
"end": 269,
"text": "10]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "On the use of Bandpass Liftering in Speech Recognition",
"authors": [
{
"first": "B",
"middle": [
"H"
],
"last": "Juang",
"suffix": ""
},
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Wilpon",
"suffix": ""
}
],
"year": 1987,
"venue": "IEEE Trans. Acoust. Speech and Signal Processing",
"volume": "35",
"issue": "7",
"pages": "947--954",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juang, B. H., Rabiner, L. R. and Wilpon, J. G., \"On the use of Bandpass Liftering in Speech Recognition\", IEEE Trans. Acoust. Speech and Signal Processing, ASSP-35 (7), pp. 947-954, July, 1987.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Continuous Speech Recognition Results of the BYBLOS System on the DARPA 1000-Word Resource Management Database",
"authors": [
{
"first": "F",
"middle": [],
"last": "Kubala",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. ICASSP-88",
"volume": "",
"issue": "",
"pages": "291--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kubala, F. et al., \"Continuous Speech Recognition Results of the BYBLOS System on the DARPA 1000-Word Resource Management Database\", Proc. ICASSP-88, New York, NY, pp. 291-294, April, 1988.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Speech Database Development: Design and Analysis of the Acoustic-Phonetic Corpus",
"authors": [
{
"first": "L",
"middle": [
"F"
],
"last": "Lamel",
"suffix": ""
},
{
"first": "R",
"middle": [
"H"
],
"last": "Kassel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 1986,
"venue": "Proc. DARPA Speech Recognition Workshop",
"volume": "",
"issue": "",
"pages": "100--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lamel, L. F., Kassel, R. H. and Seneff, S., \"Speech Database Develop- ment: Design and Analysis of the Acoustic-Phonetic Corpus\", Proc. DARPA Speech Recognition Workshop, Palo Alto, CA, pp. 100-109, Feb., 1986.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Large Vocabulary Speaker-Independent Speech Recognition System using HMM",
"authors": [
{
"first": "K",
"middle": [
"F"
],
"last": "Lee",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Hon",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. ICASSP-88",
"volume": "",
"issue": "",
"pages": "123--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, K. F. and Hon, H. W., \"Large Vocabulary Speaker-Independent Speech Recognition System using HMM\", Proc. ICASSP-88, New York, NY, pp. 123 126, April, 1988.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Continuously Variable Duration Hidden Markov Models for Automatic Speech Recognition",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Levinson",
"suffix": ""
}
],
"year": 1986,
"venue": "Computer Speech and Language",
"volume": "1",
"issue": "1",
"pages": "29--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levinson, S. E., \"Continuously Variable Duration Hidden Markov Models for Automatic Speech Recognition\", Computer Speech and Language 1 (1), pp. 29-45, 1986.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Continuous Speech Recognition by means of Acoustic-Phonetic Classification Obtained from a Hidden Markov Model",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Levinson",
"suffix": ""
}
],
"year": 1987,
"venue": "Proc. ICASSP-87",
"volume": "",
"issue": "",
"pages": "93--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levinson, S. E., \"Continuous Speech Recognition by means of Acoustic- Phonetic Classification Obtained from a Hidden Markov Model\", Proc. ICASSP-87, Dallas, TX, pp. 93-96, April, 1987.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Large Vocabulary Speech Recognition using a Hidden Markov Model for Acoustic Phonetic Classification",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Levinson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ljolje",
"suffix": ""
},
{
"first": "L",
"middle": [
"G"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. ICASSP-88",
"volume": "",
"issue": "",
"pages": "505--508",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levinson, S. E., Ljolje, A. and Miller, L. G., \"Large Vocabulary Speech Recognition using a Hidden Markov Model for Acoustic Phonetic Classif- ication\", Proc. ICASSP-88, New York, NY, pp. 505-508, April, 1988.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The DARPA 1000-Word Resource Management Database for Continuous Speech Recognition",
"authors": [
{
"first": "P",
"middle": [],
"last": "Price",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pallett",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. ICASSP-88",
"volume": "",
"issue": "",
"pages": "651--654",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Price, P., Fisher, W., Bernstein, J. and Pallett, D., \"The DARPA 1000- Word Resource Management Database for Continuous Speech Recognition\", Proc. ICASSP-88, New York, NY, pp. 651-654, April, 1988.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Some Preliminary Results on Speaker Independent Recognition of the DARPA Resource Management Task",
"authors": [
{
"first": "R",
"middle": [],
"last": "Pieraccini",
"suffix": ""
},
{
"first": "C",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Wilpon",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pieraccini, R., Lee, C. H., Rabiner, L. R. and Wilpon, J. G., \"Some Preliminary Results on Speaker Independent Recognition of the DARPA Resource Management Task\", in this proceedings.",
"links": null
}
},
"ref_entries": {}
}
}