|
{ |
|
"paper_id": "1991", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:36:42.705394Z" |
|
}, |
|
"title": "PROCESSING UNKNOWN WORDS IN CONTINUOUS SPEECH RECOGNITION", |
|
"authors": [ |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Kita", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "ATR Interpreting Telephony Research Laboratories Seika-cho", |
|
"institution": "", |
|
"location": { |
|
"addrLine": "Souraku-gun", |
|
"postCode": "619-02", |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Terumasa", |
|
"middle": [], |
|
"last": "Ehara", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "ATR Interpreting Telephony Research Laboratories Seika-cho", |
|
"institution": "", |
|
"location": { |
|
"addrLine": "Souraku-gun", |
|
"postCode": "619-02", |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Tsuyoshi", |
|
"middle": [], |
|
"last": "Morimoto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "ATR Interpreting Telephony Research Laboratories Seika-cho", |
|
"institution": "", |
|
"location": { |
|
"addrLine": "Souraku-gun", |
|
"postCode": "619-02", |
|
"settlement": "Kyoto", |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Current continuous speech recognition systems essentially ignore unknown words. Systems are designed to recognize words in the lexicon. However, for using speech recognition systems in real applications of spoken-language processing, it is very important to process unknown words. This paper proposes a contin uous speech recognition method which accepts any utterance that might include unknown words. In this method , words not in the lexicon are transcribed as phone sequences, while words in the lexicon are recognized correctly. The HMM-LR speech recognition system, which is an integration of Hidden Markov Models and generalized LR parsing, is used as the baseline system, and enhanced with the trigram model of syllables to take into account the stochastic characteristics of a language. \u2022 Preliminary re sults indicate that our approach is very promis ing.", |
|
"pdf_parse": { |
|
"paper_id": "1991", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Current continuous speech recognition systems essentially ignore unknown words. Systems are designed to recognize words in the lexicon. However, for using speech recognition systems in real applications of spoken-language processing, it is very important to process unknown words. This paper proposes a contin uous speech recognition method which accepts any utterance that might include unknown words. In this method , words not in the lexicon are transcribed as phone sequences, while words in the lexicon are recognized correctly. The HMM-LR speech recognition system, which is an integration of Hidden Markov Models and generalized LR parsing, is used as the baseline system, and enhanced with the trigram model of syllables to take into account the stochastic characteristics of a language. \u2022 Preliminary re sults indicate that our approach is very promis ing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "For natural language applications, process ing unknown words is one of the most important problems. It is almost impossible to include all words in the system's lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRO DUCTION", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the area of written language process ing, some methods for handling unknown words have been proposed. For example, Tomita ( 1986) shows that unknown words can be han dled by the generalized LR parsing framework. In generalized LR parsing, it is easy to handle multi-part-of-speech words, and an unknown word can be handled by considering it as a spe cial multi-part-of-speech word. U nfortunat'ely, in the area of continu ous speech recognition, there has been little \u2022 progress in unknown word processing. Un like written language processing, in continu ous speech recognition, word boundaries are not clear and the correct input is not known, so the problem is more difficult. Recently, Asadi et al. (1990) proposed a method for automatically de tecting new words in a speech input. In their method, an explicit model of new words is used to recognize the existence of new words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 132, |
|
"text": "Tomita ( 1986)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 692, |
|
"end": 711, |
|
"text": "Asadi et al. (1990)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRO DUCTION", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This pap er proposes a continuous speech recognition method which accepts any utter ance that might include unknown\u2022 words. In our approach, the HMM-LR continuous speech recognition system for Japanese (Kita et al . 1989a ; Kita et al. 1989b; Hanazawa et al. 1990 ) is used as the baseline system, and is an integration of Hidden Markov Mo dels (HMM) (Levinson et al. 1983 ) and g' eneralized LR pars ing (Tomita 1986 ). The HMM-LR system is a syntax-directed continuous speech recognition system. The system outputs sentences that the grammar can accept.", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 221, |
|
"text": "(Kita et al . 1989a", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 242, |
|
"text": "Kita et al. 1989b;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 263, |
|
"text": "Hanazawa et al. 1990", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 351, |
|
"end": 372, |
|
"text": "(Levinson et al. 1983", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 417, |
|
"text": "(Tomita 1986", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRO DUCTION", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Hidden Markov Model is a stochas tic approach for modeling speech, and has been used widely for speech recognition. It is suit able for handling the uncertainty. that arises in speech, for example, contextual effects, speaker variabilities, etc. Moreover, if the HMM unit is a phone, then any word models can be com posed of phone models. Thus, it is easy to construct a large vocabulary speech recognition system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRO DUCTION", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In our approach, two kinds of grammars are used. The first grammar is a normal gram mar which describes ou r task . The lexicon for the task is embedded in this grammar as phone sequences. The second grammar describes the Japanese phonemic structure, in which con straints between phones are written. These two grammars are merged and used in the HMM-LR system. The HMM-LR system outputs words in the lexicon if no unknown word is included in a speech input. If an unknown word is in cluded, then the system outputs a phonemic transcription that corresponds to the unknown word. However, the second grammar by itself is too weak to get correct phonemic transcrip tions. We strengthened the grammar by adding other linguistic information, the trigram model based on Japanese syllables. A trigram model is an extremely rough approximation of a lan guage, but it is very practical and useful. By adding the trigram model of syllables, the per formance of the system is improved drastically.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "INTRO DUCTION", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "First, we will review the baseline system, the HMM-LR continuous speech recognition system ( Figure 1 ). This system is an integra tion of the phone-based HMM and generalized LR parsing.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 93, |
|
"end": 101, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "HMM-LR CONTINUOUS SPEECH RECOGNITION SYSTEM", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In HMM-LR, the LR parser is used as a language source model for symbol predic- tion/ generation. Thus, we will hereaft er call the LR parser the predictive LR p a rs er. A phone-based predictive LR parser predicts next phones at each generation step and gener ates many possible sentences as phone se quences. The predictive LR parser determines next phones using the LR parsing table of the specified grammar and splits the parsing stack not only for grammatical ambiguity but also for phone variation. Because the predictive LR parser uses context-free rules whose terminal symbols are phone names, the phonetic lexi con for the specified task is embedded in the grammar. An example of context-free gram-mar rules with a phonetic lexicon is shown in Figure 2 . Rule ( 5) indicates the definite arti cle \"the\" before consonants, while rule ( 6) in dicates the \"the\" before vowels. Rules 7, (8),", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 752, |
|
"end": 760, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "HMM-LR CONTINUOUS SPEECH RECOGNITION SYSTEM", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "(1) s --+ NP VP (2) NP --+ DET N (3) VP --+ V (4) VP --+ VNP (5) DET --+ / z / / a / (6) DET --+ / z / /i/ (7) N --+ / m / / a e/ / n/ (8) N --+ / ae/ / p/ / a/ / I/ (9) V --+ /iy/ /ts/ (10) V --+ /s/ /ih/ / ng/ /s/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LR CONTINUOUS SPEECH RECOGNITION SYSTEM", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "and (10) indicate the words \"man\" , \"apple\" , \"eats\" and \"sings\" , respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LR CONTINUOUS SPEECH RECOGNITION SYSTEM", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The actual recognition process is as fol lows. First, the parser picks up all phones pre dicted by the initial state of the LR parsing table and invokes the HMM models to verify the existence of these predicted phones. The parser then proceeds to the next state in the LR parsing table. During this process, all pos sible partial parses are constructed in parallel. The HMM phone verifier receives a probabil ity array which includes end point candidates and their probabilities, and updates it using an HMM probability calculation. This prob ability array is attached to each partial parse. When the highest probability in the array is un der a certain threshold level, the partial parse is pruned. The recognition process proceeds in this way until the entire speech input is pro cessed. In this case, if the best probability point reaches the end of the speech data, parsing ends successfully.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LR CONTINUOUS SPEECH RECOGNITION SYSTEM", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "High recognition performance is attained by driving HMMs directly without any inter vening structures such as a phone lattice. A more detailed algorithm is presented in (Kita et al. 1989a; Kita et al. 1989b ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 188, |
|
"text": "(Kita et al. 1989a;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 189, |
|
"end": 206, |
|
"text": "Kita et al. 1989b", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM-LR CONTINUOUS SPEECH RECOGNITION SYSTEM", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Language models such as context-free grammars or finite state grammars are effective in reducing the search space of a speech recog niton system. These models, however, ignore the stochastic characteristics of a language. By introducing stochastic language models, we can assign the a priori probabilities to word/phone sequences. These probabilities, together with acoustic probabilities, determine most likely recognition candidates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "STOCHASTIC LANGUAGE MODELING", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Having observed acoustic data y, a speech recognizer must decide a word sequence w that satisfies the following condition: P(wly) = max P(wly) w By Bayes' rule, P( I ) = P(ylw)P (w) w y P(y)", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 181, |
|
"text": "(w)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "STOCHASTIC LANGUAGE MODELING", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Since P(y) does not depend on w, maxi mizing P(wly) is equivalent to maximizing P(ylw)P(w) . P(w) is the a priori probability that the word sequence w will be uttered, and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "STOCHASTIC LANGUAGE MODELING", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "is estimated by the language model. P(ylw) is estimated by the acoustic model. Note that we are using HMM as an acoustic model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "STOCHASTIC LANGUAGE MODELING", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Word bigram/trigram models are exten sively used to correct recognition errors and im prove recognition accuracy (Shikano 1987 ; Pae seler and Ney 1989) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 126, |
|
"text": "(Shikano 1987", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TRIGRAM MODEL OF SYLLABLES", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The general idea of a trigram model can be easily applied to Japanese syllables. A typi cal syllable in Japanese is in the form of a CV, namely one consonant followed by one vowel, and the number of syllables is very small ( about one hundred) . Moreover, Japanese syllables seem to have a special stochastic structure. (Jelinek and Mercer 1980) . Given a collection of training data, the interpolation weights are estimated as follows ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 345, |
|
"text": "(Jelinek and Mercer 1980)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "TRIGRAM MODEL OF SYLLABLES", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "L i qi = l holds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Make an initial guess of qi . that", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "j-th data is removed from _ the ttaining data. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Calculate i-gram probabilities ff when the", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The HMM-LR system is a syntax-directed continuous speech recognition system. If we use a grammar which describes the Japanese phone mic structure, we can then constr\"uct a phonetic typewriter for Japanese. This grammar includes rules like \"a sequence of consonants doesn't ap pear\" or \"the syllabic nasal /N/ doesn't appear at the head of a word\" . This grammar does not include phonemic spellings for each word, so this grammar is suitable for transcribing an unknown word as a phone sequence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GRAMMAR FOR JAPANESE PHONEMIC STRUCTURE", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "However, because the perple\ufffdity 1 of this grammar is quite large, the trigram model of Japanese syllables is used at the same time. By adding the trigram model of syllables, the per plexity of the grammar is reduced from 18.3 to 4.3 (Kawabata et al . 1990 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 255, |
|
"text": "(Kawabata et al . 1990", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GRAMMAR FOR JAPANESE PHONEMIC STRUCTURE", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "For processing unknown words, two kinds of grammars are used. The first grammar is a normal grammar which describes our task. The phonemic spellings for each word are also in cluded in this grammar. The second grammar is a grammar for Japanese phonemic structure, mentioned in the previous subsection. Here after, these two grammars are referred to as the task grammar and the phonemic grammar, re spectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "UNKNOWN WORD PROCESSING", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "These two grammars are merged and used in the HMM-LR system. When merging two grammars, the start symbol of the phonemic grammar is replaced with pre-terminal names that might include unknown words (in our ex periments, proper-noun is allowed to include unknown words) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "UNKNOWN WORD PROCESSING", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "If a speech_ input . . includes an unknown word, then a segment of speech input does not match well with any word in the system's lex icon. In this case, the grammar for phone mic structure produces the phone sequence that matches well with the unknown word. If the speech input includes no unknown word, then the HMM-LR system outputs words in the lex icon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "UNKNOWN WORD PROCESSING", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The HMM-LR continuous speech recogni tion system uses the beam-search technique to reduce the search space. A group of likely recog nition candidates are selected using the likeli hood of each candidate. The likelihood S is calculated as follows. s = (1 ->.)s(HMM) + >.s (SYLLABLE) s(HMM ) and s(SY LLABLE) are the log like lihoods based on the HMM and the trigram model of syllables, respectively. The scaling pa rameter >. is introdued to adjust the scaling of the two kinds of likelihoods, as determined by preliminary experiments.", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 281, |
|
"text": "(SYLLABLE)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RECOGNITIO N LIKELIHOOD", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "At the end of recognition, the likelihood of recognition candidates that include unknown words are penalized a small value to reduce the false alarms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RECOGNITIO N LIKELIHOOD", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "HMMs used in the experiments are basi cally the same as reported in (Hanazawa et al. 1990 ). _HMM phone \u2022 models \u2022based on the dis crete HMM are used as phone verifiers. A three loop model for consonants and a one-loop model for vowels are tr. ained using each phone data extracted from the ATR isolated word database (Kuwahara et al. 1989 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 89, |
|
"text": "(Hanazawa et al. 1990", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 339, |
|
"text": "(Kuwahara et al. 1989", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM PHONE MODELS", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Duration control techniques and separate vector quantization are used to achieve accurate phone recognition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "HMM PHONE MODELS", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The experiments were carried out using 25 sentences including 279 phrases uttered by one male speaker.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SPEECH DATA", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The speech is sampled at 12kHz, pre emphasized with a filter whose transform func tion is (l-0.97z -1 ), and windowed using a 256point Hamming window every 9 msec. Then, 12-order LPC analysis is carried out. Spectrum, difference cepstrum coefficients, and power are computed. Multiple VQ codebooks for each fea ture were generated using 216 phonetically bal anced words. Hard vector quantization without the fuzzy VQ was performed for HMM training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SPEECH DATA", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Fuzzy vector quantization (fuzziness = 1.6) was used for test data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SPEECH DATA", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Syllable trigrams were estimated using a large number of training_ texts extracted from the ATR dialogue database (Ehara et aL 1990 ) . This database conta111s not only raw texts but also various kinds of syntactic/semantic infor mation, such as parts of speech, pronouncia tion and conjugational patterns, etc. The train ing texts includes approximately 73,000 phrases . and 300,000 syllables.", |
|
"cite_spans": [ |
|
{ |
|
"start": 114, |
|
"end": 131, |
|
"text": "(Ehara et aL 1990", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LINGUISTIC DATA", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "As stated earlier, the task grammar and the phonemic grammar are merged into one grammar and used in the \u2022 HMM-LR system. The task grammar describes the domain of an International Conference Secretarial Service and has 1,461 rules including 1,035 words. Of course, all the words which appea\ufffd in the test data are included in this grammar.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "GRAMMARS", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "To evaluate the unknown word processing method, all proper nouns (8 words), such as a person's name and a place name, were removed from the task grammar. Table 1 shows the transcription rates for phrases that include unknown words. Here the transcription rate is equal to phone accuracy (Lee 1989) , which can be calculated as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 297, |
|
"text": "(Lee 1989)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 161, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "GRAMMARS", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "totalsubinsdel phone accuracy = l", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "RESULTS", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "x 100 tota where total indicates the total number of phones in test data, and sub, ins and del are the number Table 2 shows examples of recognition re sults that include unknown words. By using the trigram model of Japanese syllables, the system can output very close phonemic transcriptions for unknown words. Table 3 shows the phrase recognition rates for two kinds of grammars, the task grammar and a merged grammar consisting of the task grammar and the phonemic grammar . These grammars are both enhanced with the trigram model of syllables. By adding the phonemic grammar, the phrase recognition rate dropped from 87.5% to 81.7%. This is because the phonemic grammar sometimes causes a word to be recognized as a phone sequence despite the word being in the lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 117, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 318, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "RESULTS", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "In this paper, we described a continuous recognition method that can process unknown 141 81.7% 86 .4% 87.5%", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSION", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "words. The key idea is merging a task gram mar and a phonemic grammar. If no unknown word is included in the speech, then the system uses the task grammar and outputs a correct re sult. If an unknown w o rd is included, then the system uses the phonemic grammar and out puts a phonemic transcription for _ the unknown word. We also showed that the trigram mo<;).el of Japanese syllables is very effective in getting phonemic transcriptions for unknown words. This is our first approach. There are many problems that must be resolved. Further devel opment to improve the system is currently in progress.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "CONCLUSION", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Perplexity is a measurement of language model quality. It represents the average branching of the lan guage model. In general, as perplexity increases, speech recognition accuracy decreases . For more details, see(Jelinek 1990).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors are deeply grateful to Dr. Kurematsu, the president of ATR interpret ing Telephony Research Laboratories, all the members of the Speech Processing Department and the Knowledge and Data Base Department for their constant help and encouragement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "ACKNOWLEDGMENTS", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Effect of Reducing Ambiguity of Recognition Candidates in Japanese Bun setsu Units by 2nd-Order Markov Model of Syllables", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Araki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Murakami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ikehara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Tra nsactions of Information Processing Society of Japan", |
|
"volume": "30", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Araki, T.; Murakami, J .; and Ikehara, S. 1989 Effect of Reducing Ambiguity of Recognition Candidates in Japanese Bun setsu Units by 2nd-Order Markov Model of Syllables. Tra nsactions of Information Processing Society of Japan. Vol. 30, No. 4 ( in Japanese).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Automatic Detection of New Words in a Large Vocabulary Contin uous Speech Recognition System. Proceed ings of the 1990 International Co nference on Acoustics", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Asadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Makhoul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asadi, A.; Schwartz, R. S.; and Makhoul, J. 1990 Automatic Detection of New Words in a Large Vocabulary Contin uous Speech Recognition System. Proceed ings of the 1990 International Co nference on Acoustics, Speech, and Signal Process ing.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "ATR Dialogue Database", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Ehara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Ogura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Morimoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the International Conference on Sp oken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ehara, T.; Ogura, K.; and Morimoto, T. 1990 ATR Dialogue Database. Proceedings of the International Conference on Sp oken Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "ATR HMM-LR Continuous Speech Recognition System", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hanazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Nakamura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kawabata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Shikano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the 1990 Interna tional Conference on Acoustics, Speech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanazawa, T.; Kita, K.; Nakamura, S.; Kawabata, T.; and Shikano, K. 1990 ATR HMM-LR Continuous Speech Recognition System. Proceedings of the 1990 Interna tional Conference on Acoustics, Speech, and Signal Processing. Also In: Waibel, A. and Lee, K. F. (eds.) Rea dings in Sp eech Recognition. Morgan Kaufmann Publish ers.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Interpo lated Estimation of Markov Source Param eters from Sparse Data", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "Pattern Recogni tion in Pra ctice", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jelinek, F. and Mercer, R. L. 1980 Interpo lated Estimation of Markov Source Param eters from Sparse Data. In: Gelsema, E. S. and Kanal, L. N. (eds.) Pattern Recogni tion in Pra ctice. North Holland.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Self-Organized Language Modeling for Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Jelinek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Readings in Speech Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jelinek, F. 1990 Self-Organized Language Modeling for Speech Recognition, In: Waibel, A. and Lee, K. F. (eds.) Readings in Speech Recognition. Morgan Kaufmann Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "HMM Phone Recognition Using Syllable Trigrams", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kawabata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hanazawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Itoh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Shikano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "IEICE Te chnical Report", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--110", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kawabata, T.; Hanazawa, T.; Itoh, K.; and Shikano, K. 1990 HMM Phone Recognition Using Syllable Trigrams. IEICE Te chnical Report. SP89-110 (in Japanese).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "HMM Continuous Speech Recogni tion Using Predictive LR Parsing. Proceed ings of the 1989 Intern ational Co nference on Acoustics", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kawabata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Saito", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kita, K.; Kawabata, T.; and Saito, H. 1989a HMM Continuous Speech Recogni tion Using Predictive LR Parsing. Proceed ings of the 1989 Intern ational Co nference on Acoustics, Sp eech, and Signal Process zng.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Parsing Continuous Speech by HMM-LR Method. First International Workshop on Parsing Technologies", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kawabata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Saito", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kita, K.; Kawabata, T.; and Saito, H. 1989b Parsing Continuous Speech by HMM-LR Method. First International Workshop on Parsing Technologies.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "HMM Continuous Speech Recogni tion Using Stochastic Language Models", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kawabata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Hanazawa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the 1990 International Con fe re nce on Acoustics, Sp eech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kita, K.; Kawabata, T.; and Hanazawa, T. 1990 HMM Continuous Speech Recogni tion Using Stochastic Language Models. Proceedings of the 1990 International Con fe re nce on Acoustics, Sp eech, and Signal Processing.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Construction of a Large Scale Japanese Speech Database and its Management System", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Kuwahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Takeda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Sagisaka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Katagiri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Morikawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 1989 International Conference on Acous tics, Sp eech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kuwahara, H.; Takeda, K.; Sagisaka, Y.; Katagiri, S.; Morikawa, S.; and Wat an ab e, T. 1989 Construction of a Large Scale Japanese Speech Database and its Management System. Proceedings of the 1989 International Conference on Acous tics, Sp eech, and Signal Processing.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Automatic Speech Recog nition: The Development of the SPHINX System", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, K. F. 1989 Automatic Speech Recog nition: The Development of the SPHINX System. Kluwer Academic Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "An Introduction to the Application of the Theory of Probabilis tic Functions of a Markov Process to Au tomatic Speech Recognition", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Levinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rabiner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Sondhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1983, |
|
"venue": "Bell System Technical Journal", |
|
"volume": "62", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Levinson, S. E.; Rabiner, L. R.; and Sondhi, M. M. 1983 An Introduction to the Application of the Theory of Probabilis tic Functions of a Markov Process to Au tomatic Speech Recognition. Bell System Technical Journal. Vol. 62, No. 4.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Continuous Speech Recognition Using a Stochastic Language Model", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Paeseler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 1989 International Co nference on Acoustics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Paeseler, A. and Ney, H. 1989 Continuous Speech Recognition Using a Stochastic Language Model. Proceedings of the 1989 International Co nference on Acoustics, Sp eech, and Signal Processing.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Improvement of Word Recognition Results by Trigram Model", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Shikano", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Proceedings of the 1987 International Con fe re nce on Acoustics, Sp eech, and Signal Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shikano, K. 1987 Improvement of Word Recognition Results by Trigram Model. Proceedings of the 1987 International Con fe re nce on Acoustics, Sp eech, and Signal Processing.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Effi cient Parsing fo r Nat ural Language: A Fast Algorithm fo r Pra c tical Systems", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Tomita", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1986, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomita, M. 1986 Effi cient Parsing fo r Nat ural Language: A Fast Algorithm fo r Pra c tical Systems. Kluwer Academic Publish ers.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Schematic diagram of HMM-LR speech recognition system", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "An example of a grammar with phonetic lexicon", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"text": "Araki et al . (1989) suggest that a statistical method based on Japanese syllable sequences is effective for ambiguity resolution in speech recognition systems. Thus, a syllable trigram model is effective for recognizing Japanese syl lable sequences.In our syllable trigram model, the a pri ori probability P(S) that the syllable sequence S = s 1 , s2, ... , Sn will be uttered is calculated as follows(Kita et al . 1990) .P(s1 , . .. ,sn ) = n k=3 P(#lsn-1,sn ) P(sk I Sk-2 , Sk-i ) = qi f(sk I Sk-2 , Sk-i ) + q2 f(sk I Sk-1 ) + qa f(sk) + q4 C In the above expressions, \"#\" indicates the phrase boundary marker, and C is a uniform probability that each syllable will occur. The function . N counts the number of occurrences of its arguments in the training \ufffdata. The op timal interpolation weights qi are determined using deleted interpolation", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"text": "whe,re N is the number of syllables in train ing data, and 4. Replace qi \ufffd i\ufffdh iii and repeat from step 2.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>66.1%</td><td>1</td><td>95.3%</td><td>1</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Transcription rates for phrases that include unknown words Without syllable trigrams ( With syllable trigrams I" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>Correct higashiku ichitaroudesu</td><td>Input higashiku (place name) Meaning (I am) Ichitarou</td><td colspan=\"2\">Results syllable trigrams Without sylla ble trigrams With shigashiku higashiku ishitaoouutsusu ishitarou desu</td></tr><tr><td colspan=\"4\">takarasamadesune (You are) Mr. Takara (aren't you) takaasabautsunu takarasamadesune</td></tr><tr><td>kyoutoekikara</td><td>from Kyoto station</td><td>hyotorekitaafu</td><td>kyoutoekikara</td></tr><tr><td>kitaooj iekimade</td><td>to Kitaooji station</td><td colspan=\"2\">shitaouziekimare kitaoojiekimade</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Examples of recognition results that include unknown words" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>1</td><td/><td>87.5%</td></tr><tr><td>2 3</td><td>. -</td><td>93.5% 94.6%</td></tr><tr><td colspan=\"3\">of phones recognized as incorrect, deleted and</td></tr><tr><td>inserted, respectively.</td><td/><td/></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"text": "Phrase recognition rates (with syllable trigrams)" |
|
} |
|
} |
|
} |
|
} |