{
"paper_id": "C96-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:51:55.686031Z"
},
"title": "N-th Order Ergodie Multigram HMM for Modeling of Languages without Marked Word Boundaries",
"authors": [
{
"first": "Hubert Hin-Cheung",
"middle": [],
"last": "Law",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of IIong Kong",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "I,;rgodie IIMMs have been successfully used for modeling sentence production. llowever for some oriental languages such as Chinese, a word can consist of multiple characters without word boundary markers between adjacent words in a sentence. This makes wordsegmentation on the training and testing data necessary before ergodic ItMM can be applied as the langnage model. This paper introduces the N-th order Ergodic Mnltigram HMM for language modeling of such languages. Each state of the IIMM can generate a variable number of characters corresponding to one word. The model can be trained without wordsegmented and tagged corpus, and both segmentation and tagging are trained in one single model. Results on its applicw Lion on a Chinese corpus are reported.",
"pdf_parse": {
"paper_id": "C96-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "I,;rgodie IIMMs have been successfully used for modeling sentence production. llowever for some oriental languages such as Chinese, a word can consist of multiple characters without word boundary markers between adjacent words in a sentence. This makes wordsegmentation on the training and testing data necessary before ergodic ItMM can be applied as the langnage model. This paper introduces the N-th order Ergodic Mnltigram HMM for language modeling of such languages. Each state of the IIMM can generate a variable number of characters corresponding to one word. The model can be trained without wordsegmented and tagged corpus, and both segmentation and tagging are trained in one single model. Results on its applicw Lion on a Chinese corpus are reported.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Statistical language modeling offers advantages including minimal domain specific knowledge and hand-written rules, trainability and scalability given a language corpus. Language models, such as N-gram class models (Brown et al., 1992) and Ergodic Hidden Markov Models (Kuhn el, al., 1994) were proposed and used in applications such as syntactic class (POS) tagging for English (Cutting et al., 1992) , clustering and scoring of recognizer sentence hypotheses.",
"cite_spans": [
{
"start": 215,
"end": 235,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF0"
},
{
"start": 269,
"end": 289,
"text": "(Kuhn el, al., 1994)",
"ref_id": null
},
{
"start": 379,
"end": 401,
"text": "(Cutting et al., 1992)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "IIowever, in Chinese and many other oriental languages, there are no boundary markers, such as space, between words. Therefore preprocessors have to be used to perform word segmentation in order to identify individual words before applying these word-based language models. As a result current approaches to modeling these languages are separated into two seperated processes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Word segmentation is by no means a trivial process, since ambiguity often exists. Pot proper segmentation of a sentence, some linguistic information of the sentence should be used. iIowever, commonly used heuristics or statistical based approaches, such as maximal matching, fl'equency counts or mutual information statistics, have to perform the segmentation without knowledge such as the resulting word categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "To reduce the impact of erroneous segmentation on the subsequent language model, (Chang and Chan, 1993) used an N-best segmentation interface between them. llowever, since this is still a two stage model, the parameters of the whole model cannot be optimized together, and an Nbest interface is inadequate for processing outputs from recognizers which can be highly ambiguous.",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Chang and Chan, 1993)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "A better approach :is to keep all possible segmentations in a lattice form, score the lattice with a language model, and finally retrieve the best candidate by dynamic programming or some searching algorithms. N-gram models arc usually used for scoring (Gu et al., 1991 ) (Nagata, 1994 , but their training requires the sentences of the corpus to be manuMly segmented, and even class-tagged if class-based N-gram is used, as in (Nagata, 1994) .",
"cite_spans": [
{
"start": 253,
"end": 269,
"text": "(Gu et al., 1991",
"ref_id": "BIBREF5"
},
{
"start": 270,
"end": 285,
"text": ") (Nagata, 1994",
"ref_id": null
},
{
"start": 428,
"end": 442,
"text": "(Nagata, 1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "A language model which considers segmentation ambiguities and integrates this with a Ngram model, and able to be trained and tested on a raw, unsegmented and untagged corpus, is highly desirable for processing languages without marked word boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1"
},
{
"text": "Based on the Hidden Markov Model, the Ergodic Multigram llidden Markov Model (l,aw and Chan, 1996) , when applied as a language model, can process directly on unsegmented input corpus as it allows a variable mmfl)er of characters in each word class. Other than that its prol)erties are sin> liar to l';rgodic tlidden Markov Models (Kuhn ct al., 1994) , that both training and scoring can be done directly on a raw, unCagged corpus, given a lexicon with word classes. Specifically, the N-Oh order F, rgodic Multigram It M M, as in conventional class-based (N+I)-gram model, assumes a (loubly stochastic process in sentence production. The word-class sequence in a scalene(: follows Che N-Oh order Markov assulnl> tion, i.e. tile identity of a (:lass in the s('.lite[Ic(~ delmn(Is only on tim previous N classes, and the word observed depelads only on the class it l)elongs to. The difference is thai, this is a multigram model (Doligne and Bimbot, 1995) in the sense Chat each state (i.e. node in the IIMM) (:a,t genera.re a wu-iable number of ot)served character sequences. Sentence boundaries are inodelcd as a sl)ecial class.",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "(l,aw and Chan, 1996)",
"ref_id": null
},
{
"start": 331,
"end": 350,
"text": "(Kuhn ct al., 1994)",
"ref_id": null
},
{
"start": 926,
"end": 952,
"text": "(Doligne and Bimbot, 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "2.1"
},
{
"text": "This model can be apl/lied to a.ll input sent(race or a characCer lattice as a language model. 'Fhe maxinnun likelihood scat(: sequence through l,he model, obtaine(t using the ViCerl)i or Stack I)(> coding AlgoriChln, ret)resenCs the 1)est particular segmentation and class-tagging for the input sentence or lattice, since transition of states denotes a wor(t boundary and state identity denotes tile ClU'rent word class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "2.1"
},
{
"text": "Le.xi('on A lexicon (CK] P, 1993) of 78,322 words, each con~ tainiug up to 10 characters, is awdlabh~ for use ill this work. l'ractically all characters have an cnCl:y ill the lexicon, so Chat out-of-vocalmlary words are modeled as indivi(hlal eharacters. There is a total of 192 syntactic classes, arranged in a hierarchical way. For example, the month names arc deuoted by the class Ndabc, where lg denotes Nouu, Nd denotes 'lbmporal Nouns, Igda ['or 'l'im(~ lmmes and Ndab for reusabh' tilne names. '['here~ is a total of 8 major categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.2",
"sec_num": null
},
{
"text": "Each word ill the dictionary is aullol,al.cd with one or nlore syntactic tags, tel)resenting dilferent syntactic classes Che word cnn possibly belong to. Also, a frequ(mcy count tbr each word, base(l on a certain corpus, is given, bill without inforniation on its distribution over different syntactic classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.2",
"sec_num": null
},
{
"text": "T(:rminoh)gy I,el, )42 be the set of all Chinese words in l, hc lexicon. A word \"wk C W is made up of one or more characters, l,et ,s~ r = (.~;I, .';'21....\";T) denote, a sentence as a T-character sequence. A funcCion (5~,, is defined such Chat (Sw (~Vk, sit +r-I ) is ] if w,. is a r-character word st ... st+,,-1, and 0 otherwise. 1 1,et /2 be the Ul)per bound of r, i.e. t,11o maxinntm uumber of characters ill a word (10 ill this paper). I,et (2/ = {cl...cL} be the set, of syntactic classes, where L is the nmnber of syntactic (:lasses in the lexicon (192 in our case). Lot t? C W \u00d7 (/ denote Che relaCion for all syntactic classitications of the. lexicon, such ChaC ('tot:, el) @ C ill' cl is one of the syntactic classes tbr 'wk. Each word wk llltlSt belong to one or more of the classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.3",
"sec_num": null
},
{
"text": "A path Chrough the model represents a particular segnmnCation and (:lass Lagging for the Sell--I,(~IIC('.. I,et \u00a37 = ('wt, (:It ; \u2022 . . ; \"Wig, Cl K ) t)e a particular segmentation and (;lass tagging for the sentence s~', where Wk is the kth word and elk dCllOtCS tllc (;lass assigned to w,:, as illustrated below. l\"(,r C Co be proper, I1' 2,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.3",
"sec_num": null
},
{
"text": ".,~_, ) 1 aml (wk,cl~) C l' must be saCistied, where t0 = 1, tic = 7'+ 1 and tk-j < l,, for 1 < k < K.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.3",
"sec_num": null
},
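{
"text": "To make the terminology concrete, the following Python sketch (a toy illustration with a hypothetical four-word lexicon; none of the identifiers come from the paper) implements the delta_w indicator and the check that a candidate segmentation and tagging ℒ = (w_1, c_{l_1}; ...; w_K, c_{l_K}) is proper for a sentence:
# Toy illustration of the Section 2.3 terminology (hypothetical data).
TOY_LEXICON = {            # word -> set of syntactic classes, i.e. the relation C
    'AB': {'N'},
    'A': {'N', 'V'},
    'B': {'V'},
    'C': {'N'},
}

def delta_w(word, sentence, t, r):
    # 1 if word is the r-character word s_t ... s_{t+r-1} (0-based t here), else 0
    return 1 if len(word) == r and sentence[t:t + r] == word else 0

def is_proper(segmentation, sentence):
    # segmentation is [(w_1, c_1), ..., (w_K, c_K)]; the words must tile the sentence
    # exactly and every (w_k, c_k) pair must be licensed by the lexicon relation C.
    t = 0
    for word, cls in segmentation:
        r = len(word)
        if not delta_w(word, sentence, t, r):
            return False
        if cls not in TOY_LEXICON.get(word, set()):
            return False
        t += r
    return t == len(sentence)   # corresponds to t_K = T + 1 in the paper's 1-based indexing

print(is_proper([('AB', 'N'), ('C', 'N')], 'ABC'))   # True
print(is_proper([('A', 'V'), ('B', 'N')], 'AB'))     # False: ('B', 'N') is not in C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.3",
"sec_num": null
},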
{
"text": "ItMM S|:a|;es for l;.he N-th order model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "In Che tirst order IIMM (class 1)it(am) lnodel, each I1MM state corresl)onds directly to the word-class of a word. lhlt in general, for an N-Oh order IIMM model, siuce each class depends on N previous classes, each state has to rel)lJesellt C]I(t COlil])illa- where tie iS the current word (:lass, ci, is the previ-()us word class, etc. '['here is a CeCal of L N states, which may nleall too many l)aranl('ters (l/v+l possible state transitions, each state can transit to L other states) for the model if N is anything greater th an ont.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
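{
"text": "As a quick back-of-the-envelope illustration of this growth (our own arithmetic, using L = 192 as in the CKIP tag set):
# State and transition counts for an untied N-th order model with L = 192 classes.
L = 192
for N in (1, 2, 3):
    states = L ** N              # one state per combination of the N most recent classes
    transitions = L ** (N + 1)   # each state can transit to L successor states
    print(f'N={N}: {states:,} states, {transitions:,} possible transitions')
# N=1: 192 states, 36,864 transitions
# N=2: 36,864 states, 7,077,888 transitions
# N=3: 7,077,888 states, 1,358,954,496 transitions",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},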
{
"text": "'1'o solve this l)rol)lem, a reasonal)le aSSllllllilion can })c luade that the d('taih'xl (;lass idea titles of a mor(~ (listanl, word have, in general, less influence than the closer ones Co the current word class. Thus instead of using C as tim classitication relation for all l)revious words, a set of I~I'he ;algorithm to bc described ;tSSUlnCs tlt~Lt, th(,.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "(:ha.r;tctcr identities arc known for the S(!lltCttC(~ 8; ?, })It(, it can *also be al)plicd when ca.ch charttctcr position sL becomes a. set of possible (:h~u'a(:ter (:~Lndida.t, es by simply letting &,,(wk,sl +''-I) --i for all words wk which can be constructed from the c]mr~t(:ter positions st...st+, 1 of the input c]mractcr lattice. This enal)les the mo(M to 1)e used as the languzLgc model component for r(!(:ognizcrs and for decoding phoncti(: input. classification relations {C(\u00b0), C(1),...C (N-l) } can be used, where C(\u00b0) = C represents the original, most detailed classification relation for the current word, and C (n) is the less detailed classification scheme for the nth previous word at each state. Thus the number of states reduces",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "to LQ ----L(\u00b0)L (1) ...L (N-l) in which L('0 _ < L.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "Each state is represented as Qi = (c~\u00b0o)... elN-_~ O) where C (n) = {cln)}, 1 < I < L (n) is the class tag set for the nth previous word.",
"cite_spans": [
{
"start": 44,
"end": 53,
"text": "elN-_~ O)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "However, if no constraints are imposed on the series of classification relations C Oo , the number of possible transitions may increase despite a decrease in the number of states, since state transitions may become possible between every two state, resulting in a total of L(\u00b0)2L (02 ... L (N-1)2 possible transitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "A constraint is imposed that, given that a word belongs to the class cl n) in the classification C (n),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "we can determine the corresponding word class c}, ~+0 the given word will belong to in C(~+1), and for every word there is no extra classifications in C (n+l) not corresponding to one in C (n).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "Formally, there exist mapping functions 5 c('0 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "COO ~ C(\"+0, 0 _< n _< N-2, such that if C(n) ~(n+l)] ~ .~'(n) then ((wk, cl n)) 6 C (n)) => I ' '~1 ~ ) , (n+l), C(n+l)) (wk,c v ) 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "for all wk 6 W, and that y(n) is surjective. In particular, to model sentence boundaries, we allow $ to be a valid class tag for all C(n), and define 5e('~)($) = 2. This constraint is easily satisfied by using a hierarchical word-class scheme, such as the one in the CKIP lexicon or one generated by hierarchical word-clustering, so that the classification for more distant words (higher n in C (n)) uses a higher level, less detail tag set in the scheme. using Nth order Markov assumption and representing the class history as HMM states. $ denotes the sentence boundary, elk is $ for k _< 0, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.4",
"sec_num": null
},
{
"text": "Q~k re(\u00b0) c! N-l) ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Likelihood Formulation",
"sec_num": "2.5"
},
{
"text": "Note that Qlk can be de-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Likelihood Formulation",
"sec_num": "2.5"
},
{
"text": "termined from clk and Qlk-~ due to the constraint on the classification, and thus P(Qzk]Qlk_~) = P(ct~ IQl~-~).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Likelihood Formulation",
"sec_num": "2.5"
},
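{
"text": "With a hierarchical tag set such as CKIP's, the mappings F^(n) can simply truncate a detailed tag to a coarser prefix, and the successor state is then fully determined by the previous state and the new current class, which is why P(Q_{l_k} | Q_{l_{k-1}}) = P(c_{l_k} | Q_{l_{k-1}}). A minimal sketch follows; the tag strings and prefix lengths are our own invented example, not the configuration actually used in the paper.
# Sketch of the hierarchical classification relations C^(0) ... C^(N-1) of Section 2.4
# and of the deterministic successor-state computation used here.
N = 3                  # model order (each state records the N most recent class views)
DETAIL = [5, 3, 1]     # tag prefix length used for the n-th previous word (hypothetical)

def F(tag, n):
    # Mapping F^(n): C^(n) -> C^(n+1); here simply truncation to a coarser prefix.
    # The sentence-boundary tag '$' maps to itself at every level.
    return tag if tag == '$' else tag[:DETAIL[n + 1]]

def next_state(state, new_class):
    # state is Q = (c^(0), c^(1), ..., c^(N-1)); the class of the next word determines Q'.
    c0, *rest = state
    shifted = [F(c0, 0)] + [F(c, n + 1) for n, c in enumerate(rest[:-1])]
    return (new_class, *shifted)

q = ('Ndabc', 'Nda', 'N')          # current, previous and 2nd-previous class views
print(next_state(q, 'Vc123'))      # ('Vc123', 'Nda', 'N')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Likelihood Formulation",
"sec_num": "2.5"
},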
{
"text": "The likelihood of the sentence s T under the model is given by the sum of the likelihoods of its possible segmentations. Given the segmentation and class sequence \u00a3 of a sentence, the state sequence (Qz~ ... QI~) can be derived from the class sequence (eh...ci~.). Thus the observation probability of the sentence Given this tbrmulation the training procedure is mostly similar to that of the first order Ergodic Mnltigram HMM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Likelihood Formulation",
"sec_num": "2.5"
},
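{
"text": "For intuition, the likelihood of a short sentence can be reproduced by brute force: enumerate every proper segmentation and tagging and sum the products of class-transition and word-observation probabilities. The sketch below does this for a first-order (N=1) model with a toy lexicon and invented probability tables (all our own); it mirrors the formulation only, not the efficient forward computation of Section 3.2.
# Brute-force sentence likelihood for a toy first-order multigram model (illustration only).
from itertools import product

LEXICON = {'AB': ['n'], 'A': ['n', 'v'], 'B': ['v'], 'C': ['n']}
A = {('$', 'n'): 0.6, ('$', 'v'): 0.4, ('n', 'n'): 0.3, ('n', 'v'): 0.5,
     ('v', 'n'): 0.5, ('v', 'v'): 0.3, ('n', '$'): 0.2, ('v', '$'): 0.2}   # class transitions, '$' = boundary
B = {('AB', 'n'): 0.5, ('A', 'n'): 0.3, ('A', 'v'): 0.4, ('B', 'v'): 0.6, ('C', 'n'): 0.2}  # P(w | c)

def segmentations(s):
    # Yield every way of tiling s with lexicon words.
    if not s:
        yield []
        return
    for r in range(1, len(s) + 1):
        if s[:r] in LEXICON:
            for rest in segmentations(s[r:]):
                yield [s[:r]] + rest

def sentence_likelihood(s):
    total = 0.0
    for words in segmentations(s):
        for classes in product(*(LEXICON[w] for w in words)):
            p, prev = 1.0, '$'
            for w, c in zip(words, classes):
                p *= A[(prev, c)] * B[(w, c)]
                prev = c
            total += p * A[(prev, '$')]        # close with the sentence-final boundary
    return total

print(sentence_likelihood('ABC'))              # 0.005256 with these toy tables",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Likelihood Formulation",
"sec_num": "2.5"
},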
{
"text": "The forward variable is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward and Backward Procedure",
"sec_num": "3.2"
},
{
"text": "O't(i) = P(S1.-. St, QI(t)-\" Qi[ ~)N)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward and Backward Procedure",
"sec_num": "3.2"
},
{
"text": "where Q~(t) is the state of the [IMM when the word containing the character st as the last character is produced. for I <t <T--1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward and Backward Procedure",
"sec_num": "3.2"
},
{
"text": "As A, H arrays and the 5~, fimction are mostly 0s, considerable simplification can be done in irnph'.mentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward and Backward Procedure",
"sec_num": "3.2"
},
{
"text": "The likelihood of the sentence given the model can be evaluated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward and Backward Procedure",
"sec_num": "3.2"
},
{
"text": "LQ P(s'('lO N) = ~f_~.,r(i)aio i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward and Backward Procedure",
"sec_num": "3.2"
},
{
"text": "The Viterbi algo,'ithm [br this model can be ob tained by replacing the summations of the forward algorithm with maximizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward and Backward Procedure",
"sec_num": "3.2"
},
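{
"text": "A minimal forward pass for the first-order case (the only order trained in Section 4) can be sketched as follows. The variable names follow the text (alpha, a_ij, b_j, delta_w, R); the toy lexicon and probability tables are the same invented ones as in the earlier sketch, so the result matches the brute-force sum over segmentations.
# Forward procedure for a toy first-order Ergodic Multigram HMM (illustration).
LEXICON = {'AB': ['n'], 'A': ['n', 'v'], 'B': ['v'], 'C': ['n']}
A = {('$', 'n'): 0.6, ('$', 'v'): 0.4, ('n', 'n'): 0.3, ('n', 'v'): 0.5,
     ('v', 'n'): 0.5, ('v', 'v'): 0.3, ('n', '$'): 0.2, ('v', '$'): 0.2}
B = {('AB', 'n'): 0.5, ('A', 'n'): 0.3, ('A', 'v'): 0.4, ('B', 'v'): 0.6, ('C', 'n'): 0.2}
R, CLASSES = 2, ['n', 'v']      # R = maximum word length in the toy lexicon

def forward(s):
    # alpha[t][c] = P(s_1 ... s_t, the word ending at position t is in class c | model)
    T = len(s)
    alpha = [{c: 0.0 for c in CLASSES} for _ in range(T + 1)]
    for t in range(1, T + 1):
        for r in range(1, min(R, t) + 1):
            w = s[t - r:t]                     # candidate word s_{t-r+1} ... s_t
            if w not in LEXICON:
                continue
            for c in LEXICON[w]:
                if t - r == 0:                 # word starts the sentence: use a_0j
                    alpha[t][c] += A[('$', c)] * B[(w, c)]
                else:
                    alpha[t][c] += sum(alpha[t - r][cp] * A[(cp, c)]
                                       for cp in CLASSES) * B[(w, c)]
    return sum(alpha[T][c] * A[(c, '$')] for c in CLASSES)   # close with a_i0

print(forward('ABC'))           # 0.005256, equal to the brute-force likelihood above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Forward and Backward Procedure",
"sec_num": "3.2"
},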
{
"text": "Re-estimation Algorithm &(i, j) is detined as the probability that given a sentence .s~' and the model (_)N, a word ends at the character st in the state Qi an(l tile next word starts at the character st+l in the state Qj. Thus ~t(i, j) can be expressed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R s,+,",
"eq_num": "(j)"
}
],
"section": "3.3",
"sec_num": null
},
{
"text": "['or l < t < fl'--I 1 < i,j < LQ. turthermore dellne %(/) to be the probahility that, given Sl r and O N , a word ends at the character st in the state Qi. Thus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.3",
"sec_num": null
},
{
"text": "ctt(i)/3,(i) for 1 <t <7',1 <i< LQ. 7,(i)-p(sy.l\u00aeN)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.3",
"sec_num": null
},
{
"text": "Sulnlnation of (t (i, j) ()vet\" t gives tile expected number of times that state Qi transits to slate Qj in the sentence, aml stunmation of 7t(i) over t gives the expected number of state Qi occurring in it. Thtts the quotient of their summation over t gives aij, the new estimation for aij. aij --~_ [, ~'t(,,Y)/~_~ 7,(i) for 1 _< i,j .::( LQ t=l tin1",
"cite_spans": [
{
"start": 301,
"end": 322,
"text": "[, ~'t(,,Y)/~_~ 7,(i)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3.3",
"sec_num": null
},
{
"text": "The initial and fi,,a[ class probability estimates, a0j and ai0 can be re-estimated as follows. This represents the contribution of wk, occurring as the last word in sl, to ,~,(j). Also define 7't \u00b0~ (j) to be the I)robability that, given the sente.nce ,s'~\" and the model, we is observed to end at character st in the state Qj. (,~[~(j) ",
"cite_spans": [
{
"start": 329,
"end": 337,
"text": "(,~[~(j)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3.3",
"sec_num": null
},
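{
"text": "The re-estimation formulas above are ratios of expected counts, and on a toy sentence they can be checked by brute force: weight every proper segmentation and tagging by its posterior probability, accumulate transition and occupancy counts, and renormalize. The sketch below (same hypothetical toy model as in the earlier examples) does exactly that for the A table; a real implementation obtains the same expectations far more efficiently from the forward and backward variables.
# Brute-force check of the transition re-estimation (illustration; hypothetical toy model).
from collections import defaultdict
from itertools import product

LEXICON = {'AB': ['n'], 'A': ['n', 'v'], 'B': ['v'], 'C': ['n']}
A = {('$', 'n'): 0.6, ('$', 'v'): 0.4, ('n', 'n'): 0.3, ('n', 'v'): 0.5,
     ('v', 'n'): 0.5, ('v', 'v'): 0.3, ('n', '$'): 0.2, ('v', '$'): 0.2}
B = {('AB', 'n'): 0.5, ('A', 'n'): 0.3, ('A', 'v'): 0.4, ('B', 'v'): 0.6, ('C', 'n'): 0.2}

def analyses(s):
    # Yield (class path including boundary symbols, joint probability) for every
    # proper segmentation and tagging of s.
    def segs(x):
        if not x:
            yield []
            return
        for r in range(1, len(x) + 1):
            if x[:r] in LEXICON:
                for rest in segs(x[r:]):
                    yield [x[:r]] + rest
    for words in segs(s):
        for classes in product(*(LEXICON[w] for w in words)):
            path = ['$'] + list(classes) + ['$']
            p = 1.0
            for w, c in zip(words, classes):
                p *= B[(w, c)]
            for prev, cur in zip(path, path[1:]):
                p *= A[(prev, cur)]
            yield path, p

def reestimate_transitions(sentences):
    num = defaultdict(float)    # expected i -> j transition counts (like summing xi over t)
    den = defaultdict(float)    # expected state-occupancy counts (like summing gamma over t)
    for s in sentences:
        sent = list(analyses(s))
        Z = sum(p for _, p in sent)            # sentence likelihood P(s | model)
        for path, p in sent:
            post = p / Z                       # posterior weight of this analysis
            for prev, cur in zip(path, path[1:]):
                num[(prev, cur)] += post
                den[prev] += post
    return {k: v / den[k[0]] for k, v in num.items()}

for k, v in sorted(reestimate_transitions(['ABC', 'AB']).items()):
    print(k, round(v, 4))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.3",
"sec_num": null
},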
{
"text": "A corpus of daily newspaper articles is divided into training and testing sets for the experiments, which is 21M and 4M in size respectively. Th(' first order (N=I) algorithms are applied to the training sets, and parameters obtained after different iterations are used for testing. The initial parameters of the HMM are set based on the frequency counts from the lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "The class-transition probability aij is initialized as the a priori probability of the state P(Qj), estimated fl'om the relative frequency counts of the lexicon, bj(wk) is initialized as the relative count of the word wk within the class corresponding to the current word class in Qj. Words belonging to multiple classes have their counts distributed equally among them. Smoothing is then applied by adding each word count by 0.5 and normalizing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
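{
"text": "The initialization of b_j(w_k) described here is simple enough to state as code. In the sketch below the lexicon entries are hypothetical (the CKIP counts are not reproduced); each word frequency is split equally over the word's possible classes, every count is smoothed by adding 0.5, and the counts are normalized per class.
# Initialization of the word-observation probabilities b_j(w_k) (hypothetical lexicon).
from collections import defaultdict

LEXICON = {                 # word -> (possible classes, lexicon frequency count)
    'AB': (['n'], 40),
    'A':  (['n', 'v'], 30),
    'B':  (['v'], 20),
    'C':  (['n'], 10),
}

def init_observation_probs(lexicon, smooth=0.5):
    counts = defaultdict(dict)
    for word, (classes, freq) in lexicon.items():
        for c in classes:
            # split the frequency equally among the word's classes, then smooth
            counts[c][word] = freq / len(classes) + smooth
    b = {}
    for c, words in counts.items():
        total = sum(words.values())
        for word, n in words.items():
            b[(word, c)] = n / total           # P(w | class c)
    return b

for (w, c), p in sorted(init_observation_probs(LEXICON).items()):
    print(w, c, round(p, 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},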
{
"text": "After training, the Viterbi algorithm is used to retrieve the best segmentation and tagging \u00a3* of each sentence of the test corpus, by tracing the best state sequence traversed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
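{
"text": "For completeness, the Viterbi retrieval of ℒ* can be sketched with the same toy first-order model used in the earlier examples: it is the forward recursion with sums replaced by maximizations, plus back-pointers recording, for each character position and class, which word and predecessor gave the best score. All tables below are our invented example.
# Viterbi decoding for the toy first-order multigram model: best segmentation and tagging.
LEXICON = {'AB': ['n'], 'A': ['n', 'v'], 'B': ['v'], 'C': ['n']}
A = {('$', 'n'): 0.6, ('$', 'v'): 0.4, ('n', 'n'): 0.3, ('n', 'v'): 0.5,
     ('v', 'n'): 0.5, ('v', 'v'): 0.3, ('n', '$'): 0.2, ('v', '$'): 0.2}
B = {('AB', 'n'): 0.5, ('A', 'n'): 0.3, ('A', 'v'): 0.4, ('B', 'v'): 0.6, ('C', 'n'): 0.2}
R = 2

def viterbi(s):
    T = len(s)
    delta = [{} for _ in range(T + 1)]         # delta[t][c] = best score of a parse ending at t in class c
    back = [{} for _ in range(T + 1)]          # back[t][c] = (t_prev, c_prev, word)
    delta[0] = {'$': 1.0}
    for t in range(1, T + 1):
        for r in range(1, min(R, t) + 1):
            w = s[t - r:t]
            if w not in LEXICON:
                continue
            for c in LEXICON[w]:
                for cp, score in delta[t - r].items():
                    cand = score * A[(cp, c)] * B[(w, c)]
                    if cand > delta[t].get(c, 0.0):
                        delta[t][c] = cand
                        back[t][c] = (t - r, cp, w)
    best_c = max(delta[T], key=lambda c: delta[T][c] * A[(c, '$')])   # close with the boundary
    analysis, t, c = [], T, best_c
    while t > 0:                               # trace back the best state sequence
        tp, cp, w = back[t][c]
        analysis.append((w, c))
        t, c = tp, cp
    return list(reversed(analysis))

print(viterbi('ABC'))                          # [('AB', 'n'), ('C', 'n')] under these toy tables",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},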
{
"text": "The test-set perplexity, calculated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perplexity",
"sec_num": "4.2"
},
{
"text": "m'= exp(-M ]-- log(J'(Z', i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perplexity",
"sec_num": "4.2"
},
{
"text": "where the summation is taken over all sentences s~ ' in the testing corpus, and M represents the number of characters in it, is used to measure the performance of the model. The results for models trained on training corpus subsets of various sizes, and after various iterations are shown (Table 1 ). It is obvious that with small training corpus, over-training occurs with more iterations. With more training data, the performance improves and over-training is not evident.",
"cite_spans": [],
"ref_spans": [
{
"start": 289,
"end": 297,
"text": "(Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Perplexity",
"sec_num": "4.2"
},
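{
"text": "The perplexity figure is computed per character. A small sketch of the computation (the likelihood function is passed in, e.g. the toy forward procedure from the Section 3.2 example) looks like this:
# Character-level test-set perplexity: PP = exp(-(1/M) * sum over sentences of log P(s | model)).
import math

def test_set_perplexity(sentences, likelihood):
    M = sum(len(s) for s in sentences)                        # total characters in the test set
    log_prob = sum(math.log(likelihood(s)) for s in sentences)
    return math.exp(-log_prob / M)

# Example with the toy model of the earlier sketches: test_set_perplexity(['ABC', 'AB'], forward)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Perplexity",
"sec_num": "4.2"
},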
{
"text": "A further experiment is performed to use the models to decode phonetic inputs (Gu et el., 1991 Sentences from the testing corpus are first expanded into a lattice, formed by generating all the common homophones of each Chinese character. Tested on 360K characters, a character recognition rate of 91.24:% is obtained for the model trained after 8 iterations with 21M of training text. The results are satisfactory given that the test corpus contains many personal names and ()tit of vocabulary words, and the highly ambiguous nature of (;he problem.",
"cite_spans": [
{
"start": 78,
"end": 94,
"text": "(Gu et el., 1991",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic Input Decoding",
"sec_num": "4.3"
},
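{
"text": "The homophone-lattice experiment can be sketched in the same toy setting: each character of the test sentence is replaced by the set of characters sharing its pronunciation, and delta_w is relaxed (as in the footnote of Section 2.4) to accept any lexicon word that can be spelled from the candidate sets, after which the same Viterbi decoder applies. Everything below, including the homophone table, is invented for illustration.
# Expanding a sentence into a homophone lattice for decoding (toy illustration).
HOMOPHONES = {'A': {'A', 'X'}, 'B': {'B', 'Y'}, 'C': {'C'}}   # character -> same-pronunciation set
LEXICON = {'AB': ['n'], 'A': ['n', 'v'], 'B': ['v'], 'C': ['n'], 'XY': ['n']}

def lattice(sentence):
    # Column t holds every character candidate sharing the pronunciation of s_t.
    return [HOMOPHONES[ch] for ch in sentence]

def delta_w_lattice(word, cols, t):
    # Relaxed delta_w: 1 iff word can be built from candidate columns t .. t+len(word)-1.
    r = len(word)
    if t + r > len(cols):
        return 0
    return int(all(word[i] in cols[t + i] for i in range(r)))

cols = lattice('ABC')
for w in LEXICON:
    print(w, [t for t in range(len(cols)) if delta_w_lattice(w, cols, t)])
# AB and XY both match starting at column 0; the language model must choose between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phonetic Input Decoding",
"sec_num": "4.3"
},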
{
"text": "In this paper the N-th order Ergodic Multigram IIMM is introduced, whose application enables integrated, iterative language model training on nntagged and unsegmented corpus in languages such as Chinese. The pertbrmanee on higher order models are expected to be better as the size of training corpus is relatively large. Itowever some form of smoothing may have to be applied when the training corpus size is small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5"
},
{
"text": "With some moditication this algorithm would work on phoneme candidate input instead of character candidate input. This is useful in decoding phonetic strings without character boundaries, such as in continuous Chinese~Japanese~Korean phonetic inpnt, or speech recognizers which output phonemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5"
},
{
"text": "This model also makes a wealth of techniqnes developed for HMM in the speech recognition field available for language modeling in these languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Class-Based ngram Models of Natural Language",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "P",
"middle": [
"V"
],
"last": "Desouza",
"suffix": ""
},
{
"first": ".",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Compulalional Linguistics",
"volume": "18",
"issue": "",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P.F., deSouza, P.V., Mercer, 11..L., Della Pietra, V.J., Lai, J.C. 1992. Class-Based n- gram Models of Natural Language. In Compu- lalional Linguistics, 18:467-479.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Study on Integrating Chinese Word Segmentation and l)~rt -of-Speech Tagging",
"authors": [
{
"first": "C",
"middle": [
"Ii"
],
"last": "Chang",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Chart",
"suffix": ""
}
],
"year": 1993,
"venue": "Comm. of COLIP",
"volume": "5",
"issue": "1",
"pages": "69--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang, C.II., Chart, C.1). 1993. A Study on Inte- grating Chinese Word Segmentation and l)~rt - of-Speech Tagging. In Comm. of COLIP,5', Vol 3, No. I, pp.69-77.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Chinese Knowledge lntbrmation Group 1!)!)3",
"authors": [],
"year": null,
"venue": "Technical Report No. 93-05. [nstitul.e of lnt'of mation Science",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chinese Knowledge lntbrmation Group 1!)!)3. In Technical Report No. 93-05. [nstitul.e of lnt'of mation Science, Academia Sinica, 'l'aiwan.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A PracticM I)ar|,-of-Sl)Cech Tagger",
"authors": [
{
"first": "D",
"middle": [],
"last": "Cutting",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kupiec",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Sibun",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceeding,s of the Third Confercu.cc on Appli(:d Natural Language Procc.s,sin9",
"volume": "",
"issue": "",
"pages": "133--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cutting, K., Kupic(', J., l)cdcrs(:n, J., Sibun, P. 1992. A PracticM I)ar|,-of-Sl)Cech Tagger. In Proceeding,s of the Third Confercu.cc on Appli(:d Natural Language Procc.s,sin9, pp. 133-140.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "t9i)5, l,~mgu;~g('. Modeling by Vt~ritd)le Length S(;quenccs: Thcor('.tical Formul~ttion ~md Evahmtion of Multigrams",
"authors": [
{
"first": "S",
"middle": [],
"last": "I)clignc",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Bimbot",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "169--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I)clignc, S., Bimbot, F. t9i)5, l,~mgu;~g('. Model- ing by Vt~ritd)le Length S(;quenccs: Thcor('.tical Formul~ttion ~md Evahmtion of Multigrams. In 1CAb'5'P 95, Pl). 169-172.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Markov Modeling of Mmldarin C'hincsc for decoding the phonc~ic sequence into Chinese ch;~r~cl.(ws",
"authors": [
{
"first": "Ii",
"middle": [
"Y"
],
"last": "Gu",
"suffix": ""
},
{
"first": "C",
"middle": [
"Y"
],
"last": "Tscng",
"suffix": ""
},
{
"first": "L",
"middle": [
"S"
],
"last": "Lee",
"suffix": ""
}
],
"year": 1991,
"venue": "Uompuler ,5'pooch and Language",
"volume": "5",
"issue": "",
"pages": "363--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gu, II.Y., Tscng, C.Y., l,cc, I,.S. 1991. Markov Modeling of Mmldarin C'hincsc for decoding the phonc~ic sequence into Chinese ch;~r~cl.(ws. In Uompuler ,5'pooch and Language, Vol 5, pl).363- 377.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ergodic t/iddcn Markov Models trod Polygr~ms for I,anguage Modeling",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Nicmann",
"suffix": ""
},
{
"first": "E",
"middle": [
"G"
],
"last": "Schukat \u2022 Tm~tmazz-Ini",
"suffix": ""
}
],
"year": 1994,
"venue": "ICA,gSP 94",
"volume": "",
"issue": "",
"pages": "357--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuhn, T., Nicmann, H., Schukat \u2022 TM~tmazz- ini, E.G. 1994. Ergodic t/iddcn Markov Mod- els trod Polygr~ms for I,anguage Modeling. In ICA,gSP 94, pp.357-360.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multigrotto IIMM Integrating Word Segmc'ntal, iou ;rod Class Tagging for (Jhinesc I,mlguagc Mode]ing",
"authors": [
{
"first": "H",
"middle": [
"H",
"C"
],
"last": "Law",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L~tw, t[.II.C., Chan, (3. 1996. Ergodi(\" Multi- grotto IIMM Integrating Word Segmc'ntal, iou ;rod Class Tagging for (Jhinesc I,mlguagc Mod- e]ing. 'Fo appear in 1CAHS'I ~ 95.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Stochastic ,]ap~mcs(~ Morphok)gical AnMyzcr Using ~ l,'orwa.rd-l)P B~L<;kwa.rd-A* N-Best Sear<:h Algorithm",
"authors": [
{
"first": "M",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 1994,
"venue": "COL1NG 94, I)1)",
"volume": "",
"issue": "",
"pages": "201--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Na.gata, M. 1994. A Stochastic ,]ap~mcs(~ Morphok)gical AnMyzcr Using ~ l,'orwa.rd-l)P B~L<;kwa.rd-A* N-Best Sear<:h Algorithm. In COL1NG 94, I)1).201-207.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"text": "t, ion of the classes of the most recelfl; N words, iuctlading the current, word. I,et Qi represent a stal,(~ of the N-th order Ergo(lit Multigraul IIMM. Thus Qi = ((%...ci~_,)",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "The above constraint ensures that given a state Q, : ,(c!\u00b0),o . cl :, 1)) it can only transit toQi = (c5~),br(\u00b0)(c~))''' J-(N-2)(c~N--~u)))where c~\u00b0 ) is any state in C (\u00b0). Thus reducing to the maximum number of possible transitions to L(\u00b0)2L0) ... L(N-1).",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "{\u00a3} be the set of all possible segmentations and class taggings of a sentence. Under the Nth order model (.)N, the likelihood of each valid segmentation and tagging 12 of the sentence s T, /~(8T, ~[oN), can be derived as follows. P(w,, c** ; w=, c~= ;... ; Wg, e~,,. IO N) = P(W 1 ]Cll )P(cl 1 I$N)P($MK... el.~_,,,+, ) \u00d7 K ([Ik:= P(W~]Clk )P(clk IC~*-1 \" \" \" elk_N)) = P(w~lc,,)P(O,,lSN)p($lO,K) \u00d7 K ([Ik=u P(w~lclk)P(Ql~ IQ,k-~))",
"num": null,
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"text": "As in conventional HMM, the Ergodic Multigram HMM consists of parameters E) N ~--{A, B}, in which A = {aij], 0 < i,j <_ LQ (Total number of states), denotes the set of state transition probabilities from Qi to Qi, i.e. P(Q31Qi). In particular, a0i = P(Qi[$ N) and ai0 = P($]Qi) denote the probabilities that the state Qi is the initial and final state in traversing the HMM, respectively, a00 is left undefined. H = {bj(w~)],where 1 < j < L Q, denotes the set of word observation probabilities of wk at the state Qj, i.e.P(wk]Qj).The B matrix, as shown above, models the probabilities that wk is observed given N most recent classes, and contains LQ[W] parameters (recall that LQ = L(\u00b0)L(1)... L(N-1)). Our ~assumption that wk only depends on the current class reduces the number of parameters to L(\u00b0)]W[ for the /3 matrix. Thus in the model, bj(wk) representingP(Wk[Qj) are tied together for all states Qj with the same current word-class, i.e. P(wklOj) = P(welc,) if 03 = (c,...). Also, aij is 0 if Qi cannot transit to Qj. As a resul~ the number of parameters in the A matrix is only L(\u00b0)LQ.",
"num": null,
"uris": null
},
"FIGREF7": {
"type_str": "figure",
"text": "c~.r(i)aio /~'Tt(i) To derive bj (w~:), first define ctt ~ (i) as the probability of the sentence prefix (sl \u2022 \u2022 . st) with 'wa, in state Qi as the last coml)lete word. Thus It 1,~ r=l i=l ( (): t--; ( i)aij bj ( w k )~w ('u)k , S tt--r + l ))",
"num": null,
"uris": null
},
"FIGREF8": {
"type_str": "figure",
"text": "fJt(j) 7~\"~(J) -p(8~'lO N)Let Qj o Qj, denot(;s the relation that both Qj and Qj, represent the s~me current word class.Thus summation of 71~k(j) ow:r t gives the e.xpetted munber of times that wk is observed in state Qj, and summation of 7t(J) over t gives the total expected number of occurrence of state Qj. Since states with the same current word class are tied together by our assumption, the required value of bj(wk) is given by E J' E~I ,./~ok(j,)",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "113.600 111.745 110.783 21M 116.376 11.1.275 109.282 108.1/12 Table 1: Test Set Perplexities of testing set after different iterations on subsets of training set This is not trivial since each Chinese syllable can correspond to up to 80 different characters.",
"num": null,
"html": null,
"content": "<table><tr><td>'Daining Size</td><td>2</td><td>d</td><td>6</td><td>8</td></tr><tr><td>98K</td><td colspan=\"4\">194.009 214.096 246.613 286.721</td></tr><tr><td>1.3M</td><td colspan=\"4\">126.084 122.304 121.606 121.776</td></tr><tr><td>6.3M</td><td>118.531</td><td/><td/><td/></tr><tr><td>).</td><td/><td/><td/><td/></tr></table>"
}
}
}
}