{
"paper_id": "C96-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:52:09.344678Z"
},
"title": "Chinese Word Segmentation based on Maximum Matching and Word Binding Force",
"authors": [
{
"first": "Pak-Kwong",
"middle": [],
"last": "Wong",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chorkin",
"middle": [],
"last": "Chan",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "DeI)artment of Computer Scien(;(~ The Univ(;rsil;y of Itong Kong l)okfulam ih)a,d thmg Kong pkwong((~cs.hku.hk and (:chan\u00a2~cs.hku.hk",
"pdf_parse": {
"paper_id": "C96-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "DeI)artment of Computer Scien(;(~ The Univ(;rsil;y of Itong Kong l)okfulam ih)a,d thmg Kong pkwong((~cs.hku.hk and (:chan\u00a2~cs.hku.hk",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A language model as a t)ost-processor is esse, ntial to a recognizer of speech or characters in order to determine the approi)riate word se, que, n(:e and henc.e the semantics of an inI)ut line of text or utterance. It is well known that an N-gram statistics language model is just as effective as, t)ut nmch more eificient than, a syntactk:/semantic analyser in determining the correct word sequence. A necessary condition to successflfl collection of N-gram statistics is the existence of a coInprehensive le, xicon and a large text corpus. The latter must tie lexically analysed in order to identify all the words, from which, N-gram statistics can be derived. About 5,000 characters are being used in modern Chinese and they are the building blocks of all wor(ls. Ahnost every character is a word and inost words are of one or two characters long but there are also abundant wor(ls longer than two characters. Before it; is seginented into words, a line of text is just a sequence of characters and there are numerous word segmentation alternatives. Usu-ally, all but one of these alternatives arc syntactically and/or semantically incorrect. This is l;he case because unlike texts in English, Chinese texl;s have no word nlarkers. A tirst step towmds buihting a language model based on N-gram statistics is to de, vek)p an etIMent lexical analyser to id(!ntify all the words in the, corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
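As a rough illustration of how such N-gram statistics might be collected and applied once the corpus has been segmented, the sketch below builds bigram counts from segmented sentences and scores a candidate word sequence; the function names and the add-alpha smoothing are assumptions made for the example, not the authors' method.

```python
from collections import Counter

def train_bigram_counts(segmented_corpus):
    """Collect unigram and bigram counts from a list of word-segmented sentences."""
    unigrams, bigrams = Counter(), Counter()
    for words in segmented_corpus:
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    return unigrams, bigrams

def bigram_score(words, unigrams, bigrams, alpha=1.0):
    """Relative likelihood of a word sequence under add-alpha smoothed bigram statistics."""
    vocab_size = max(len(unigrams), 1)
    score = 1.0
    for prev, cur in zip(words, words[1:]):
        score *= (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
    return score
```

A recognizer's post-processor could then rank its candidate word sequences by this score and keep the highest-scoring one.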
{
"text": "Word segmentation algorithlns behmg to one of two types ill general, viz., the structural (Wang et al., 1991) and the statistical type (Lua, 1990)(Lua and Gan, 1994)(Sproat and Shih, 1990) rt;spectively. A structural algorithm resolves segmentation mnbiguities by examining the structural rclationships between words, while a statistical algo-rithm compares the usage flequencies of the words and their ordered combinations inste, ad. Both approaches ln~ve serious liinitat;ions.",
"cite_spans": [
{
"start": 90,
"end": 109,
"text": "(Wang et al., 1991)",
"ref_id": null
},
{
"start": 155,
"end": 176,
"text": "Gan, 1994)(Sproat and",
"ref_id": null
},
{
"start": 177,
"end": 188,
"text": "Shih, 1990)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Maximum matching (l,iu et al., /994) is one of the most I)opular structural segmentation algorithms for Chinese texts. This method favours long words an(1 is a gree(ty algorithm by (lesign, hen(:e, suboptimal. Segmenl;ation may start from either end of the line without any difference in segmentation results. In this paper, the forward direction is adopted. The major advantage of inaximum matching is its etHciency while its segmentation accuracy can be expected to lie around 95%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Matching Method for Segmentation",
"sec_num": null
},
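To make the greedy strategy concrete, here is a minimal sketch of forward maximum matching against a plain set of lexicon entries; the set-based lookup and the `max_word_len` cap are simplifications assumed for illustration, not the paper's bin-table lexicon.

```python
def forward_max_match(text, lexicon, max_word_len=4):
    """Greedy forward maximum matching: at each position take the longest word in the lexicon."""
    words, i = [], 0
    while i < len(text):
        for length in range(min(max_word_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in lexicon:
                # Fall back to a single character when no longer word matches.
                words.append(candidate)
                i += length
                break
    return words

# Toy example of the greedy pitfall noted above:
# forward_max_match("研究生命起源", {"研究生", "研究", "生命", "起源"})
# returns ['研究生', '命', '起源'] instead of the intended ['研究', '生命', '起源'].
```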
{
"text": "In this statistical approach in terms of word frequencies, a lexicon needs not only a rich repertoire of word entries, lint also the usage frequency of e, ach word. To segment a line of text, each possible segmentation alternative is ewduated according to the product of the word fi'equencies of the words Seglnented. The word sequence, with the highest fi'equency product is accepted a.s correct. This method is simple but its a(:curacy (h,,lmnds heavily on the accuracy of the usage fi'equencies. \u2022 Any word longer than 4 characters will be divided into a 2-character prefix, a 2-character infix and a suffix. The prefix and tile infix are stored in the bin table for 2-character words, with clear indications of their status. Each prefix points to a linked list of associated infixes and each infix in turn, points to a linked list of associated suffixes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency Method for Segmentation",
"sec_num": "3"
},
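As a rough sketch of the frequency-product criterion (not the paper's code), the fragment below enumerates the segmentations of a short string over a toy frequency table and keeps the one whose word-frequency product is largest; the exhaustive enumeration is only workable for toy inputs.

```python
def all_segmentations(text, freq):
    """Enumerate every segmentation of `text` into words listed in `freq` (exponential; toy sizes only)."""
    if not text:
        return [[]]
    results = []
    for end in range(1, len(text) + 1):
        word = text[:end]
        if word in freq:
            results.extend([word] + rest for rest in all_segmentations(text[end:], freq))
    return results

def best_by_frequency_product(text, freq):
    """Return the segmentation whose product of word usage frequencies is highest."""
    def product(words):
        p = 1
        for w in words:
            p *= freq[w]
        return p
    candidates = all_segmentations(text, freq)
    return max(candidates, key=product) if candidates else None
```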
{
"text": "Maximum matching segmentation of a sequence of characters \"...abcdefghij.. 2' at the character \"a\" starts with matching \"ab\" against the 2-character words table. If no match is found, then, \"a\" is assumed a 1-character word and maximum matching moves on to \"b\". If a match is found, then, \"ab\" is investigated to see if it can be a prefix. If it cannot, then \"ab\" is a 2-character word and maximum matching moves on to \"c\". If it can, then one examines if it can be associated with an infix. If it can, then one examines if \"cd\" can be an infix associated with \"ab\". If the answer is negative, then the possibility of \"abed\" being a word is considered. If that fails again, then \"c\" in the table of 1-character words is examined to see if it can be a suffix. If it; can, then \"abe\" will be examined to see if can be a word by searching the 1-chara(q;er suffix linked list pointed at by \"ab\". Otherwise, one has to accept that \"ab\" is a 2-character word and moves on to start Inatching at \"c\". If \"cd\" can be an infix preceded by \"ab\", the linked list pointed at; by \"cd\" as an infix will be searched for the longest possible sutfix to combine with \"abed\" as its prefix. If no match can be found, then one has to give up \"cd\" as an infix to \"ab':.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Frequency Method for Segmentation",
"sec_num": "3"
},
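The lookup order just described can be summarized in a short sketch. The dictionary-of-sets layout and the table names below are assumptions made for illustration; the paper's lexicon actually uses hashed bin tables with linked lists, and the single-character fallback is handled by the surrounding matching loop.

```python
def match_at(text, i, tables):
    """Return the word matched at position i, following (roughly) the lookup order described above.

    `tables` is a plain dict-of-sets stand-in for the paper's bin tables:
      tables["two_char"]     -- 2-character words (prefixes of longer words are flagged entries here)
      tables["three_suffix"] -- prefix -> 1-character suffixes forming 3-character words
      tables["four_char"]    -- 4-character words
      tables["infixes"]      -- prefix -> 2-character infixes of words longer than 4 characters
      tables["long_suffix"]  -- (prefix, infix) -> suffixes completing those long words
    """
    ab = text[i:i + 2]
    if len(ab) < 2 or ab not in tables["two_char"]:
        return text[i]                                  # "a" stands alone as a 1-character word
    cd = text[i + 2:i + 4]
    if cd in tables["infixes"].get(ab, set()):
        # Longest possible suffix first, as the text prescribes.
        for suf in sorted(tables["long_suffix"].get((ab, cd), set()), key=len, reverse=True):
            if text[i + 4:i + 4 + len(suf)] == suf:
                return ab + cd + suf                    # word longer than 4 characters
    if len(cd) == 2 and ab + cd in tables["four_char"]:
        return ab + cd                                  # 4-character word "abcd"
    c = text[i + 2:i + 3]
    if c in tables["three_suffix"].get(ab, set()):
        return ab + c                                   # 3-character word "abc"
    return ab                                           # accept "ab" as a 2-character word
```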
{
"text": "Despite the fact thai; the lexicon acquired from Taiwan has been augmented with words fl'om another lexicon developed in China, when it is applied to segment 1.2 million chm'acter news passages in blocks of 10,000 characters each randomly selected over the text corpus, an average word seginentation error rate (IZ) of 2.51% was found with a standard deviation (c,) of 0.57%, mostly caused by uncommon words not included in the enriched lexicon. Then it is decided that the lexicon should be fllrther enriched with new words and adjusted word binding forces over a number of generations. In generation i, n new blocks of text are picked randomly from the corpus and words segmented using the lexicon enriched in the previous generation. This process will stop when I* levels off over several generations. The 100(1 -a)% confidence interval of t* in generation i is :tzto.a~,~,__l~r/v~ where a is the standard deviation of error rates in generation i-1, and n is the number of blocks to be segmented in generation i. to.5~,n-1 is the density function of (0.5a, n -1) degrees of freedom (Devore, 1991) . Throughout the experiments below, n is always chosen to be 20 so that the 90% confidence interval (i.e., (t = 0.1) of t z is about :k0.23%.",
"cite_spans": [
{
"start": 1085,
"end": 1099,
"text": "(Devore, 1991)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training of the System",
"sec_num": "7"
},
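As a quick check of the interval quoted above (not taken from the paper), the half-width ±t_{0.5α,n−1}·σ/√n can be computed directly; with n = 20, α = 0.1 and σ = 0.57 it comes out near the ±0.23% the authors report.

```python
from math import sqrt
from scipy.stats import t

def ci_half_width(sigma, n, alpha=0.1):
    """Half-width of the 100(1-alpha)% confidence interval of the mean error rate."""
    return t.ppf(1 - alpha / 2, df=n - 1) * sigma / sqrt(n)

print(ci_half_width(sigma=0.57, n=20))   # ~0.22 percentage points
```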
{
"text": "The lexicon has been updated over six generations after being applied to word segment 1.2 million characters. Tile vocabulary increases from 85855 words to 87326 words. The segmentation error rates over seven generations of the training process are shown in the Most of these errors occur in proper nouns not included in the lexicon. They are hard to avoid unless they become l)opular enough to be added to the lexicon. The CPU time used for segmenting a text; of 1,200,000 characters is 5.7 seconds on an IBM I{ISC System/6000 3BT computer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "Lexical analysis is a basic process of analyzing and understanding a language. The proposed algorithm provides a highly accurate and highly efficient way for word segmentation of Chinese texts. Due to cultural differences, tile same language used in different geographical regions and difl'crent applications can be quite diffferent causing problems in lexical analysis. However, by introducing new words into and adjusting word binding threes in the lexicon, such difficulties can be greatly mitigated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "This word segmentor will be applied to word segment the entire corpus of 63 million characters before N-gram statistics will be collected for postprocessing recognizer outputs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Probability and Statistics for Engineering and Sciences",
"authors": [
{
"first": "Jay",
"middle": [
"L"
],
"last": "Devore",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "272--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay L. Devore. 1991. Probability and Statistics for Engineering and Sciences. Du:rbury Press, pages 272 276.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Word Segmentation Rules and Automatic Word Segmentation Methods for Chinese Information Processing (in Chinese)",
"authors": [
{
"first": "Ynan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Kun Xu",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 1994,
"venue": "Qing Hua University Press and Guang Xi Science and Tee]tnology Press",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ynan Liu, Qiang Tan, and Kun Xu Shen. 1994. The Word Segmentation Rules and Automatic Word Segmentation Methods for Chinese Infor- mation Processing (in Chinese). Qing Hua Uni- versity Press and Guang Xi Science and Tee]t- nology Press, page 36.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An Applicat;ion of ]nibrmal;ion Theory in (]hincso. Word Segmental:ion. Comp'ttter l'~'oce,ssi'lzg of Ch, incsc and Or'ienlal Languages",
"authors": [
{
"first": "Kim-Teng Lua Mid Kok-Wcc",
"middle": [],
"last": "Gan",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim-Teng Lua mid Kok-Wcc Gan. 1994. An Applicat;ion of ]nibrmal;ion Theory in (]hincso. Word Segmental:ion. Comp'ttter l'~'oce,,ssi'lzg of Ch, incsc and Or'ienlal Languages, Vol. 8, No. 1, pages 115 123, 2unc'.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "From Chm'aclx',r l;o Word An At)plication of hfformai;ion Theory",
"authors": [
{
"first": "K",
"middle": [
"T"
],
"last": "",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Proccssin 9 of Chinese and Oriental Languages",
"volume": "4",
"issue": "",
"pages": "304--313",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.T. lma. 1990. From Chm'aclx',r l;o Word An At)plication of hfformai;ion Theory. Computer Proccssin 9 of Chinese and Oriental Languages, Vol. 4, No. 4, pages 304 313, March.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Parsing Method for hh',ntit~ying Words in Mandarin Chines(, S('m~(,am(,~s",
"authors": [
{
"first": "Limtg-Jyh",
"middle": [],
"last": "Wm~g",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tzusheng L'ei",
"suffix": ""
},
{
"first": "I",
"middle": [
"A"
],
"last": "Wci-(]huan",
"suffix": ""
},
{
"first": "Lih-Ching",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1991,
"venue": "Conference on Artificial httelli gcncc, pages 1018 1.023~ l)~rrling IIarl)our, Sydney",
"volume": "",
"issue": "",
"pages": "24--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Limtg-Jyh Wm~g, Tzusheng l'ei, Wci-(]huan IA, and Lih-Ching 11,. Ilmmg. 1991. A Parsing Method for hh',ntit~ying Words in Mandarin Chi- nes(,, S('m~(,am(,~s. In l'roccssiugs of 121,h lnt(:> 'national ,loin/, Conference on Artificial httelli gcncc, pages 1018 1.023~ l)~rrling IIarl)our, Syd- ney, Austr~dia, 24-30 August.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A St, adsdeal Method for Finding Word Boundaries in Chinese Text. Computer l'rocessin.q of Uhinese and Oriental Lo, n.q,tta.qes",
"authors": [
{
"first": "",
"middle": [],
"last": "L{ictmrd Sproat ~md Chilin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "4",
"issue": "",
"pages": "336--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "l{ictmrd Sproat ~md Chilin Shih. 1990. A St, ads- deal Method for Finding Word Boundaries in Chinese Text. Computer l'rocessin.q of Uhinese and Oriental Lo, n.q,tta.qes, Vol. 4, No. 4, pages 336 349, Mm'ch.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"content": "<table><tr><td>one tytm (t[ do(:umcnts to a.noth(n', say, a l)assag(:</td><td/><td/><td/></tr><tr><td>of world news as a,ga,hlsl; a, t(',(:hnical r(~,port. Sil~c('~</td><td/><td/><td/></tr><tr><td>(,here, a.r('. I;(uls o[ l,h(/llsa.Ii(ls o[ words a,cl;ively us('.(l,</td><td/><td/><td/></tr><tr><td>Oil(', nc(;ds a giganti(: (:oll(~(:ti(ll: ()f texts to mak(~</td><td/><td/><td/></tr><tr><td>;m a,(',(:urat,(~ estimal;(~, lint t)y t;h(~.u, the (~stimat;(~</td><td colspan=\"3\">In this l'(!Sl)(:(;l; , the pr()l&gt;()s(;(1 algoritlun is a, sta.tis-</td></tr><tr><td>is jusl, an averag(~ a,n(l it; ma.y not; t)(,, suital)le for</td><td colspan=\"3\">ti(:al apl)roach. It is as (,,tti(:i(:nt as tim maximum</td></tr><tr><td>any tyt)(! (/[ (h)(:mn(mt at all. /n oth(n' words, 1,}m</td><td colspan=\"3\">lna.tching moth(/(1 I)(~(;aus(! wor(l binding f()r(:(!s ;u'(~</td></tr><tr><td>variml(:(~ of ml(:h &amp;ll (~st;illl~tl;(~ is to() great making</td><td colspan=\"3\">utilized only in (,x(:(~pti(mal cases, th)w(wer, much</td></tr><tr><td>I;h(; (~stiirlat(! listless.</td><td colspan=\"3\">of the word amt/iguities are climilmt(~d, h~a(ling</td></tr><tr><td/><td colspan=\"3\">to a vc'ry high word identification accuracy. S('g-</td></tr><tr><td>4 The Lexicon</td><td colspan=\"3\">m(:ntation errors ass(/ciat('d with multi-cha.ract(,,r</td></tr><tr><td/><td colspan=\"3\">words can 11(: r(~(h:(:cd 1)y adding or (leh~ting woMs</td></tr><tr><td>Most Chines(; linguists ac(',(;1)t the (h',:linition of a</td><td colspan=\"3\">to or from the h',xi(:on as well as adjusting word</td></tr><tr><td>wor(1 as thc minimum unit tha,t is scmanticMly</td><td colspan=\"2\">t)in(ling forces.</td><td/></tr><tr><td>(',omt/h',t(~ and (',all lie, I)Ut; tog('%her as t/uihting</td><td/><td/><td/></tr><tr><td>t)lo('ks to form a, sent(ulc(u llow(:vex, in Chines(:,</td><td>6</td><td>Structure</td><td>of the Lexicon</td></tr><tr><td/><td colspan=\"3\">Words in the h:xi(-(in are divided into 5 groups</td></tr><tr><td/><td colspan=\"3\">a, ccording to woM h;ngths. They corr(:spond to</td></tr><tr><td/><td colspan=\"3\">words ()t' l, 2, 3, 4, and more than 4 cha,ra(&gt;</td></tr><tr><td/><td colspan=\"3\">ters with group sizes equal t() 7025, 53532, 12939,</td></tr><tr><td/><td colspan=\"3\">11269, and 1090 rt;stmctively. Since iilOSt of tlw,</td></tr><tr><td>t: over 63</td><td/><td/><td/></tr><tr><td>million (:hara('.t(:rs o[ news lines was acquired [rom</td><td/><td/><td>ling a great ileal of</td></tr><tr><td>China. l)u(~ t(/ (:ultural difl'(:r(m(:(:s of tim two st)-</td><td colspan=\"3\">time s(,,arching for ram-existent; targets. To over-</td></tr><tr><td>(:i('t;ios, there arc many words en(:(nmt(~r(:(1 in th(:</td><td colspan=\"3\">come this problem, I;11(', following measur(',s arc,</td></tr><tr><td>(:()rpllS t)llt II()t in t:h(~ lexi(:on, rl'h(! lal;t(!r must</td><td colspan=\"3\">takc, n to organize tim h:xicon for fast s(:m'(:h:</td></tr><tr><td>t, heretbre lie em'ichcd 1)efor(~ it can 1)e a t)pli(:d 1:(/ t)(wt'orln the lexical a.nalysis. The tits( st, el/ t()-wa,r(ls this end is to merge a h~xi(:(m l/ut)lishcd in China into this one, in(:r(',asing the numt)(u' of word</td><td/><td colspan=\"2\">\u2022 All sinp;h; (:ha.la.c.t,(:r w()t'(t,q a,l.O, sI;or(;d ill ;-/ I;a-ble of 32768 bins. Since tilt; itll;Cl'llld cod(: Of a cha.rat'tcr takes 2 bytes, bits l-15 m'e used as th(! 
bin address for the, wor:l.</td></tr><tr><td/><td/><td colspan=\"2\">\u2022 All 2-charat't(',r words are stored ill a se, parat(;</td></tr><tr><td/><td/><td colspan=\"2\">tabh: of 655\"{6 bins. 'I'll(', two low order bytes</td></tr><tr><td>Segmentation Algorithm</td><td/><td colspan=\"2\">of the two (:hara(:ttn's arc used as a short iw t:(:g(',l\" for bin address. Should t]mrt~ be other</td></tr><tr><td>Tllc t)rot/os(:d algorithm of this t)al&gt;(:r makes use (t['</td><td/><td colspan=\"2\">words (:ont(',sting for the, same biu, they a, re</td></tr><tr><td>a f(/rward ma.ximmn matching st, ra.t(;gy to i(hultify</td><td/><td colspan=\"2\">kept in a linked list.</td></tr><tr><td>w()r([s, In this r(:sl)(~(:l; ~ this algorithm is a struc-tural atll)roa(:h. (hMer this sl;ratcgy, errors are, usually a.ssot;iated with singh',-(:haract(~r words, ill th('~ first (:hm'a,(:ter (if a litm is i(hmtili(~d ns a single-(:haract(~r word, what it nlcans is that; ther(~ is no multi-character word entry in the l(~xi(:on th;d; starts with such a chara(:tcr. In that case, there is not much on(, can do about it,. On the other hand, when a character is khmtifie, d as a single-cha.ra(:tcr word fl following another word (t in th(: line, one (:annot he, ltl wondca.'ing whether tim sole chm'acter</td><td/><td colspan=\"2\">\u2022 Any 3-ttha.ra, cl;t;r word is split into a 2-(',ha.ra,('.t(;r pre[ix and a, i[-chara(:ter sutlix. The prt!lix will tm si,ored in the bin tabh: for 2-(']lar~l(:t(',r words with (:lear indi(:ation of its l)rcfix st&amp;(liB. Thc Sill[ix will bc stored in the bin table for l-(:harac, t(:r words, again, wiLh clear indication of its suffix status. All (tut/li-(;ate entries are coral)trier1, i.e., if (~ is a word as well as a suflix, tilt; two entries arc com-bined into one with a,n indication that it; can serve as a word as well as a suffix.</td></tr><tr><td/><td/><td colspan=\"2\">The usage frequency of a word differs greatly from</td></tr></table>",
"text": "binding force of a. wor(l is a. rues.sure of how strongly the charact(',rs conll)osing th(,, word are bound t()g(~ther as a single unit;. This for(x: is oL ten equated to tim usage fr(~qu(mcy of the word.",
"html": null,
"type_str": "table"
}
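The bin-table organization recovered in the table text above can be illustrated with a small sketch; the class below is a toy simplification assumed for illustration (the real lexicon uses 2-byte internal character codes and flags prefix/suffix status), not the authors' implementation.

```python
class BinLexicon:
    """Toy sketch of the bin tables: hash by character byte codes, chain collisions per bin."""

    def __init__(self):
        self.one_char_bins = [[] for _ in range(32768)]   # addressed by bits 1-15 of the 2-byte code
        self.two_char_bins = [[] for _ in range(65536)]   # addressed by the low byte of each character

    @staticmethod
    def _code(ch):
        # Stand-in for the original 2-byte internal code; the Unicode code point is
        # simply truncated to 16 bits here.
        return ord(ch) & 0xFFFF

    def add(self, word):
        if len(word) == 1:
            self.one_char_bins[self._code(word) >> 1].append(word)          # keep bits 1-15
        elif len(word) == 2:
            addr = ((self._code(word[0]) & 0xFF) << 8) | (self._code(word[1]) & 0xFF)
            self.two_char_bins[addr].append(word)
        # Longer words would be decomposed into prefix/infix/suffix entries (omitted here).

    def contains(self, word):
        if len(word) == 1:
            return word in self.one_char_bins[self._code(word) >> 1]
        if len(word) == 2:
            addr = ((self._code(word[0]) & 0xFF) << 8) | (self._code(word[1]) & 0xFF)
            return word in self.two_char_bins[addr]
        return False
```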
}
}
}