{
"paper_id": "C94-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:49:01.037318Z"
},
"title": "SE(IMENTING A SENTENf,I\u00a2 INTO MOItl)IIEM1,;S USING STNI'ISTIC INFOI{MATION BI,TFWEEN WORI)S",
"authors": [
{
"first": "Shiho",
"middle": [],
"last": "Nobesawa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Keio University",
"location": {
"addrLine": "N;tkanishi L;d"
}
},
"email": ""
},
{
"first": "Junya",
"middle": [],
"last": "Tsutsumi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Keio University",
"location": {
"addrLine": "N;tkanishi L;d"
}
},
"email": ""
},
{
"first": "Tomoaki",
"middle": [],
"last": "Nitta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Keio University",
"location": {
"addrLine": "N;tkanishi L;d"
}
},
"email": ""
},
{
"first": "Kotaro",
"middle": [],
"last": "One",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Keio University",
"location": {
"addrLine": "N;tkanishi L;d"
}
},
"email": ""
},
{
"first": "Da",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Keio University",
"location": {
"addrLine": "N;tkanishi L;d"
}
},
"email": ""
},
{
"first": "M~lsakazu",
"middle": [],
"last": "Nakanishi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Keio University",
"location": {
"addrLine": "N;tkanishi L;d"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper is on dividing non-separated language sentences (whose words are not separated from each other with a space or other separaters) into morphemes using statistical information, not grammatical information which is often used in NLP. In this paper we describe our method and experimental result on Japanese and Chinese se~,tences. As will be seen in the body of this paper, the result shows that this systent is etlicient for most of tile sentences.",
"pdf_parse": {
"paper_id": "C94-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper is on dividing non-separated language sentences (whose words are not separated from each other with a space or other separaters) into morphemes using statistical information, not grammatical information which is often used in NLP. In this paper we describe our method and experimental result on Japanese and Chinese se~,tences. As will be seen in the body of this paper, the result shows that this systent is etlicient for most of tile sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "An English sentence has several words and those words are separated with a space, it is e~usy to divide an English sentence into words. I[owever a a apalmse sentence needs parsing if you want to pick up the words in the sentence. This paper is on dividing non-separated language sentences into words(morphemes) without using any grammatical information. Instead, this system uses the statistic information between morphenws to select best ways of segmenting sentences in nonseparated languages. Thinldng about segmenting a sentence into pieces, it is not very hard to divide a sentence using a certain dictionary for that. The problem is how to decide which 'segmentation' the t)est answer is. For examl)le , there must be several ways of segmenting a Japanese sentence written in lliragana(Jal)a,lese alphabet). Maybe a lot more than 'several'. So, to make the segmenting system useful, we have to cot> sider how to pick up the right segmented sentences from all the possible seems-like-scgrne, nted sentences,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION AND MOTIVATION",
"sec_num": "1"
},
{
"text": "This system is to use statistical inforn,ation between morphemes to see how 'sentence-like'(how 'likely' to happen a.s a sentence) the se.gmented string is. To get the statistical association between words, mutual information(MI) comes to be one of the most interesting method. In this paper MI is used to calculate the relationship betwee.n words found ill the given sentence. A corpus of sentences is used to gain the MI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION AND MOTIVATION",
"sec_num": "1"
},
{
"text": "'Fo implement this method, we iml)lemented a system MSS(Morphological Segmentation using Statistical information). What MSS does is to find the best way of segmenting a non-separated language, sentence into morphemes without depending on granamatieal information. We can apply this system to many languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION AND MOTIVATION",
"sec_num": "1"
},
{
"text": ")/[ORPHOLOGICAL ANALYSIS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "~2",
"sec_num": null
},
{
"text": "A morpheme is the smallest refit of a string of characters which has a certain linguistic l/leaning itself. It includes both content words and flmction words, in this l)aper the definition of a morl)heme is a string of characters which is looked u I) in tile dictionary. Morphoh)gical analysis is to: l) recognize the smallest units making up tile given sentellce if the sentence is of a l|on-separated hmguage, divide the sentence into morphenms (automatic segmentation), and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What; a Morphological Analysis Is",
"sec_num": "2.1"
},
{
"text": "2) check the morlflmmes whether they are the right units to make up the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What; a Morphological Analysis Is",
"sec_num": "2.1"
},
{
"text": "We have some ways to segment a non-separated sentence into meaningflll morphemes. These three methods exl)lained below are the most popular ones to segment ,I apanese sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmenting Methods",
"sec_num": "2.2"
},
{
"text": "\u2022 The longest-sc'gment method: l~,ead the given sentence fi'om left to right and cut it with longest l)ossible segment. For exampie, if we get 'isheohl' first we look for segments wilich uses the/irst few lette,'s in it,'i' and 'is'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmenting Methods",
"sec_num": "2.2"
},
{
"text": "it is ol)vious that 'i';' is loIlger thall 'i', SO tile system takes 'is' as the segment. Then it tries the s;tllle method to find the segnlents in 'heold' and tinds 'he' and 'old'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmenting Methods",
"sec_num": "2.2"
},
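A minimal sketch of the longest-segment method just described, assuming a toy dictionary of romanized segments (the paper's experiments use Hiragana strings; the dictionary contents here are hypothetical):

```python
def longest_segment(sentence: str, dictionary: set) -> list:
    """Greedy left-to-right segmentation: at each position, take the
    longest dictionary entry that matches, as in the 'isheold' example."""
    segments = []
    i = 0
    while i < len(sentence):
        # Try the longest candidate first, then shorter ones.
        for j in range(len(sentence), i, -1):
            if sentence[i:j] in dictionary:
                segments.append(sentence[i:j])
                i = j
                break
        else:
            raise ValueError(f"no dictionary entry matches at position {i}")
    return segments

print(longest_segment("isheold", {"i", "is", "he", "old"}))  # ['is', 'he', 'old']
```

Note that the greedy choice can dead-end where an exhaustive search would not; MSS avoids this by enumerating every possible segmentation instead.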
{
"text": "The, least-bunsetsu segmenting m(',thod:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmenting Methods",
"sec_num": "2.2"
},
{
"text": "Get all the possible segmentations of the input sentence and choose the segmentation(s) which has least buusetsu in it.. 'l'his method is to seg:ment Japanese sentence.s, which have content words anti function words together in one bunsetsu most of the time. This method helps not to cut a se, ntenee into too small meaningless pieces.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmenting Methods",
"sec_num": "2.2"
},
{
"text": "Lettm'-tyl)e, segmenting method:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmenting Methods",
"sec_num": "2.2"
},
{
"text": "In Japanese language we have three kinds of letters called Iliragana, Katakana and Kanji. This method divides a Japanese sentence into meaningful segments checking the type of letters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmenting Methods",
"sec_num": "2.2"
},
{
"text": "When we translate an English sentence into another language, the easiest way is to change the words in the sentence into the corresponded words in the target language. It is not a very hard job. All we have to do is to look up the words in the dictionary, flowever when it comes to a non-separated language, it is not as simple. An non-separated language does not show the segments included in a sentence. For example, a Japanese sentence does not have any space between words. A Japanese-speaking person can divide a Japanese sentence into words very easily, however, without arty knowledge in Japanese it is impossible. When we want a machine to translate an non-separated language into another language, first we need to segment the given sentence into words. Japanese is not the only language which needs the morphological segmentation. For example, Chinese and Korean are non-separated too. We can apply this MSS system to those languages too, with very simple preparation. We do not have to change the system, just prepare the corpus for the purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Necessity of Morphological Analysis",
"sec_num": "2.3"
},
{
"text": "The biggest problems through the segmentation of an non-separated language sentence are the ambiguity and unknown words. , and each Kanji letters has its own meanings. We can put several Kanji letters to one lliragana word. This makes morphological analysis of Japanese sentence very difficult. A Japanese sentence can have more than one morphological segmentation and it is not easy to figure out which one makes sense. Even two or nlore seglnentation can be 'correct' lbr one sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of Morphological Analysis",
"sec_num": "2.4"
},
{
"text": "To get the right segmentation of a sentence one may need not only morphological analysis but also semantic analysis or grammatical parsing. In this paper no grammatical information is used arid MI between morphemes becomes the key to solve this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of Morphological Analysis",
"sec_num": "2.4"
},
{
"text": "rio deal with unknown words is a big problem in natural language processing(NLP) too. To recognize unknown segments in tim sentences, we have to discuss the likelihood of tim unknown segment being a linguistic word. In this pal)er unknown words are not acceptable as a 'morpheme'. We define that 'morpheme' is a string of characters which is registered in the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of Morphological Analysis",
"sec_num": "2.4"
},
{
"text": "CALCULATING TIlE SCORES OF SENTENCES",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3",
"sec_num": null
},
{
"text": "When the system searches the ways to divide a sentence into morphemes, more than one segmentation come out most of the time. What we want is one (or more) 'correct' segmeutation and we do not need any other possibilities. If there arc many ways of seg-,nenting, we need to select the best one of them. For that purpose the system introduced the 'scores of sentences'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scores of Sentences",
"sec_num": "3.1"
},
{
"text": "A mutual information(MI)[1][2][3]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information",
"sec_num": "3.2"
},
{
"text": "is tile information of the ~ussociation of several things. When it comes to NLI', M I is used I.o see the relationship between two (or more) certain words. The expression below shows the definition of the MI for NI, P:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information",
"sec_num": "3.2"
},
{
"text": "MI(w_1; w_2) = log [ P(w_1, w_2) / ( P(w_1) P(w_2) ) ]   (1), where w_i is a word, P(w_i) is the probability that w_i appears in a corpus, and P(w_1, w_2) is the probability that w_1 and w_2 come out together in a corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information",
"sec_num": "3.2"
},
{
"text": "Tiffs expression means that when wl and w.2 has a strong association between them, P(wt)P(w~) << P(wt,w2) i.e. MI(wl,w2) >> 0. When wl and w~ do not have any special association, P(w,)P(w.a) P(wl,w2) i.e. Ml(wl,'w2) ~ O. And wl,en wx and w2 come out together very rarely, P(wl)P(w2) >> ,'(~,,, ,,,~) i.e. M X(w,,,~,~) << 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information",
"sec_num": "3.2"
},
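As a concrete sketch of expression (1), the probabilities can be estimated by counting unigrams and adjacent pairs in an already-segmented corpus. The corpus format and the handling of zero counts below are assumptions, not specified by the paper:

```python
import math
from collections import Counter

def mutual_information(w1: str, w2: str, corpus: list) -> float:
    """MI(w1; w2) = log(P(w1, w2) / (P(w1) * P(w2))), estimated from
    a corpus given as a list of segmented sentences (lists of words)."""
    words = [w for sentence in corpus for w in sentence]
    unigrams = Counter(words)
    pairs = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
    p1 = unigrams[w1] / len(words)
    p2 = unigrams[w2] / len(words)
    p12 = pairs[(w1, w2)] / max(sum(pairs.values()), 1)
    if p1 == 0 or p2 == 0 or p12 == 0:
        return float("-inf")  # no evidence of any association
    return math.log(p12 / (p1 * p2))

corpus = [["he", "is", "tom"], ["he", "is", "old"]]
print(mutual_information("he", "is", corpus) > 0)  # True: strong association
```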
{
"text": "Using the words in the given dictionary, it is easy to make up a 'sentence'. llowever, it is hard to consider whether the 'sentence' is a correct one or not. The meaning of 'correct sentence' is a sentence which makes sense. For example, 'I am Tom.' can make sense, however, 'Green the adzabak arc the a ran four.' is hardly took ms a meaningful sentence. 'Fhe score is to show how 'sentence-like' the given string of morphemes is. Segmenting ~t non-sel)arated language sentence, we often get a lot of meaningless strings of morphemes. To pick up secms-likc-mea,fingfid strings from the segmentations, we use MI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating the Score of a Sentence",
"sec_num": null
},
{
"text": "Actually what we use in tim calculation is not l, he real MI described in section 3.2. The MI expression in section 3.2 introduced the bigrams. A bigram is a possibility of having two certain words together in a corpus, as you see in the expression(l). Instead of the bigram we use a new method named d-bigram here in this paper [3] .",
"cite_spans": [
{
"start": 329,
"end": 332,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Calculating the Score of a Sentence",
"sec_num": null
},
{
"text": "The idea of bigrams and trigraiT~s are often used in the studies on NLP. A bigram is the information of the association between two certain words and a trigram is the information among three. We use a new idea named d-bigram in this paper [3] . A d-bigram is the possibility that two words wt and w2 come out together at a distance of d words in a corpus. For example, if we get 'he is Tom' as input sentence, we have three d-bigram data: ('he' 'is' 1) ('is' 'Tom' 1) ('he' 'Tom' 2) ('he' 'is' 1) means the information of the association of the two words 'tie' and 'is' appear at the distance of 1 word in the corpus. 2) Give a certain weight accordiug to the distance, d",
"cite_spans": [
{
"start": 239,
"end": 242,
"text": "[3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "D-bigram",
"sec_num": "3.3.1"
},
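A minimal sketch of d-bigram extraction as described above; the distance limit of 5 mirrors the setting reported for the Japanese experiment in table 2:

```python
def d_bigrams(words: list, max_distance: int = 5) -> list:
    """Return (w1, w2, d) triples for every pair of words in the
    sentence that are at most max_distance words apart."""
    return [(words[i], words[i + d], d)
            for i in range(len(words))
            for d in range(1, max_distance + 1)
            if i + d < len(words)]

print(d_bigrams(["he", "is", "Tom"]))
# [('he', 'is', 1), ('he', 'Tom', 2), ('is', 'Tom', 1)]
```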
{
"text": "to all those Mid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.4",
"sec_num": null
},
{
"text": "3) Sum up those 3~7~. The sum is the score of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.4",
"sec_num": null
},
{
"text": "Church and llanks said in their pN)er [1] that the information between l.wo remote wo,'ds h~s less meaning in a sentence when it comes to the semantic analysis. According to the idea we l)ut d 2 in the expression so that nearer pair can be more effective in calculating the score of the sentence.",
"cite_spans": [
{
"start": 38,
"end": 41,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3.4",
"sec_num": null
},
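Putting steps 1) to 3) together with the d^2 weighting, a sketch of the sentence score follows. The MI_d lookup and the -5.0 value standing in for the paper's unspecified minus score for unseen pairs are assumptions:

```python
def sentence_score(words, mi_d, max_distance=5):
    """Sum MI_d over every pair in the sentence, down-weighted by d**2
    so that nearer pairs contribute more (expression (3))."""
    return sum(mi_d(words[i], words[i + d], d) / (d * d)
               for i in range(len(words))
               for d in range(1, max_distance + 1)
               if i + d < len(words))

# Toy MI_d table; unseen pairs get a penalty (assumed value).
table = {("he", "is", 1): 1.5, ("is", "Tom", 1): 1.2, ("he", "Tom", 2): 0.8}
mi = lambda w1, w2, d: table.get((w1, w2, d), -5.0)
print(sentence_score(["he", "is", "Tom"], mi))  # 1.5 + 1.2 + 0.8/4 = 2.9
```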
{
"text": "Tns SYSTSM MSS",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4",
"sec_num": null
},
{
"text": "Overview M,qS takes a lliragana sentence as its input. First, M,qS picks Ul) the morphemes found ill the giwm sentence with checking the dictionary. The system reads the sentence from left to rigltt, cutting out every possibility. Each segment of the sentence is looked up in the dictionary and if it is found in the dictionary the system recognize the segnlent as a morpheme. Those morphemes are replaced by its corresponded Kanji(or lliragana, Katakana or mixed) morpheme(s). As it is tohl in section 2.4, a lliragana morpheme can have several corresponded l(anji (or other lettered) morphemes. In that case all the segments corresponded to the found l|iragana morpheme, are memorized as morl)hemes found in the sentence,. All the found morphemes are nunfl)ered by its position in the sentence. After picking Illl all the n,orphenu.'s in I.he sentence the system tries to put them together mtd brings them up back to sentence(tat)h~ I).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.1",
"sec_num": null
},
{
"text": "[nl)ut a lliragana sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.1",
"sec_num": null
},
{
"text": "Cut out t, he morphemes. Compare. the scores of all the. made-up sentences and get the best-marked one as the most 'sentence-like' sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.1",
"sec_num": null
},
{
"text": "Then the system compares those sentences made up with found morl)he.mes and sees which one is the most 'sentence-like'. For that purpose this system calculate the score of likelihood of each sentences(section 3.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.1",
"sec_num": null
},
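A minimal sketch of this pipeline, assuming the sentence_score function sketched in section 3.4 and a hypothetical dictionary; the real MSS additionally maps each Hiragana segment to its Kanji variants before scoring:

```python
def all_segmentations(sentence: str, dictionary: set) -> list:
    """Enumerate every way to cut the sentence into dictionary morphemes."""
    if not sentence:
        return [[]]
    results = []
    for j in range(1, len(sentence) + 1):
        if sentence[:j] in dictionary:
            for rest in all_segmentations(sentence[j:], dictionary):
                results.append([sentence[:j]] + rest)
    return results

def best_segmentation(sentence, dictionary, score):
    """Return the most 'sentence-like' segmentation under the score."""
    candidates = all_segmentations(sentence, dictionary)
    return max(candidates, key=score) if candidates else None

print(all_segmentations("isheold", {"i", "is", "he", "old", "sheo", "ld"}))
# [['i', 'sheo', 'ld'], ['is', 'he', 'old']]
```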
{
"text": "A corpus is a set of sentences, These sentences are of target language. For example, when we apply this system to Japanese morphological analysis we need a corpus of Japanese sentences which are already segmented. The corpus prepared for the paper is the translation of English textbooks for Japanese junior high school students. The reason why we selected junior high school textbooks is that the sentences in the textbooks are simple and do not include too many words. This is a good environment for evaluating this system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "4.2"
},
{
"text": "The dictionary for MSS is made of two part. One is the heading words and the other is the morphemes corresponded to the headings. There may be more than one morphemes attached to one heading word. The second part which has morphemes is of type list, so that it can have several morphemes. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Dictionary",
"sec_num": "4.3"
},
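A sketch of such a dictionary as a mapping from heading word to a list of morphemes; the romanized headings and their well-known Kanji homophones below are illustrative, not the paper's actual data:

```python
# One Hiragana heading word can map to several Kanji morphemes.
dictionary = {
    "hashi": ["橋", "箸", "端"],  # bridge, chopsticks, edge
    "kami": ["紙", "神", "髪"],   # paper, god, hair
}
```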
{
"text": "Implement MSS to all input sentences and get the score of each segmentation. After getting the list of segmentations, look for the 'correct' segmentedsentence and see where in the list tile right one is. The data shows the scores the 'correct' segmentations got (table 2) . The table 2 shows that most of the sentences, no matter whether the sentences are in the. corpus or not, are segmented correctly. We find the right segmentation getting the best score in the list of possible segmentations, c~ is tile data when the input sentences are in corpus. That is, all the 'correct' morphemes have association between each other. That have a strong effect in calculating the sco,'es of sentences. The condition is almost same for fl and 7. Though the sentence has one word replaced, all other words in the sentence have relationship between them. Tim sentences in 7 inelude one word which is not in the corpus, but still tile 'correct' sentence can get the best score among the possibilities. We can say that the data c~, fl and 7 are very successfld.",
"cite_spans": [],
"ref_spans": [
{
"start": 262,
"end": 271,
"text": "(table 2)",
"ref_id": "TABREF2"
},
{
"start": 274,
"end": 285,
"text": "The table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "llowever, we shouhl remember that not all the sentences in the given corpus wouht get the best score through the list. MSS does trot cheek the corpus itself when it calculate the score, it just use the Mid, the essential information of the corpus. That is, whether the input sentence is written in the corpus or not does not make any effect in calculating scores directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "Ilowever, since MSS uses Mid to calculate the. scores, the fact that every two morphemes in the sentence have connection between them raises the score higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "When it comes to the sentences which are not in corpus themselves, the ratio that the 'correct' sentence get the best score gets down (see table 2, data ~, e).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "The sentences of 6 and g are not found in the corpus. Even some sentences which are of spoken language and not grammatically correct are included in the input sentences. It can be said that those ~ and e sentences arc nearer to the real worhl of Japanese language. For ti sentences we used only morphemes which are in the corpus. That means that all tim morphenres used in the 5 sentences have their own MI,I.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "And e sentences have both morphemes it( the corpus and the ones not in the corpus. The morphemes which arc not in the corpus do not have any Ml(l. Table 2 shows that MSS gets quite good result eve(, though the input sentences arc not in the corpus. MSS do not take the necessary information directly from the co> pus and it uses the MIa instead. This method makes the information generalize.d and this is the reason why 5 and e can get good results too. Mid comes to }>e the key to use the effect of the MI between morphemes indirectly so that wc can put the information of the mssoeiation between morphemes to practical use. This is what we expected and MSS works successfldly at this point.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "In this paper we used the translation of English text: books for Japanese junior high school students. Primary textbooks are kiud of a closed worhl which have limited words in it an<l the included sentences are mostly in some lixed styles, in good graummr. The corpus we used in this pal)er has about 630 sentences which have three types of Japanese letters all mixed. This corpus is too small to take ms a model of the ,'eal world, however, for this pal>e( it is big enough. Actually, the results of this paper shows that this system works efficiently even though the corpus is small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "5.2"
},
{
"text": "The dictionary an<l the statistical information are got from the given corpus. So, the experimental re= suit totally depends on the corpus. That is, selecting which corpus to take to implement, we can use I.his system ill many purposes(section 5.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Corpus",
"sec_num": "5.2"
},
{
"text": "It is not easy to compare this system with other seg-,nenting methods. We coral)are with tile least-bunsetsu method here ill this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with the Other Methods",
"sec_num": "5.3"
},
{
"text": "The least-bunselsv method segment the given sentences into morphemes and fin(l the segmentations with least bunselsu. This method makes all the segmentation first an(l selects the seems-like-best segmentations. This is the same way MSS does. The difference is that the least-bdnsetsv method checkes the nmnber of tile bumselsu instead of calculating the scores of sen(elites. Let us think about implementing a sentence the morl)hcmes are l,ot in the dictionary. That means that the morphemes do not have any statistical informations between them. In this situation MSS can not use statistical informations to get the scores. Of course MSS caliculate the scores of sentences accord: ing to tile statistical informations between given morphemes, llowe.ver, all the Ml,l say that they have no association I)etween t]le (~lorpherlles. When there is no possibility that the two morl>hemes appears together ill the corpus, we give a minus score ~s tit('. Ml,t wdue, so, as the result, with more morphemes the score of the+ sentence gets lower. That is, tire segmentation which has less segments ill it gets better scores. Now compare it with the least-bunsetsu method. With using MSS the h.'ast-morpheme segme.ntations are selected as the goo(I answer, q'hat is tile same way the least-bunsetsu method selects the best one. '['his means that MSS and the least-bttnscts.le method have the same efficiency when it comes to the sentences which morl(hemes are not in the corpus. It is obvious that when the sentence has morphemes in the corpus the ellicie.ncy of this systern gets umch higher(table",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with the Other Methods",
"sec_num": "5.3"
},
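The argument above can be made concrete: when every pair of morphemes is unknown, each pair contributes the same minus score, so the total is a strictly decreasing function of the number of segments and MSS reproduces the least-morpheme choice. The -5.0 penalty is an assumed stand-in for the paper's unspecified value:

```python
PENALTY = -5.0  # assumed MI_d value for pairs never seen in the corpus

def score_all_unknown(n_segments: int, max_distance: int = 5) -> float:
    """Sentence score when no pair has corpus evidence."""
    return sum(PENALTY / (d * d)
               for i in range(n_segments)
               for d in range(1, max_distance + 1)
               if i + d < n_segments)

for n in (6, 7, 8, 9, 10):
    print(n, round(score_all_unknown(n), 1))
# the score drops monotonically as the segment count grows, as in table 3
```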
{
"text": "Now it is proved that MSS is, at least, as etli: cicnt as the least-b'unsets'~ nmthod, no matter what sentence it takes. We show a data which describes I.his(tabh~ 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2).",
"sec_num": null
},
{
"text": "\"Fable 3 is a good exanq)le of the c;use whelL the. input sentence has few morphemes which are in the corl)uS. This dal.a shows that in I.his situal.ion I.here is an outstanding relation between the number of morl)hemes and the scores of the segmented se.ntenees. This example (table 3) has an ambiguity how to segment the sentence using the registere(l morphemes, and all the morphemes which causes the alnbiguity are not in the given (:orpus. Those umrl)hemes not in the corpus do not have any statistical information betweel, them and we have no way to select which is bett<.'r. So, the scores of sentences are Ul) to the length of the s<~gmented sentence, that is, the number how many morl)hemes the sentence has. '['he segmented sentence which has least segments gets the best score, since MSS gives a minus score for unknown mssociation between morphemes. That means that with more segments in the sentence the score gets lower. This sit- method selects the answer.",
"cite_spans": [],
"ref_spans": [
{
"start": 277,
"end": 286,
"text": "(table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "2).",
"sec_num": null
},
{
"text": "The theme of tiffs paper is to segment non-separaLe(] language sentences into morphemes. In this paper we described on segmentation of Japanese non-segmented sentences only but we are working on Chinese sentences too. This MSS is not for Japanese only. It can be used for other non-separated languages too. \"lb implement for other languages, we just need to prepare the corpus for that and make up the dictionary from it. llere is the example of implementing MSS for Chinese language (table 4) . The input is a string of characters which shows the pronounciations of a Chinese sentence. MSS changes it into Chinese character senteces, segmenting the given string.",
"cite_spans": [],
"ref_spans": [
{
"start": 484,
"end": 493,
"text": "(table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment in Chinese",
"sec_num": null
},
{
"text": "To implement tiffs MSS system, we only need a eel pus. The dictionary is made from the corpus. This -14.80836 )JI~ {0~ --']~ ~1~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changing the Corpus",
"sec_num": "5.5"
},
{
"text": "gives MSS system a lot of usages and posibilities. Most of the NLP systems need grammatical i,ffof malleus, and it is very hard to make up a certain grammatical rule to use in a NLP. The corpus MSS needs to implement is very easy to get. As it is described in the previous section, a corpus is a set of real sentence.s. We can use IVISS in other languages or in other purposes just getting a certain corpus for that and making up a dictionary from the corpus. That is, MSS is available in many lmrposes with very simple, easy preparation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changing the Corpus",
"sec_num": "5.5"
},
{
"text": "This paper shows that this automatic segmenting system MSS is quite efficient for segmentation of nonseparated language sentences. MSS do not use any grammatical information to divide input sentences. Instead, MSS uses MI l)etween morphenres included in the input sentence to select the best segmentation(s) frorn all the possibilities. According to the results of the experiments, MSS can segment ahnost all the sentences 'correctly'. This is such a remarkable result. When it comes to the sentences which are not in the corpus the ratio of selecting the right segmentation as the best answer get a little bit lower, however, the result is considerably good enough.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "6"
},
{
"text": "The result shows that using Mid between morphemes is a very effective method of selecting 'correct' sentences, aml this means a lot in NLP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parsing, Word Associations and Typical Predlcate-Argument t{,elations",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Llindle",
"suffix": ""
}
],
"year": 1989,
"venue": "ternational Parsing Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Church, William Gale, Patrick lhmks, and Donald llindle. Parsing, Word Associations and Typical Predlcate-Argument t{,elations. In- ternational Parsing Workshop, 1989.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Itow to compile a hilingual collocational lexicon automatically. Statislically-based Natural Language Programming Techniques",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "57--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Smadja. Itow to compile a hilingual collo- cational lexicon automatically. Statislically-based Natural Language Programming Techniques, pages 57--63, 1992.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Multi-Lingual Translation System Based on A Statistical Model(written in Jal)anese)",
"authors": [
{
"first": "Junya",
"middle": [],
"last": "Tsutsumi",
"suffix": ""
},
{
"first": "Tomoaki",
"middle": [],
"last": "Nitta",
"suffix": ""
},
{
"first": "Kotaro",
"middle": [],
"last": "Ono",
"suffix": ""
},
{
"first": "Shiho",
"middle": [],
"last": "Nobesawa",
"suffix": ""
}
],
"year": 1993,
"venue": "SIG-PPAI-9302-2",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "dunya Tsutsurni, Tomoaki Nitta, Kotaro One, and Shlho Nobesawa. A Multi-Lingual Transla- tion System Based on A Statistical Model(written in Jal)anese). JSAI Technical report, SIG-PPAI- 9302-2, pages 7-12, 1993.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Parsing a Natural Language Using Mutual Information Statistics",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Magerman",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M.Magerman and Mitchell P.Marcus. Pars- ing a Natural Language Using Mutual Information Statistics. AAAI, 1990.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Statist, ieal Approach to Language Translation. l'roc, of COLING-88",
"authors": [
{
"first": "",
"middle": [],
"last": "It",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"Della"
],
"last": "Cocke",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "71--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "It.Brown, J.Cocke, S.Della Pietra, V.Della Pietra, F.Jelinek, R.Mercer, and P.Roossin. A Statist, i- eal Approach to Language Translation. l'roc, of COLING-88, pages 71-76, 1989.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "CalculationThe expression to calculate the scores between two words is[3]:t'(wl, w~, d) Mid(w1, w,2, d) = 1o9~~ (2)lu i : ;t word d : distance of the two words Wl and w2 P(wi) : the possibility the wm'd wl appears in the coq)us P(wl,w2,d) : the possibility wl and w2 eoll'le out d words away fl'om each other in the corpus As the value of Mid gets bigger, the more those words have the ,association. And the score of a sentence is calculated with these Mid data(expression(2)). The definition of the sentence score is[l]: ia(W)= 9 9 Mia(wi,w'+ d,d) Of Wol'ds ill tile SelttellCe I~ll : it selttence wi : The i-th morpheme in the sentence I~V This expression(3) calculates the scores with the algoritlmt below: 1) Calculate Mld of every pair of words included in the given sentence.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Make up sentences with the morphemes.",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Calculate the score of sentences using the mutual information.",
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"num": null,
"text": "g",
"type_str": "figure",
"uris": null
},
"FIGREF4": {
"num": null,
"text": ")\" 78)('|77-\" 89) ('l,'\" 910) (\"R\"a\" 911) ('~'~\" 911)('~\" 1112))",
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"num": null,
"text": "tiny\" (\" ~,, .... ~t\")) heading word morpherne~5",
"type_str": "figure",
"uris": null
},
"FIGREF6": {
"num": null,
"text": "Experiment in Chinese input : nashiyizhangditu. correct answer output sentences scores -~ )J[~ ~: --,~ ;t~. 15.04735 )Jl~ ~! --~ .tt~].",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "MSS example"
},
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Experiment in Japanese</td></tr><tr><td>corpus</td><td/><td colspan=\"2\">about 630 J~tp,'tnese sentences</td></tr><tr><td/><td/><td colspan=\"2\">(with three kinds of letters mixed)</td></tr><tr><td colspan=\"2\">dictionary</td><td colspan=\"2\">about 1500 heading words</td></tr><tr><td/><td/><td>(includes morphemes</td><td/></tr><tr><td/><td/><td>not in tile corpus)</td><td/></tr><tr><td>input</td><td/><td colspan=\"2\">lion-segmented Ja.p;~nese selltences</td></tr><tr><td/><td/><td>using lllragana only</td><td/></tr><tr><td colspan=\"2\">number of</td><td/><td/></tr><tr><td colspan=\"2\">input sentence</td><td>about 100 e~tch</td><td/></tr><tr><td colspan=\"2\">distance limit</td><td>5</td><td/></tr><tr><td colspan=\"2\">~ -V~score</td><td colspan=\"2\">2nd best T ~ 3rd best</td></tr><tr><td>a</td><td>99%</td><td>100%</td><td>100%</td></tr><tr><td/><td>loo%</td><td>100 %</td><td>100 %</td></tr><tr><td>7</td><td>100%</td><td>100%</td><td>:100%</td></tr><tr><td/><td>95%</td><td>98 %</td><td>98 %</td></tr><tr><td>E</td><td>80%</td><td>90 %</td><td>95 %</td></tr><tr><td/><td colspan=\"2\">the very sentences in tile corpus</td><td/></tr><tr><td/><td colspan=\"3\">replaced one rnorllheme in the sentence</td></tr><tr><td/><td colspan=\"3\">(the buried morpheme is in the corpus)</td></tr><tr><td/><td colspan=\"3\">replaced one morpheme in the sentence</td></tr><tr><td/><td colspan=\"3\">(tile buried morpbeme is not in the corpus)</td></tr><tr><td/><td colspan=\"2\">sentences not in the corpus</td><td/></tr><tr><td/><td colspan=\"3\">(the morphemes are all in tim corpus)</td></tr><tr><td/><td colspan=\"2\">sentences not in the corpus</td><td/></tr><tr><td/><td colspan=\"3\">(include morphemes not; in the corpus)</td></tr><tr><td>5.1</td><td>Ext)eriment</td><td>in Japanese</td><td/></tr><tr><td colspan=\"4\">According to the experimental results(table 2), it is</td></tr><tr><td colspan=\"3\">obvious that MSS is w.'ry useful.</td><td/></tr></table>",
"text": ""
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"4\">input : a non-segmented</td><td/><td/><td/></tr><tr><td/><td colspan=\"6\">Japanese tliragana sentence</td><td/></tr><tr><td/><td colspan=\"4\">not in the corpus</td><td/><td/><td/></tr><tr><td/><td colspan=\"7\">all unknown morphemes in the sentence</td></tr><tr><td/><td colspan=\"6\">are registered in the (lictionary</td><td/></tr><tr><td/><td colspan=\"6\">(some morphemes in the corpus</td><td/></tr><tr><td/><td/><td colspan=\"3\">are included)</td><td/><td/><td/></tr><tr><td>\"</td><td>sumomo</td><td>mo</td><td>nlonlo</td><td>hie</td><td>memo</td><td>no</td><td>ilCh]</td></tr><tr><td/><td>the number of</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>the morphemes</td><td/><td>6</td><td>7</td><td>8</td><td>9</td><td>10</td></tr><tr><td/><td>the scores of</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>the sentences</td><td/><td>-65,0</td><td>-79.6</td><td>-9,1.3</td><td>-108.9</td><td>-123.5</td></tr><tr><td/><td>the number of</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>tile segmented</td><td/><td>5</td><td>20</td><td>21</td><td>8</td><td>1</td></tr><tr><td/><td>sentences</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>tile tcorrectl</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>segmentation</td><td/><td/><td>~k\"</td><td/><td/><td/></tr><tr><td/><td>MSS</td><td/><td>O</td><td/><td/><td/><td/></tr><tr><td/><td>tile least-</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>bunsetsu</td><td/><td>0</td><td/><td/><td/><td/></tr><tr><td/><td>method</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"4\">morphemes included</td><td>:</td><td colspan=\"2\">\" \u00a9 ....</td><td>~2 \"</td></tr><tr><td colspan=\"2\">in the corpus</td><td/><td/><td>:</td><td colspan=\"2\">\" no ....</td><td>Ill(lllO \"</td></tr><tr><td colspan=\"5\">morphemes not included :</td><td colspan=\"2\">\" IAI ....</td><td>~4!. ~ \"</td></tr><tr><td colspan=\"2\">in the corpus</td><td/><td/><td>:</td><td colspan=\"2\">\" uchi ....</td><td>sunm \"</td></tr><tr><td/><td/><td/><td/><td/><td>\" sumomo \" ~t\"</td><td>\"</td><td>*' hie j~</td></tr><tr><td/><td/><td/><td/><td/><td>'P nlOUlO</td><td>~p</td><td/></tr><tr><td colspan=\"8\">uation is resemble to the way how the least-bunseisu</td></tr></table>",
"text": "MSS and The least-bvnselsu method"
}
}
}
}