|
{ |
|
"paper_id": "O05-4005", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:58:39.786903Z" |
|
}, |
|
"title": "Chinese Word Segmentation by Classification of Characters", |
|
"authors": [ |
|
{ |
|
"first": "Chooi-Ling", |
|
"middle": [], |
|
"last": "Goh", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Nara Institute of Science and Technology", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Masayuki", |
|
"middle": [], |
|
"last": "Asahara", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Nara Institute of Science and Technology", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Yuji", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Nara Institute of Science and Technology", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "During the process of Chinese word segmentation, two main problems occur: segmentation ambiguities and unknown word occurrences. This paper describes a method to solve the segmentation problem. First, we use a dictionary-based approach to segment the text. We apply the Maximum Matching algorithm to segment the text forwards (FMM) and backwards (BMM). Based on the difference between FMM and BMM, and the context, we apply a classification method based on Support Vector Machines to reassign the word boundaries. In so doing, we use the output of a dictionary-based approach, and then apply a machine-learning-based approach to solve the segmentation problem. Experimental results show that our model can achieve an F-measure of 99.0 for overall segmentation, given the condition that there are no unknown words in the text, and an F-measure of 95.1 if unknown words exist.", |
|
"pdf_parse": { |
|
"paper_id": "O05-4005", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "During the process of Chinese word segmentation, two main problems occur: segmentation ambiguities and unknown word occurrences. This paper describes a method to solve the segmentation problem. First, we use a dictionary-based approach to segment the text. We apply the Maximum Matching algorithm to segment the text forwards (FMM) and backwards (BMM). Based on the difference between FMM and BMM, and the context, we apply a classification method based on Support Vector Machines to reassign the word boundaries. In so doing, we use the output of a dictionary-based approach, and then apply a machine-learning-based approach to solve the segmentation problem. Experimental results show that our model can achieve an F-measure of 99.0 for overall segmentation, given the condition that there are no unknown words in the text, and an F-measure of 95.1 if unknown words exist.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The first step in Chinese information processing is word segmentation. This is because in written Chinese, all characters are joined together, and there are no separators to mark word boundaries. A similar problem also occurs with languages like Japanese, but at least with Japanese, there are three types of characters (hiragana, katakana and kanji). This helps provide clues for finding word boundaries. In the case of Chinese, as there is only one type of character (hanzi), more segmentation ambiguities may occur in a text. During the process of segmentation, two main problems are encountered: segmentation ambiguities and unknown word occurrences. This paper focuses on solving the segmentation ambiguity problem and proposes a sub-model to solve the unknown word problem. There are basically two types of segmentation ambiguity: covering ambiguity and overlapping ambiguity. The definitions are given below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Let x, y, z be some strings which could consist of one or more Chinese characters. Assuming that W is a given dictionary, the covering ambiguity is defined as follows: For a string w = xy, x \u2208 W, y \u2208 W, and w \u2208 W. As almost any single character in Chinese can be considered as a word, the above definition reflects only those cases where both word boundaries .../xy/... and .../x/y/... can be found in sentences. On the other hand, overlapping ambiguity is defined as follows: For a string w = xyz, both w 1 = xy \u2208 W and w 2 = yz \u2208 W hold. Although most of the time, one form of segmentation is preferred over the other, we still need to know about the contexts in which the other form is used. Both types of ambiguity require that the context be considered to decide which is the correct segmentation form given a particular occurrence in the text. 1aand 1bshow examples of covering ambiguity. The string \"\u4e00\u5bb6\" is treated as a word in (1a) but as two words in (1b). On the other hand, (2a) and (2b) are examples of overlapping ambiguity. The string \"\uf967 \u53ef\u4ee5\" is segmented as \"\uf967/\u53ef\u4ee5\" in (2a) and as \"\uf967\u53ef/\u4ee5\" in (2b), according to the context in each sentence. (2a)\uf967/\u53ef\u4ee5/\u6de1\u5fd8/\u8fdc\u5728/\u6545\u4e61/\u7684/\u7236\u6bcd/ not/ can/ forget/ far away/ hometown/ DE/ parents/ (Cannot forget parents who are far away at home) (2b)\uf967\u53ef/\u4ee5/\u8425\uf9dd/\u4e3a/\u76ee\u7684/ cannot/ by/ profit/ be/ intention (Cannot have the intention to make a profit)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "We intend to solve the ambiguity problems by combining a dictionary-based approach with a statistical model. In so doing, we make use of the information in a dictionary in a statistical approach. The Maximum Matching (MM) algorithm, a very early and simple dictionary-based approach, is used to initially segment the text by referring to a dictionary. It tries to match the longest possible words found in the dictionary. We can parse a sentence either forwards or backwards. Normally, the differences between the results of forward and backward parsing will indicate the locations where overlapping ambiguities occur. Then, we use a Support Vector Machine-based (SVM) classifier to decide which output should be the correct answer. As for covering ambiguities, in most cases, forward and backward MM will give the same output. In this case, we just make use of the contexts to decide whether or not to split a word into two or more words. Our experimental results show that the proposed method can solve 92% of overlapping ambiguities and 52% of covering ambiguities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "Solving the ambiguity problems is a fundamental task in Chinese segmentation process. Although many previous researches have focused on segmentation, only a few have reported on the accuracy achieved in solving ambiguity problems. Li et al. [2003] proposed an unsupervised method for training Na\u00efve Bayes classifiers to resolve overlapping ambiguities. They achieved 94.13% accuracy in 5,759 cases of ambiguity. An alternative form of TF.IDF weighting was proposed for solving the covering ambiguity problem in [Luo et al. 2002] . They focused on 90 ambiguous words and achieved an accuracy of 96.58%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 247, |
|
"text": "Li et al. [2003]", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 511, |
|
"end": 528, |
|
"text": "[Luo et al. 2002]", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Works", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Most of the previous methods reported on the accuracy of overall segmentation. Recently, many researches have adopted multiple models. Furthermore, most researchers have realized that character-based approaches are more effective than word-based approaches to Chinese word segmentation. In [Xue and Converse 2002] , two classifiers were combined to perform Chinese word segmentation. First, a Maximum Entropy model was used to segment the text, and then an error driven transformation model was used to correct the word boundaries. Their method also used character-based tagging to assign the positions of characters in words. They achieved an F-measure of 95.17 using the Penn Chinese Treebank. Another recent study was that of Fu and Luke [2003] , who proposed hybrid models for integrated segmentation. Modified word juncture models and word-formation patterns were used to find word boundaries and at the same time to identify unknown words. They achieved and F-measure of 96.1 using the Peking University Corpus. As the above studies used different corpora in their experiments, it is difficult to tell which method performed better.", |
|
"cite_spans": [ |
|
{ |
|
"start": 290, |
|
"end": 313, |
|
"text": "[Xue and Converse 2002]", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 729, |
|
"end": 747, |
|
"text": "Fu and Luke [2003]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Works", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "Solving the unknown word problem is also an important step in word segmentation. An unknown word is a word not found in a dictionary. Therefore, it cannot be segmented correctly by simply referring to the dictionary. Many approaches for unknown word detection have been proposed [Chen and Bai 1997; Chen and Ma 2002; Fu and Wang 1999; Lai and Wu 1999; Ma and Chen 2003; Nie et al. 1995; Shen et al. 1998; Zhang et al. 2002; Zhou and Lua 1997] . These include rule-based, statistics-based, and hybrid models. We cannot ignore the unknown word problem since there are always some unknown words (such as person names, numbers etc.) in a text even when we use a very large dictionary. The creation of new words in Chinese is a continuous process. For example, names for new diseases, technical terms, and new expressions are always being created. The accuracy is better if one focuses only on certain types of unknown words such as person names, place names, or transliteration names, when accuracy of over 80% can be achieved. However, for general unknown words, such as common nouns, verbs etc., the accuracy ranges from only 50% to 70%.", |
|
"cite_spans": [ |
|
{ |
|
"start": 279, |
|
"end": 298, |
|
"text": "[Chen and Bai 1997;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 316, |
|
"text": "Chen and Ma 2002;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 334, |
|
"text": "Fu and Wang 1999;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 335, |
|
"end": 351, |
|
"text": "Lai and Wu 1999;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 369, |
|
"text": "Ma and Chen 2003;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 386, |
|
"text": "Nie et al. 1995;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 387, |
|
"end": 404, |
|
"text": "Shen et al. 1998;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 405, |
|
"end": 423, |
|
"text": "Zhang et al. 2002;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 442, |
|
"text": "Zhou and Lua 1997]", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Works", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "We propose a method that uses only minimum resources, meaning that only a segmented corpus is required. The underlying concept of our proposed method is as follows. We regard the problem as a character classification problem. We believe that each character in Chinese tends to appear in certain positions in words. A character can be used at the beginning of a word, in the middle of a word, at the end of a word, or as a single-character word. It can appear at different positions in different words. By looking at the usage of the characters, we can decide on their position tags using a machine learning based model, which in our case is the Support Vector Machines model [Vapnik 1995] . Our method employs a model to solve the ambiguity problem and, at the same time, embeds a model to detect unknown words. We will next describe the method in more detail in the following section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 675, |
|
"end": 688, |
|
"text": "[Vapnik 1995]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Method", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We intend to solve the ambiguity problem by combining a dictionary-based approach with a statistical model. The Maximum Matching (MM) algorithm is regarded as the simplest dictionary-based word segmentation approach. It starts from one end of a sentence and tries to match the first longest word wherever possible. It is a greedy algorithm, but it has been empirically proved to achieve over 90% accuracy if the dictionary used is large. However, the ambiguity problem cannot be solved effectively, and it is impossible to detect unknown words because only those words existing in the dictionary can be segmented correctly. If we look at the outputs produced by segmenting the sentence forwards (FMM), from the beginning of the sentence, and backwards (BMM), from the end of the sentence, we can determine the places where overlapping ambiguities occur. For example, FMM will segment the string \"\u5373\u5c06\u6765\u4e34 \u65f6\" (when the time comes) into \"\u5373\u5c06/\u6765\u4e34/\u65f6/\"(immediately/ come/ when), but BMM will segment it into \"\u5373/\u5c06\u6765/\u4e34\u65f6/\"(that/ future/ temporary).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Matching Algorithm", |
|
"sec_num": "3.1" |
|
}, |
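As a concrete illustration of FMM and BMM, the following is a minimal sketch (not the authors' implementation); the dictionary contents and the maximum word length are assumptions chosen to reproduce the example segmentation of "即将来临时" given above.

```python
# Minimal sketch of forward and backward Maximum Matching (FMM/BMM),
# assuming `dictionary` is a Python set of known words.

def fmm(sentence, dictionary, max_len=6):
    """Greedily match the longest dictionary word starting from the left."""
    words, i = [], 0
    while i < len(sentence):
        for j in range(min(len(sentence), i + max_len), i, -1):
            # fall back to a single character when nothing in the dictionary matches
            if sentence[i:j] in dictionary or j == i + 1:
                words.append(sentence[i:j])
                i = j
                break
    return words

def bmm(sentence, dictionary, max_len=6):
    """Greedily match the longest dictionary word ending at the right."""
    words, j = [], len(sentence)
    while j > 0:
        for i in range(max(0, j - max_len), j):
            if sentence[i:j] in dictionary or i == j - 1:
                words.insert(0, sentence[i:j])
                j = i
                break
    return words

if __name__ == "__main__":
    dictionary = {"即将", "来临", "时", "即", "将来", "临时"}
    print(fmm("即将来临时", dictionary))  # ['即将', '来临', '时']
    print(bmm("即将来临时", dictionary))  # ['即', '将来', '临时']
```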
|
{ |
|
"text": "Let O f and O b be the outputs of FMM and BMM, respectively. According to Huang [1997] , for overlapping cases, if O f = O b , then the probability that both the MMs will be the correct answer is 99%. If O f \u2260O b , then the probability that either O f or O b will be the correct answer is also 99%. However, for covering ambiguity cases, even if O f = O b , both O f and O b could be correct or could be wrong. If there exist unknown words, they normally will be segmented as single characters by both FMM and BMM. Based on the differences and contexts created by FMM and BMM, we apply a machine learning based model to re-assign the position tags which indicate character positions in words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 86, |
|
"text": "Huang [1997]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Maximum Matching Algorithm", |
|
"sec_num": "3.1" |
|
}, |
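The comparison of O_f and O_b can be made concrete with a small illustrative helper (not from the paper): given the per-character BIES tags of the two outputs, the characters where they disagree mark candidate overlapping ambiguities.

```python
# Sketch: characters where the FMM and BMM tag sequences disagree are candidate
# overlapping-ambiguity positions (illustrative helper, not the authors' code).
def disagreement_positions(fmm_tags, bmm_tags):
    return [i for i, (f, b) in enumerate(zip(fmm_tags, bmm_tags)) if f != b]

# Tags for "即将来临时": FMM = 即将/来临/时 -> B E B E S, BMM = 即/将来/临时 -> S B E B E
print(disagreement_positions(list("BEBES"), list("SBEBE")))  # [0, 1, 2, 3, 4]
```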
|
{ |
|
"text": "Support Vector Machines (SVM) [Vapnik 1995] are binary classifiers that search for a hyperplane with the largest possible margin between positive and negative samples (see Figure 1 ). Suppose we have a set of training data for a binary class problem:", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 43, |
|
"text": "[Vapnik 1995]", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 172, |
|
"end": 180, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Support Vector Machines", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(x 1 , y 1 ),\u2026, (x N , y N ),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support Vector Machines", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where x i \u2208 R n is the feature vector of the ith sample in the training data and y i \u2208{+1, -1} is its label. The goal is to find a decision function which accurately predicts the label y for an unseen x. An SVM classifier gives a decision function f(x) for an input vector x, where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support Vector Machines", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "( ) ( , ) i i i i SV f sign y K b \u03b1 \u2208 \u239b \u239e = + \u239c \u239f \u239d \u23a0 \u2211 Z x x z .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support Vector Machines", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "f(x)= +1 means that x is a positive member, and f(x) = -1 means that x is a negative member.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support Vector Machines", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The vectors z i are called support vectors, and they are assigned a non-zero weight \u03b1 i . Support vectors and the parameters are determined by solving a quadratic programming problem. K(x, z) is a kernel function which computes an extended inner product of input vectors. We use a polynomial kernel function of degree 2, that is, K(x, z) = (1 + x\u22c5 z) 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support Vector Machines", |
|
"sec_num": "3.2" |
|
}, |
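To illustrate the decision function and the degree-2 polynomial kernel above, the sketch below (illustrative only; it uses scikit-learn rather than YamCha, and the toy data are invented) fits a binary SVM and recomputes f(x) directly from its support vectors, weights α_i·y_i, and bias b.

```python
# Illustrative sketch: binary SVM with K(x, z) = (1 + x.z)**2, fitted with
# scikit-learn (not YamCha), and the decision function recomputed by hand.
import numpy as np
from sklearn.svm import SVC

# Toy AND-style data, invented for illustration.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([-1, -1, -1, 1])

# gamma=1, coef0=1, degree=2 gives exactly K(x, z) = (1 + x.z)**2
clf = SVC(C=100.0, kernel="poly", degree=2, gamma=1.0, coef0=1.0)
clf.fit(X, y)

def decision(x):
    """f(x) = sign( sum_i alpha_i * y_i * K(x, z_i) + b ) over support vectors z_i."""
    k = (1.0 + clf.support_vectors_ @ x) ** 2                 # K(x, z_i)
    return int(np.sign(k @ clf.dual_coef_[0] + clf.intercept_[0]))  # dual_coef_ = alpha_i * y_i

print(decision(np.array([1.0, 1.0])), clf.predict([[1.0, 1.0]])[0])  # both +1 here
```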
|
{ |
|
"text": "We use YamCha [Kudo and Matsumoto 2001] to train our SVM models. YamCha is an SVM-based multi-purpose chunker. It extends binary classification to n-class classification for natural language processing purposes, where we would normally want to classify the words into several classes, as in the case of POS tagging or base phrase chunking. Two straightforward methods are mainly used for this extension, the \"one-vs-rest\" method and the \"pairwise\" method. In the \"one-vs-rest\" method, n binary classifiers are used to compare one class with the rest of the classes. In the \"pairwise\" method, ( ) 2 n binary classifiers are used to compare between all pairs of classes. We need to classify the characters into 4 categories (B, I, E or S, as shown in Table 1 ) in our method. We used the \"pairwise\" classification method in our experiments because it is more efficient during the training phase. Details of the system can be found in [Kudo and Matsumoto 2001] . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 39, |
|
"text": "[Kudo and Matsumoto 2001]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 932, |
|
"end": 957, |
|
"text": "[Kudo and Matsumoto 2001]", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 749, |
|
"end": 756, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Figure 1. Maximizing the margin", |
|
"sec_num": null |
|
}, |
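The pairwise extension can be sketched with scikit-learn's one-vs-one wrapper (an assumption made for illustration; the paper itself uses YamCha): for the four tags B, I, E, and S it builds n(n-1)/2 = 6 binary classifiers.

```python
# Sketch of the "pairwise" (one-vs-one) extension to the 4 position tags. YamCha is
# not used here; the random features stand in for real character/FMM/BMM features.
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(40, 8)                       # invented feature vectors
y = np.array(["B", "I", "E", "S"] * 10)   # BIES labels

ovo = OneVsOneClassifier(SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0))
ovo.fit(X, y)
print(len(ovo.estimators_))   # 6 = 4 * 3 / 2 pairwise binary classifiers
print(ovo.predict(X[:3]))
```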
|
{ |
|
"text": "We intend to classify the characters using the SVM-based chunker [Kudo and Matsumoto 2001] as described in Section 3.2. [Xue and Converse 2002] proposed to regard the word segmentation problem as a character tagging problem. Instead of segmenting a sentence into word sequences directly, characters are first assigned with position tags. Later, based on these postion tags, the characters are converted into word sequences. The basic features used are the characters. However, the number of examples per feature will be small if there is only character information and no other information is provided. Since there are always more known words than unknown words in a text, it is advantageous if we can segment known words beforehand. Therefore, we supply the outputs from FMM and BMM as some of the features. In this case, the learning by SVM is guided by a dictionary for known word segmentation. The similarities and differences between FMM and BMM are used to train the SVM to solve the segmentation ambiguity problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 90, |
|
"text": "[Kudo and Matsumoto 2001]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 143, |
|
"text": "[Xue and Converse 2002]", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Classification of Characters", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "First, we convert the output of the MMs into a character-wise form, where each character is assigned a position tag as described in Table 1 . The BIES tags are as described in [Uchimoto et al. 2000] and [Sang and Veenstra 1999] for named entity extraction. These tags show possible character positions in words. For example, the character \"\u672c\" is used as a single character word in \"\u4e00/\u672c/\u4e66/\uff02(a book), at the end of a word in \"\u5267\u672c' (script), at the beginning of a word in\"\u672c\u6765\uff02 (originally), or in the middle of a word in \"\u57fa\u672c \u4e0a\uff02(basically).", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 198, |
|
"text": "[Uchimoto et al. 2000]", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 227, |
|
"text": "[Sang and Veenstra 1999]", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 139, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classification of Characters", |
|
"sec_num": "3.3" |
|
}, |
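For reference, converting a word-segmented sentence into the BIES position tags of Table 1 can be sketched as follows (a minimal illustration, not the authors' code):

```python
# Assign the BIES tags of Table 1 to each character of a word-segmented sentence.
def words_to_bies(words):
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append((w, "S"))                     # one-character word
        else:
            tags.append((w[0], "B"))                  # first character
            tags.extend((c, "I") for c in w[1:-1])    # intermediate characters
            tags.append((w[-1], "E"))                 # last character
    return tags

print(words_to_bies(["一", "本", "书"]))    # [('一', 'S'), ('本', 'S'), ('书', 'S')]
print(words_to_bies(["基本上", "剧本"]))    # B/I/E for 基本上, B/E for 剧本
```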
|
{ |
|
"text": "The solid box in Figure 2 shows the features used to determine the tag of the character \"\u6625\" at location i. In other words, our feature set consists of the characters, the FMM and BMM outputs, and the previously tagged outputs. The context window is two characters on both the left and right sides of the current character. Based on the output position tags, finally, we get the segmentation \"\u8fce/\u65b0\u6625/\u8054\u8c0a\u4f1a/\u4e0a/\uff02 (welcome/ new year/ get-together party/ at/).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 25, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classification of Characters", |
|
"sec_num": "3.3" |
|
}, |
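A sketch of the feature window just described (illustrative only; the padding symbol and the feature string format are assumptions) is given below, using parallel lists of characters, FMM tags, BMM tags, and previously predicted output tags.

```python
# Sketch of the +/-2 character feature window used to classify character i.
def features_at(i, chars, fmm, bmm, prev_tags, pad="<PAD>"):
    def get(seq, j):
        return seq[j] if 0 <= j < len(seq) else pad
    feats = []
    for off in (-2, -1, 0, 1, 2):
        feats.append(f"char[{off}]={get(chars, i + off)}")
        feats.append(f"fmm[{off}]={get(fmm, i + off)}")
        feats.append(f"bmm[{off}]={get(bmm, i + off)}")
    # dynamic features: the two previously assigned output tags
    feats.append(f"tag[-2]={get(prev_tags, i - 2)}")
    feats.append(f"tag[-1]={get(prev_tags, i - 1)}")
    return feats

chars = list("迎新春联谊会上")
fmm   = ["B", "E", "B", "E", "S", "B", "E"]   # FMM tags from Figure 2
bmm   = ["S", "B", "E", "B", "E", "B", "E"]   # BMM tags from Figure 2
print(features_at(2, chars, fmm, bmm, prev_tags=["S", "B"]))
```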
|
{

"text": "Feature table in Figure 2 (per character: FMM tag, BMM tag, output tag): i-2 \u8fce B S S; i-1 \u65b0 E B B; i \u6625 B E E; i+1 \u8054 E B B; i+2 \u8c0a S E I; i+3 \u4f1a B B E; i+4 \u4e0a E E S.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Classification of Characters",

"sec_num": "3.3"

},
|
{ |
|
"text": "We run our experiments with two datasets, the PKU Corpus and the SIGHAN Bakeoff data. The evaluation was conducted using the tool provided in SIGHAN Bakeoff [Sproat and Emerson 2003 ].", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 181, |
|
"text": "[Sproat and Emerson 2003", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments and Results", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "The corpus used for this experiment was provided by Peking University (PKU) 1 and consists of about 1.1 million words. It is a segmented and POS-tagged corpus, but we only used the segmentation information for our experiments. We divided the corpus randomly into two parts consisting of 80% and 20% of the corpus, for training and testing, respectively. Since our purpose in this experiment was only to solve the ambiguity problem, not the unknown word detection problem, we assumed that all the words could be found in the dictionary. We created a dictionary with all the words from the corpus, which had 62,030 entries (referred to as Experiment 1). This experiment was conducted to evaluate the performance of the method in solving the ambiguity problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Accuracy on Solving Ambiguity Problem", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "It is difficult to determine how many ambiguities appear in a sentence. For example, in the sentence shown in Figure 2 , \"\u8fce\u65b0\" (welcome the new year),\"\u65b0\u6625\uff02(new year),\"\u6625 \u8054\uff02(a strip of red paper that is pasted beside a door; on it is written some greeting words to celebrate the new year in China), \"\u8054\u8c0a\" (get-together),\"\u8054\u8c0a\u4f1a\uff02(get-together party),\"\u4f1a \u4e0a\uff02(at the meeting) and\"\u4e0a\uff02(at) are all possible words. A word candidate may cause more than one ambiguity with the alternative word candidates. Therefore, we try to represent the ambiguities by means of character units since our method is character-based. We assign each character to one of these six categories. Table 2 shows the conditions for each category together with the results obtained with the method for solving the ambiguity problem. The categories Allcorrect, Correct, and Match have correct answers, whereas the categories Wrong, Mismatch, and Allwrong have wrong answers. We can roughly say that the categories Correct and Wrong contain overlapping ambiguities, and that the categories Match, Mismatch, and Allwrong contain covering ambiguities. We can also say that Match and Mismatch categories refer to cases where words should be split, whereas Allwrong category refers to cases where words should not be split but the system mistakenly splits them.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 118, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 656, |
|
"end": 663, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Accuracy on Solving Ambiguity Problem", |
|
"sec_num": "4.1.1" |
|
}, |
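The six categories in Table 2 can be written down as a small decision rule over the per-character tags O_f, O_b, Ans, and Out (a sketch for clarity, not the evaluation script used in the paper):

```python
# Sketch of the six evaluation categories in Table 2, applied per character.
# of, ob, ans, out are the FMM tag, BMM tag, gold tag, and system tag of one character.
def category(of, ob, ans, out):
    if of == ob:
        if of == ans:
            return "Allcorrect" if ans == out else "Allwrong"
        return "Match" if ans == out else "Mismatch"
    return "Correct" if ans == out else "Wrong"

print(category("B", "E", "E", "E"))  # Correct: FMM and BMM disagree, system output is right
print(category("S", "S", "B", "B"))  # Match: both MMs wrong, system fixes the boundary
```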
|
{ |
|
"text": "Overall, we could correctly tag 99.13% of the characters. If we only consider the overlapping cases (Correct and Wrong), 92.09% of the characters were correctly tagged. As for covering cases, if we look at only those cases where we need to split the words (Match and Mismatch), then 51.91% of them were successfully split. Table 3 shows overall word segmentation results. Compared with the baseline models, namely, FMM, BMM, and SVM (using only characters as features), our proposed method can achieve higher accuracy with an F-measure of 99.0. This means that our method is able to solve the ambiguity problem given information about locations where ambiguities occur by looking at the outputs of FMM and BMM.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 330, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Accuracy on Solving Ambiguity Problem", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "The corpus used in this experiment was the same as that described in Section 4.1.1, but the setting is different. In this round, we divided the corpus into three sets, referred to as Set 1, Set 2, and Set 3. Set 1 plus Set 2 (80%) was used for training, and Set 3 (20%) was used for testing, just as in the previous experiment. The difference was in the preparation of the dictionary. It was prepared in two ways. In the first case, all the words from Set 1 and Set 2 were used to create the dictionary. There were 49,433 entries in the dictionary and 8,346 (4.0%) unknown words in the testing data (referred to as Experiment 2). This experiment was conducted to investigate the performance of the method when unknown words exist. In the second case, only the words from Set 1 were used to create the dictionary, resulting in a situation where unknown words existed in the training data (referred to as Experiment 3). The top part of Table 4 shows the proportions of Set 1 and Set 2, along with the sizes of the dictionaries and the numbers of unknown words in Set 2 and Set 3 (the testing data). Set 2 served as a learning model for unknown word detection 2 . When we segmented Set 2 using FMM and BMM, most of the unknown words were segmented into single characters (namely tag 'S'). Based on these tags and contexts, the SVM-based chunker was trained to change the tags into the correct answers. The last experiment (referred to as Experiment 4) was the opposite of Experiment 2; nothing was used to create the dictionary. All the words were considered to be unknown words. Only the characters were used as features during the classification phase, meaning that no information from FMM and BMM was available. Table 4 shows the results obtained in these experiments. Our method in fact worked quite well in solving both the segmentation ambiguity and unknown word detection problems. However, while the accuracy for unknown word detection improved, the performance in solving the ambiguity problem worsened. This is because the precision in unknown word detection was not one hundred percent. False unknown words caused the accuracy of known word segmentation to deteriorate. The highest recall rate that we could get for known words was 98.9% (as in model 80/0) and that for unknown words was 69.3% (as in model 80/0). However, the best overall segmentation result was achieved by dividing the training corpus in half (as in model 40/40), and the result was an F-measure of 95.1. This is the optimal point where a balance is found between detecting unknown words and at the same time maintaining accuracy in the segmentation of known words. Figure 3 shows the F-measure results for segmentation and recall results for unknown words and known words, when different proportions of the training corpus were used to create the dictionary. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 934, |
|
"end": 941, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1712, |
|
"end": 1719, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 2644, |
|
"end": 2652, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Accuracy in Solving the Unknown Word Problem", |
|
"sec_num": "4.1.2" |
|
}, |
|
{ |
|
"text": "As far as we know, there is no standard definition of Chinese word segmentation. A text can be segmented differently depending on the linguists who decide on the rules and also the purpose of segmentation. Therefore, it is always difficult to compare the results obtained with different methods as the data used is different. The First International Chinese Word Segmentation Bakeoff [Sproat and Emerson 2003] intended to evaluate the accuracy of different segmenters by standardizing the training and testing data. In their closed test, only the training data were used for training and no other material. Under this strict condition, it is possible to create a lexicon from the training data, but, of course, unknown words will exist in the testing data. We conducted an experiment using the bakeoff data. Since our system works only on two-byte coding, some ascii code in the data, especially numbers and letters, are converted to GB code or Big5 code prior to processing. The obtained distribution of the data is shown in Table 5 . The original dictionaries consisted of all the words extracted from the training data. Some of the unknown words automatically became known words after ascii code was converted to GB/Big5 code. The conversion step reduced the number of unknown words. For example, if the number \"\uff11\uff19\uff19\uff18\" written in GB code existed in the training data but it was written in ascii code as \"1998\" in the testing data, then it was treated as an unknown word at the first location. Following conversion, it became a known word. The experimental setup was similar to that in Experiment 3 above. In Experiment 3, based on our previous experiments, using half of the training corpus to create the dictionary generated the best F-measure result. Therefore, only about 50% (first half) of the training corpora were used to create the dictionaries 3 . As a result, the new dictionaries contained fewer entries than the original dictionaries. Table 5 shows the details for the setting. As observed in [Sproat and Emerson 2003 ], none of the participants of the bakeoff could get the best results for all four tracks. Therefore, it is quite difficult to compare accuracy across different methods. Our results are shown in Table 6 . Comparing with the bakeoff results, one can see that our results are not the best, but they are among the top three best results, as shown at the top of Figure 4 . During the bakeoff, only two participants took part in all four tracks in the closed test. We obtained better results than one of them [Asahara et al. 2003] , where a similar method was used to re-assign word boundaries. The difference is that words are first categorized into 5 or 10 classes (which are assumed to be equivalent to POS tags) using the Baum-Welch algorithm, and then the sentence is segmented into word sequences using a Hidden Markov Model-based segmenter. Finally, the same Support Vector Machine-based chunker is trained to correct the errors made by the segmenter. Our method which simply uses a forward and backward Maximum Matching algorithm, achieved better results than theirs when complicated statistics-based models were involved. On the other hand, compare to the results obtained by [Zhang et al. 2003 ], we only obtained better results for two datasets and worse results for the other two datasets. They used hierarchical Hidden Markov Models to segment and POS tag the text. 
Although it was a closed test, they used extra information, such as class-based segmentation and role-based tagging models [Zhang et al. 2002] , which gave better results for unknown word recognition. The bottom of Figure 4 shows the results of unknown word detection. Again, our method performed comparatively well in detecting unknown words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 384, |
|
"end": 409, |
|
"text": "[Sproat and Emerson 2003]", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 2007, |
|
"end": 2031, |
|
"text": "[Sproat and Emerson 2003", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 2536, |
|
"end": 2557, |
|
"text": "[Asahara et al. 2003]", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 3212, |
|
"end": 3230, |
|
"text": "[Zhang et al. 2003", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 3529, |
|
"end": 3548, |
|
"text": "[Zhang et al. 2002]", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1026, |
|
"end": 1033, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1949, |
|
"end": 1956, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 2227, |
|
"end": 2234, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 2390, |
|
"end": 2398, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 3621, |
|
"end": 3629, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment with SIGHAN Bakeoff Data", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Regarding Chinese word segmentation problem as character tagging problem has previously been seen in [Xue and Converse 2002] . The difference in our method is that we supply FMM and BMM outputs as a control for the final output decision. However, only words from half of the training corpus are controlled. Since false unknown words are the main cause of errors with known words, our method tries to maintain accuracy for known words while at the same time detecting new words. As Xue and Converse [2002] used a different corpus than ours, namely, the Penn Chinese Treebank, it is difficult to make a fair comparison. They also participated in the bakeoff for the HK and AS tracks only [Xue and Shen 2003 ]. They obtained segmentation F-measures of 91.6 and 95.9, respectively, while we achieved 93.7 and 95.9, which are quite comparable. They did a bit better in unknown word recall, achieving 67.0% and 72.9% recall rates, whereas ours were 65.5% and 69.0%. On the other hand, we obtained much better results in known word recall, 97.7% and 97.6%, compared to their recall rates of 93.6% and 96.6%. Usually a piece of text contains more known words than unknown words; therefore our method, which controls the outputs of known words, is a correct choice. Furthermore, our method can also detect unknown words with comparable results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 124, |
|
"text": "[Xue and Converse 2002]", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 481, |
|
"end": 504, |
|
"text": "Xue and Converse [2002]", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 686, |
|
"end": 704, |
|
"text": "[Xue and Shen 2003", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 4. Comparision of bakeoff results (overall F-measure and unknown word recall)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In conclusion, our results did not surpass the best results in the bakeoff for all datasets. However, our method is simpler. We only need a dictionary that can be created from a segmented corpus, FMM and BMM modules, and a classifier, without the use of human knowledge. We can get quite comparable results for both known words and unknown words. The results are worse when the training corpus is small and there exist a lot of unknown words, such as in CHTB testing data. Therefore, we still need to investigate the relationship between the size of the training corpora and the proportion of the corpora used to create the dictionaries in the training for solving ambiguity problems and performing unknown word detection. We are also looking into the possibility of designing an ideal model, where optimal results for known words, as in Experiment 2, and unknown words, as in Experiment 4, can be obtained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Figure 4. Comparision of bakeoff results (overall F-measure and unknown word recall)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our proposed method generated better results than the baseline models, namely, FMM and BMM. We achieved nearly 99% recall when unknown words did not exist. However, in the real world, unknown words always exist in texts, even if we use a very large dictionary. Therefore, we also embed a model to detect unknown words. Unfortunately, while the accuracy achieved in unknown word detection increases, the performance in solving the known word ambiguity problem declines. As shown by the experiments on the bakeoff data, our model works well only when the training corpus is large. In conclusion, while our model is suitable for solving the segmentation ambiguity problem, it can also perform unknown word detection at the same time. However we still need to find a balance that will enable us to solve these two problems optimally. We also need to research the relationship between the training corpus size and the best proportion of the corpus used to create the dictionary for training to solve the ambiguity problem and perform unknown word detection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5." |
|
}, |
|
{ |
|
"text": "Institute of Computational Linguistics, Peking University, http://www.icl.pku.edu.cn/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It is possible to create unknown word phenomena in a training corpus by collecting all the words from the corpus but dropping some words like compounds, proper names, numbers etc. However, since we assume that out target corpus is only a segmented corpus, without other information like POS tags, it is difficult to determine what words that should be dropped and be treated as unknown words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the size of the training data is too big for the AS dataset, we had difficulty training the SVM as the time required was extremely long. Therefore, we divided it into five classifiers and finally combined the results through simple voting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Thanks go to Mr. Kudo for his Support Vector Machine-based chunker tool, Yamcha. We also thank Peking University and SIGHAN for providing the corpora used in our experiments. Finally, we thank the reviewers for their invaluable and insightful comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Combining Segmenter and Chunker for Chinese Word Segmentation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Asahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Goh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of Second SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "144--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asahara, M., C.L. Goh, X.J. Wang and Y. Matsumoto, \"Combining Segmenter and Chunker for Chinese Word Segmentation,\" In Proceedings of Second SIGHAN Workshop on Chinese Language Processing, 2003, pp. 144-147.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Unknown Word Detection for Chinese By a Corpus-based Learning Method", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Bai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of ROCLING X", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "159--174", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, K.J. and M.H. Bai, \"Unknown Word Detection for Chinese By a Corpus-based Learning Method,\" In Proceedings of ROCLING X, 1997, pp. 159-174.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Unknown Word Extraction for Chinese Documents", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of COLING 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "169--175", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, K.J. and W.Y. Ma, \"Unknown Word Extraction for Chinese Documents,\" In Proceedings of COLING 2002, 2002, pp. 169-175.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "An Integrated Approach for Chinese Word Segmentation", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Luke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of PACLIC 17", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--87", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fu, G.H. and K.K. Luke, \"An Integrated Approach for Chinese Word Segmentation,\" In Proceedings of PACLIC 17, 2003, pp. 80-87.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Unsupervised Chinese Word Segmentation and Unknown Word Identification", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of NLPRS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "32--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fu, G.H. and X.L. Wang, \"Unsupervised Chinese Word Segmentation and Unknown Word Identification,\" In Proceedings of NLPRS, 1999, pp. 32-37.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Segmentation Problem in Chinese Processing", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Applied Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "72--78", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, C.N., \"Segmentation Problem in Chinese Processing,\" Applied Linguistics, 1, 1997, pp. 72-78.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Chunking with Support Vector Machines", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kudo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "192--199", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kudo, T. and Y. Matsumoto, \"Chunking with Support Vector Machines,\" In Proceedings of NAACL, 2001, pp. 192-199.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Unknown Word and Phrase Extraction Using a Phrase-Like-Unit-Based Likelihood Ratio", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceeding of ICCPOL '99", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lai, Y.S. and C.H. Wu, \"Unknown Word and Phrase Extraction Using a Phrase-Like-Unit-Based Likelihood Ratio,\" In Proceeding of ICCPOL '99, 1999, pp. 5-9.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Unsupervised Training for Overlapping Ambiguity Resolution in Chinese Word Segmentation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of Second SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li, M., J.F. Gao, C.N. Huang and J.F. Li, \"Unsupervised Training for Overlapping Ambiguity Resolution in Chinese Word Segmentation,\" In Proceedings of Second SIGHAN Workshop on Chinese Language Processing, 2003, pp. 1-7.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Covering Ambiguity Resolution in Chinese Word Segmentation Based on Contextual Information", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Tsou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of COLING 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "598--604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luo, X., M.S. Sun and B. K. Tsou, \"Covering Ambiguity Resolution in Chinese Word Segmentation Based on Contextual Information,\" In Proceedings of COLING 2002, 2002, pp. 598-604.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A Bottom-up Merging Algorithm for Chinese Unknown Word Extraction", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of Second SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "31--38", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ma, W.Y. and K.J. Chen, \"A Bottom-up Merging Algorithm for Chinese Unknown Word Extraction,\" In Proceedings of Second SIGHAN Workshop on Chinese Language Processing, 2003, pages 31-38.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Unknown Word Detection and Segmentation of Chinese Using Statistical and Heuristic Knowledge", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Nie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-L", |
|
"middle": [], |
|
"last": "Hannan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Jin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Communications of COLIPS", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "47--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nie, J.Y., M.-L. Hannan and W.Y. Jin, \"Unknown Word Detection and Segmentation of Chinese Using Statistical and Heuristic Knowledge,\" Communications of COLIPS, 5, 1995, pp. 47-57.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Representing Text Chunks", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Sang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": ".-T", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Veenstra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of EACL '99", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--179", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sang, E. F.-T.K. and J. Veenstra, \"Representing Text Chunks,\" In Proceedings of EACL '99, 1999, pp. 173-179.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The application & implementation of local statistics in Chinese unknown word identification", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Communications of COLIPS", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "119--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shen, D.Y., M.S. Sun, and C.N. Huang, \"The application & implementation of local statistics in Chinese unknown word identification,\" Communications of COLIPS, 8(1), 1998, pp. 119-128.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "The First International Chinese Word Segmentation Bakeoff", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sproat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Emerson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of Second SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "133--143", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sproat, R. and T. Emerson, \"The First International Chinese Word Segmentation Bakeoff,\" In Proceedings of Second SIGHAN Workshop on Chinese Language Processing, 2003, pp. 133-143.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Named Entity Extraction Based on A Maximum Entropy Model and Transformational Rules", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Uchimoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Murata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ozaku", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Isahara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Processing of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "326--335", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Uchimoto, K., Q. Ma, M. Murata, H. Ozaku and H. Isahara, \"Named Entity Extraction Based on A Maximum Entropy Model and Transformational Rules,\" In Processing of the ACL 2000, 2000, pp. 326-335.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The Nature of Statistical Learning Theory", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vapnik, V. N., The Nature of Statistical Learning Theory, Springer, 1995.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Combining Classifiers for Chinese Word Segmentation", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Converse", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of First SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xue, N.W. and S. P. Converse, \"Combining Classifiers for Chinese Word Segmentation,\" In Proceedings of First SIGHAN Workshop on Chinese Language Processing, 2002, pp. 57-63.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Chinese Word Segmentation as LMR Tagging", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Shen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of Second SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "176--179", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xue, N.W. and L.B. Shen, \"Chinese Word Segmentation as LMR Tagging,\" In Proceedings of Second SIGHAN Workshop on Chinese Language Processing, 2003, pp. 176-179.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Automatic Recognition of Chinese Unknown Words Based on Roles Tagging", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Cheng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of First SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "71--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, H.P., Q. Liu, H. Zhang and X.Q. Cheng, \"Automatic Recognition of Chinese Unknown Words Based on Roles Tagging,\" In Proceedings of First SIGHAN Workshop on Chinese Language Processing, 2002, pp. 71-77.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "HHMM-based Chinese Lexical Analyzer ICTCLAS", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of Second SIGHAN Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "184--187", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhang, H.P., H.K. Yu, D.Y. Xiong and Q. Liu, \"HHMM-based Chinese Lexical Analyzer ICTCLAS,\" In Proceedings of Second SIGHAN Workshop on Chinese Language Processing, 2003, pp. 184-187.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Detection of Unknown Chinese Words Using a Hybrid Approach", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Lua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Computer Processing of Oriental Language", |
|
"volume": "11", |
|
"issue": "1", |
|
"pages": "63--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou, G.D. and K.T. Lua, \"Detection of Unknown Chinese Words Using a Hybrid Approach,\" Computer Processing of Oriental Language, 11(1), 1997, pp. 63-75.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "1a)\u80e1/\u4e16\u5e86/\u4e00\u5bb6/\u4e09/\u53e3/ Hu/ Shiqing/ whole family/ three/ member (All three members of Hu Shiqing's family) (1b)\u5728/\u5df4\uf989/\u4e00/\u5bb6/\u6742\u5fd7/\u4e0a/ in/ Paris/ one/ company/ magazine/ at/ (At one magazine company in Paris)", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "An illustration of classification process applied to \"At the New Year gathering party\"", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"text": "Let, O f = Output of FMM, O b = Output of BMM, Ans = Correct answer, Out = Output from our system.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"text": "Accuracy of segmentation (F-measure), OOV (Recall) and IV (Recall)", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"2\">Tag Description</td></tr><tr><td>S</td><td>one-character word</td></tr><tr><td>B</td><td>first character in a multi-character word</td></tr><tr><td>I</td><td>intermediate character in a multi-character word (for words longer than two characters)</td></tr><tr><td>E</td><td>last character in a multi-character word</td></tr></table>" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>Category</td><td>Conditions</td><td>No. of Char.</td><td>Percentage</td></tr><tr><td colspan=\"2\">Allcorrect O f = O b =Ans =Out</td><td>330220</td><td>96.35%</td></tr><tr><td>Correct</td><td>O f \u2260 O b and Ans = Out</td><td>7663</td><td>2.23%</td></tr><tr><td>Wrong</td><td>O f \u2260 O b and Ans \u2260 Out</td><td>658</td><td>0.19%</td></tr><tr><td>Match</td><td>O f = O b and O f \u2260 Ans and Ans =Out</td><td>1876</td><td>0.55%</td></tr><tr><td colspan=\"2\">Mismatch O f = O b and O f \u2260 Ans and Ans \u2260 Out</td><td>1738</td><td>0.51%</td></tr><tr><td>Allwrong</td><td>O f = O b = Ans and Ans \u2260 Out</td><td>571</td><td>0.17%</td></tr><tr><td>Total</td><td/><td>342726</td><td>100.00%</td></tr></table>" |
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td/><td>FMM</td><td>BMM</td><td>SVM</td><td>FMM</td><td>BMM</td><td>FMM+BMM+SVM</td></tr><tr><td/><td/><td/><td>(char. only)</td><td>+SVM</td><td>+SVM</td><td>(=Experiment 1)</td></tr><tr><td>Recall</td><td>96.9</td><td>97.1</td><td>94.0</td><td>98.7</td><td>98.7</td><td>98.9</td></tr><tr><td>Precision</td><td>97.7</td><td>97.9</td><td>94.3</td><td>98.9</td><td>99.0</td><td>99.1</td></tr><tr><td>F-measure</td><td>97.3</td><td>97.5</td><td>94.1</td><td>98.8</td><td>98.9</td><td>99.0</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"3\">Experiment 1 Experiment 2</td><td colspan=\"2\">Experiment 3</td><td/><td>Experiment 4</td></tr><tr><td>Set 1(%)/</td><td/><td>80/0</td><td>60/20</td><td>40/40</td><td>20/60</td><td>0/80</td></tr><tr><td>Set 2(%)</td><td/><td/><td/><td/><td/><td/></tr><tr><td># of words in Dict.</td><td>62,030</td><td colspan=\"4\">49,433 41,582 33,355 22,363</td><td>0</td></tr><tr><td># of unk-words in</td><td>0</td><td colspan=\"4\">0 10,927 25,297 53,353</td><td>All</td></tr><tr><td>Set 2</td><td/><td/><td/><td/><td/><td/></tr><tr><td># of unk-words in</td><td>0</td><td>8,346</td><td colspan=\"3\">9,768 11,924 17,115</td><td>All</td></tr><tr><td>Test(Set 3)</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Recall</td><td>98.9</td><td>95.3</td><td>95.8</td><td>95.7</td><td>95.2</td><td>94.0</td></tr><tr><td>Precision</td><td>99.1</td><td>90.7</td><td>93.5</td><td>94.5</td><td>94.7</td><td>94.3</td></tr><tr><td>F-measure</td><td>99.0</td><td>92.9</td><td>94.7</td><td>95.1</td><td>94.9</td><td>94.1</td></tr><tr><td>OOV(recall)</td><td>-</td><td>8.0</td><td>41.2</td><td>54.9</td><td>63.3</td><td>69.3</td></tr><tr><td>IV(recall)</td><td>98.9</td><td>98.9</td><td>98.1</td><td>97.4</td><td>96.5</td><td>95.0</td></tr><tr><td>The bottom part of</td><td/><td/><td/><td/><td/><td/></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td colspan=\"2\">. Bakeoff data</td><td/><td/><td/><td/></tr><tr><td>Corpus</td><td># of train words</td><td># of test words</td><td>Unknown word rate</td><td>Size of original dictionary</td><td>Size of dictionary used</td></tr><tr><td>PKU</td><td>1.1M</td><td>17,194</td><td>6.9%</td><td>55,226</td><td>36,830</td></tr><tr><td>CHTB</td><td>250K</td><td>39,922</td><td>18.1%</td><td>19,730</td><td>12,274</td></tr><tr><td>AS</td><td>5.8M</td><td>11,985</td><td>2.2%</td><td>146,226</td><td>100,161</td></tr><tr><td>HK</td><td>240K</td><td>34,955</td><td>7.1%</td><td>23,747</td><td>17,207</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>Corpus</td><td>Recall</td><td>Precision</td><td>F-measure</td><td>Recall unknown</td><td>Recall known</td></tr><tr><td>PKU</td><td>95.5</td><td>94.1</td><td>94.7</td><td>71.0</td><td>97.3</td></tr><tr><td>CHTB</td><td>86.0</td><td>83.5</td><td>84.7</td><td>57.7</td><td>92.2</td></tr><tr><td>HK</td><td>95.4</td><td>92.1</td><td>93.7</td><td>65.5</td><td>97.7</td></tr><tr><td>AS</td><td>97.0</td><td>94.8</td><td>95.9</td><td>69.0</td><td>97.6</td></tr></table>" |
|
} |
|
} |
|
} |
|
} |