{
"paper_id": "W06-0128",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:00:40.117564Z"
},
"title": "Chinese Word Segmentation using Various Dictionaries",
"authors": [
{
"first": "Guo-Wei",
"middle": [],
"last": "Bian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Huafan University",
"location": {
"country": "Taiwan, R.O.C"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most of the Chinese word segmentation systems utilizes monolingual dictionary and are used for monolingual processing. For the tasks of machine translation (MT) and cross-language information retrieval (CLIR), another translation dictionary may be used to transfer the words of documents from the source languages to target languages. The inconsistencies resulting from the two types of dictionaries (segmentation dictionary and transfer dictionary) may produce some problems for MT and CLIR. This paper shows the effectiveness of the external resources (bilingual dictionary and word list) for Chinese word segmentations.",
"pdf_parse": {
"paper_id": "W06-0128",
"_pdf_hash": "",
"abstract": [
{
"text": "Most of the Chinese word segmentation systems utilizes monolingual dictionary and are used for monolingual processing. For the tasks of machine translation (MT) and cross-language information retrieval (CLIR), another translation dictionary may be used to transfer the words of documents from the source languages to target languages. The inconsistencies resulting from the two types of dictionaries (segmentation dictionary and transfer dictionary) may produce some problems for MT and CLIR. This paper shows the effectiveness of the external resources (bilingual dictionary and word list) for Chinese word segmentations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most of the Chinese word segmentations are used for monolingual processing. In general, the word segmentation program utilizes the word entries, part-of-speech (POS) information (Chen and Liu, 1992) in a monolingual dictionary, segmentation rules (Palmer, 1997) , and some statistical information (Sproat, et al., 1994) . For the tasks of machine translation (MT) (Bian and Chen, 1998) and cross-language information retrieval (CLIR) (Bian and Chen, 2000) , another translation dictionary may be used to transfer the words of documents from the source languages to target languages. Because of the inconsistencies resulting from the two types of dictionaries (segmentation dictionary and transfer dictionary), this approach has the problems that some segmented words cannot be found in the transfer dictionary.",
"cite_spans": [
{
"start": 178,
"end": 198,
"text": "(Chen and Liu, 1992)",
"ref_id": "BIBREF2"
},
{
"start": 247,
"end": 261,
"text": "(Palmer, 1997)",
"ref_id": "BIBREF3"
},
{
"start": 297,
"end": 319,
"text": "(Sproat, et al., 1994)",
"ref_id": "BIBREF4"
},
{
"start": 364,
"end": 385,
"text": "(Bian and Chen, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 434,
"end": 455,
"text": "(Bian and Chen, 2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on the effectiveness of the Chinese word segmentation using different dictionaries. Four different dictionaries (or word lists) and two different testing collections (testing data) are used to evaluate the results of the Chinese word segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The segmentation system used only the various dictionaries in this design. In this paper, the other possible resources (POS, segmentation rules, word segmentation guide, and statistical information) are ignored to test the average performance between different testing collections specially followed the different segmented guidelines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Word Segmentation System",
"sec_num": "2"
},
{
"text": "The longest-matching method is adopted in this Chinese segmentation system. The segmentation processing searches for a dictionary entry corresponding to the longest sequence of Chinese characters from left to right. The system provided the approximate matching to search a substring of the input with the entry in the dictionary if no total matching is found. For example, the system will segment the input \" \" as \" \" which matched the term with the entry \" \" in dictionary if no entry \" \" found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Word Segmentation System",
"sec_num": "2"
},
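A minimal sketch of the left-to-right longest-matching (maximum-matching) strategy described above. The dictionary contents, the maximum candidate length, and the single-character fallback are illustrative assumptions; the paper's approximate substring matching is simplified here to that fallback.

```python
# Minimal sketch of left-to-right longest matching (maximum matching),
# assuming a set-of-strings dictionary and a cap on candidate word length.

def longest_match_segment(text, dictionary, max_len=8):
    """Greedily emit the longest dictionary word at each position;
    unmatched characters fall back to single-character tokens."""
    words = []
    i = 0
    while i < len(text):
        match = None
        # Try the longest candidate first and shrink until a match is found.
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in dictionary:
                match = text[i:j]
                break
        if match is None:
            # Simplified fallback; stands in for the paper's approximate
            # substring matching, which is not fully specified.
            match = text[i]
        words.append(match)
        i += len(match)
    return words

# Hypothetical usage with a toy dictionary:
toy_dict = {"中文", "分詞", "系統"}
print(longest_match_segment("中文分詞系統", toy_dict))  # ['中文', '分詞', '系統']
```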
{
"text": "The word segmentation are evaluated using different dictionaries (or word lists) and different testing collections (testing data). There are four dictionaries are used: the first one is converted from an English-Chinese bilingual dictionary, and the other three are extracted from the training corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Various Dictionaries",
"sec_num": "2.1"
},
{
"text": "The original English-Chinese dictionary (Bian and Chen, 1998) , which containing about 67,000 English word entries, is converted to a new Chinese-English dictionary (called CEDIC later). There are 125,719 Chinese word entries in this CEDIC.",
"cite_spans": [
{
"start": 40,
"end": 61,
"text": "(Bian and Chen, 1998)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Various Dictionaries",
"sec_num": "2.1"
},
{
"text": "The terms in the various training corpora (the Sinica Corpus and the City University Corpus) are extracted to build the different word lists as the segmentation dictionaries (called CKIP and CityU later). The tokens starting with the special characters or punctuation marks are ignored. The following shows some examples: , , , , , , , , \u2500, \u2500 , , , \u25cb\u25cb\u25cb, \u2026, , , , , , , .com, Table 1 lists the number of tokens (#tokens), the number of ignored tokens (#ignored), the number of words (#words), and the unique words (#unique) for each dictionaries. There are 140,971 unique words are extracted from the training collection of Sinica Corpus, and 75,433 respected to the training set of the City University Corpus. These two dictionaries are combined to another dictionary which containing 174,398 unique words. ",
"cite_spans": [],
"ref_spans": [
{
"start": 380,
"end": 387,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Various Dictionaries",
"sec_num": "2.1"
},
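An illustrative sketch of the word-list extraction described above: read a pre-segmented, whitespace-delimited training corpus, skip tokens that start with punctuation or special characters, and collect the unique words. The file handling and the exact "special character" test are assumptions, not the authors' code.

```python
# Build a word list from a pre-segmented (whitespace-delimited) corpus,
# skipping tokens whose first character is punctuation or a symbol.
import unicodedata

def build_word_list(corpus_path):
    words = set()
    n_tokens = n_ignored = 0
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            for token in line.split():
                n_tokens += 1
                # Unicode categories starting with "P" (punctuation) or
                # "S" (symbols) mark tokens to ignore, e.g. "…" or "─".
                if unicodedata.category(token[0])[0] in ("P", "S"):
                    n_ignored += 1
                    continue
                words.add(token)
    return words, n_tokens, n_ignored

# Combining two corpus-derived word lists (as for CKIP + CityU):
#   combined = ckip_words | cityu_words
```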
{
"text": "To evaluate the results of Chinese word segmentations, we implement 8 experiments (runs) using the 4 different dictionaries (CEDIC, CK, CT, and CK+CT) mentioned in previous section. Two test collections (the Sinica Corpus and the City University Corpus) are used to measure the precision, recall, and an evenly-weighted Fmeasure for the Chinese words segmentations. Table 2 shows the F-measure of the experimental results, and the Figure 1 illustrates the comparisons of the segmentation performances. The symbol (*) indicates that the run is a closed test, which only uses the training material from the training data for the particular corpus. We can find that the larger dictionary (CK+CT) produces better segmentation results even the word lists are combined from the different resources (corpora) and followed the different guidelines of word segmentations. ",
"cite_spans": [],
"ref_spans": [
{
"start": 366,
"end": 373,
"text": "Table 2",
"ref_id": null
},
{
"start": 431,
"end": 439,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
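A short sketch of the word-level precision, recall, and evenly weighted F-measure used to score a segmentation against the gold standard. Comparing words as (start, end) character spans is an assumed (though standard) convention; the paper does not spell out its scorer.

```python
# Word-level precision/recall/F-measure over (start, end) character spans,
# so a word only counts as correct if it occupies the same positions.

def to_spans(words):
    spans, pos = set(), 0
    for w in words:
        spans.add((pos, pos + len(w)))
        pos += len(w)
    return spans

def prf(gold_words, system_words):
    gold, system = to_spans(gold_words), to_spans(system_words)
    correct = len(gold & system)
    p = correct / len(system) if system else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0  # evenly weighted F-measure
    return p, r, f

# Hypothetical example: the system merges two gold words into one,
# so only the first word is counted as correct.
print(prf(["研究", "生命", "起源"], ["研究", "生命起源"]))  # (0.5, 0.333..., 0.4)
```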
{
"text": "The results file for word segmentation is required to appear with one line for each sentence/line in the test file with words and punctuation separated by whitespace. Our system makes some mistakes to produce no whitespace before English terms and Arabic numbers, and produce no whitespace after Chinese punctuation marks. This formatting problem has made many adjacent segmented words to be evaluated as errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Format Error of Result File",
"sec_num": "3.1.1"
},
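A hedged sketch of a post-processing repair for the formatting problem described above: insert the missing whitespace before Latin letters and Arabic numerals that follow a CJK character, and after Chinese punctuation marks. The character ranges and punctuation set are illustrative assumptions, not the authors' actual fix.

```python
# Re-insert whitespace before Latin/digit runs and after Chinese punctuation.
import re

CJK = r"\u4e00-\u9fff"        # basic CJK Unified Ideographs range
CH_PUNCT = "。，、；：！？"      # assumed set of Chinese punctuation marks

def fix_spacing(line):
    # Space between a CJK character and a following letter or digit.
    line = re.sub(rf"([{CJK}])([A-Za-z0-9])", r"\1 \2", line)
    # Space after a Chinese punctuation mark when text follows directly.
    line = re.sub(rf"([{CH_PUNCT}])(\S)", r"\1 \2", line)
    return line

print(fix_spacing("共有9人，其中2人缺席。"))  # 共有 9人， 其中 2人缺席。
```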
{
"text": "A sentence with such errors is listed below",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Format Error of Result File",
"sec_num": "3.1.1"
},
{
"text": "(Our Answer) \ufa00 \uf96d \ufa08 (Standard) \ufa00 9 \uf96d \ufa08 9+2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Format Error of Result File",
"sec_num": "3.1.1"
},
{
"text": "The standard answer of the testing collection (CityU) of the City University Corpus has 7,512 sentences and 220,147 words. The total number of English terms, Arabic numbers, and Chinese punctuation marks is 37,644. Such formatting problem makes the error rate of about 30% for the City University Corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Format Error of Result File",
"sec_num": "3.1.1"
},
{
"text": "In our experiments, there are different word lists extracted from the different training corpora. Some errors are produced because of the differ-ent results of word segmentations in the training corpora according to the different guidelines. Table 3 shows some different results. The first column (CKIP) is the standard answer of the testing collection of Sinica Corpus, and the second column (HFUIM) is our answer. The third and fourth columns are the words with their frequencies appeared in the training collections of Sinica Corpus and City University Corpus. For example, our system produces the word \"",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 249,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Different Viewpoints of Segmentations",
"sec_num": "3.1.2"
},
{
"text": "\", but the standard answer of Sinica Corpus is \" \" and \" \". However, the word \" \" appear 61 times in the training collection of City University Corpus. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Viewpoints of Segmentations",
"sec_num": "3.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CKIP HFUIM CKIP-Training CityU-Training \uf9f4 \uf9f4 \uf9f4 (0) (1839)",
"eq_num": "(366)"
}
],
"section": "Different Viewpoints of Segmentations",
"sec_num": "3.1.2"
},
{
"text": "Some errors of word segmentations are reported because of the inconsistency of word segmentations. The following shows such a problem. For example, the word \" \" appears 317 times in the training data, but it has been treated as two terms (\" \" and \" \") 19 times in the golden standard of the testing data. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inconsistency of Word Segmentation",
"sec_num": "3.1.3"
},
{
"text": "In this paper, we discuss the effectiveness of the Chinese word segmentation using various dictionaries. In the experimental results, we can find that the larger dictionary will produce better segmentation results even the word lists are combined from the different resources (corpora) and followed the different guidelines of word segmentations. Some results show that the external resource (e.g., the bilingual dictionary) can perform the task of Chinese word segmentation better than the monolingual dictionary which extracted from the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Cross Language Information Access to Multilingual Collections on the Internet",
"authors": [
{
"first": "G",
"middle": [
"W"
],
"last": "Bian",
"suffix": ""
},
{
"first": "H",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of American Society for Information Science & Technology (JASIST), Special Issue on Digital Libraries",
"volume": "51",
"issue": "3",
"pages": "281--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bian, G.W. and Chen, H.H. (2000). \"Cross Language Information Access to Multilingual Collections on the Internet.\" Journal of American Society for In- formation Science & Technology (JASIST), Special Issue on Digital Libraries, 51(3), 2000, 281-296.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Integrating Query Translation and Document Translation in a Cross-Language Information Retrieval System",
"authors": [
{
"first": "G",
"middle": [
"W"
],
"last": "Bian",
"suffix": ""
},
{
"first": "H",
"middle": [
"H"
],
"last": "Chen",
"suffix": ""
}
],
"year": 1998,
"venue": "Machine Translation and the Information Soap (AMTA '98)",
"volume": "1529",
"issue": "",
"pages": "250--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bian, G.W. and Chen, H.H. (1998). \"Integrating Query Translation and Document Translation in a Cross-Language Information Retrieval System.\" Machine Translation and the Information Soap (AMTA '98), D. Farwell, L Gerber, and E. Hovy (Eds.), Lecture Notes in Computer Science, Vol. 1529, Springer-Verlag, pp. 250-265, 1998",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "word identification for Mandarin Chinese sentences",
"authors": [
{
"first": "K",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 14th conference on Computational linguistics",
"volume": "",
"issue": "",
"pages": "101--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, K.J and Liu, S.H (1992), \"word identification for Mandarin Chinese sentences\" Proceedings of the 14th conference on Computational linguistics, pp. 101-107, France, 1992",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A trainable rule-based algorithm for word segmentation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceeding of ACL'97",
"volume": "",
"issue": "",
"pages": "321--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palmer, D. (1997), \"A trainable rule-based algorithm for word segmentation\", Proceeding of ACL'97, 321-328, 1997.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Stochastic Finite-State Word-Segmentation Algorithm for Chinese",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceeding of 32 nd Annual Meeting of ACL",
"volume": "",
"issue": "",
"pages": "66--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sproat, R., et al. (1994) \"A Stochastic Finite-State Word-Segmentation Algorithm for Chinese\", Pro- ceeding of 32 nd Annual Meeting of ACL, New Mex- ico, pp. 66-73.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The F-measure results of segmentation performances using various dictionaries (*: closed test) The comparison of segmentation performances using various dictionaries (*: close test)",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF3": {
"text": "The Different Segmentation Results",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}