{
"paper_id": "W06-0125",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:02:03.882323Z"
},
"title": "Chinese word segmentation and named entity recognition based on a context-dependent Mutual Information Independence Model",
"authors": [
{
"first": "Zhang",
"middle": [],
"last": "Min",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhou",
"middle": [],
"last": "Guodong",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yang",
"middle": [],
"last": "Lingpeng",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ji",
"middle": [],
"last": "Donghong",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper briefly describes our system in the third SIGHAN bakeoff on Chinese word segmentation and named entity recognition. This is done via a word chunking strategy using a context-dependent Mutual Information Independence Model. Evaluation shows that our system performs well on all the word segmentation closed tracks and achieves very good scalability across different corpora. It also shows that applying the same strategy to named entity recognition yields promising performance, given that we spent less than three days in total extending the word segmentation system to incorporate named entity recognition, including training and formal testing.",
"pdf_parse": {
"paper_id": "W06-0125",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper briefly describes our system in the third SIGHAN bakeoff on Chinese word segmentation and named entity recognition. This is done via a word chunking strategy using a context-dependent Mutual Information Independence Model. Evaluation shows that our system performs well on all the word segmentation closed tracks and achieves very good scalability across different corpora. It also shows that applying the same strategy to named entity recognition yields promising performance, given that we spent less than three days in total extending the word segmentation system to incorporate named entity recognition, including training and formal testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word segmentation and named entity recognition aim at recognizing the implicit word boundaries and proper nouns, such as names of persons, locations and organizations, respectively, in plain Chinese text, and are critical in Chinese information processing. However, two problems arise when developing a practical word segmentation or named entity recognition system for large open applications: the resolution of ambiguous segmentations and the identification of OOV words or OOV entity names. To resolve the above problems, we developed a purely statistical Chinese word segmentation system and a named entity recognition system using a three-stage strategy under a unified framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The first stage is known word segmentation, which aims to segment an input sequence of Chinese characters into a sequence of known words (called word atoms in this paper). Here, all Chinese characters are regarded as known words, and a word unigram model is applied to perform this task for efficiency. Also, for convenience, all English characters are transformed into their Chinese counterparts in preprocessing and recovered just before outputting the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second stage is word and/or named entity identification and classification on the sequence of word atoms from the first stage. Here, a word chunking strategy is applied to detect words and/or entity names by chunking one or more word atoms together according to the word formation patterns of the word atoms and, optionally, entity name formation patterns for named entity recognition. The problem of word segmentation and/or entity name recognition is re-cast as chunking one or more word atoms together to form a new word and/or entity name, and a discriminative Markov model, named the Mutual Information Independence Model (MIIM), is adopted in chunking. In addition, an SVM plus sigmoid model is applied to integrate various types of contexts and implement the discriminative modeling in MIIM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The third step is post processing, which tries to further resolve ambiguous segmentations and unknown word segmentation. Due to time limits, this is only done in Chinese word segmentation; no post processing is done on Chinese named entity recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: Section 2 describes the context-dependent Mutual Information Independence Model in detail, while the purely statistical post-processing in Chinese word segmentation is presented in Section 3. Finally, we report the results of our system in Chinese word segmentation and named entity recognition in Section 4 and conclude our work in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we use a discriminative Markov model, called Mutual Information Independence Model (MIIM) as proposed by Zhou et al (2002), for Chinese word segmentation and named entity recognition. MIIM is derived from a conditional probability model. Given an observation sequence ",
"cite_spans": [
{
"start": 120,
"end": 137,
"text": "Zhou et al (2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": "log P(S_1^n | O_1^n) = \u2211_{i=2}^{n} PMI(s_i, S_1^{i-1}) + \u2211_{i=1}^{n} log P(s_i | O_1^n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": "\u2211_{i=2}^{n} PMI(s_i, S_1^{i-1})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": ", which can be computed by applying n-gram modeling, and the output model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": "\u2211_{i=1}^{n} log P(s_i | O_1^n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": ", which can be estimated by any probability-based classifier, such as a maximum entropy classifier or a SVM plus sigmoid classifier (Zhou et al 2006) . In this competition, the SVM plus sigmoid classifier is used in Chinese word segmentation while a simple backoff approach as described in Zhou et al (2002) is used in named entity recognition.",
"cite_spans": [
{
"start": 132,
"end": 149,
"text": "(Zhou et al 2006)",
"ref_id": null
},
{
"start": 290,
"end": 307,
"text": "Zhou et al (2002)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": "Here, a variant of the Viterbi algorithm (Viterbi 1967) in decoding the standard Hidden Markov Model (HMM) (Rabiner 1989 ) is implemented to find the most likely state sequence by replacing the state transition model and the output model of the standard HMM with the state transition model and the output model of the MIIM, respectively. The above MIIM has been successfully applied in many applications, such as text chunking (Zhou 2004), Chinese word segmentation (Zhou 2005), English named entity recognition in the newswire domain (Zhou et al 2002) and the biomedical domain (Zhou et al 2004; Zhou et al 2006). For Chinese word segmentation and named entity recognition by chunking, a word or an entity name is regarded as a chunk of one or more word atoms and we have: ",
"cite_spans": [
{
"start": 41,
"end": 55,
"text": "(Viterbi 1967)",
"ref_id": "BIBREF1"
},
{
"start": 107,
"end": 120,
"text": "(Rabiner 1989",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": "\u2022 o_i = <p_i, w_i>; w_i is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": "log P(S_1^n | O_1^n) = \u2211_{i=2}^{n} PMI(s_i, S_1^{i-1} | p_{i-1}, p_i) + \u2211_{i=1}^{n} log P(s_i | O_1^n) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutual Information Independence Model",
"sec_num": "2"
},
{
"text": "The third step is post processing, which tries to resolve ambiguous segmentations and false unknown words generated in the second step. Due to time limits, this is only done in Chinese word segmentation, i.e. no post processing is done on Chinese named entity recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post Processing in Word Segmentation",
"sec_num": null
},
{
"text": "A simple pattern-based method is employed to capture context information and correct the segmentation errors generated in the second step. The pattern is designed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post Processing in Word Segmentation",
"sec_num": null
},
{
"text": "<Ambiguous Entry (AE)> | <Left Context, Right Context> => <Proper Segmentation>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post Processing in Word Segmentation",
"sec_num": null
},
{
"text": "The ambiguous entry (AE) denotes an ambiguous segmentation or a force-generated unknown word. We use the 1st and 2nd words before the AE as the left context and the 1st and 2nd words after the AE as the right context. To reduce sparseness, we also use only the 1st left and right words as context. This means that two patterns are generated for the same context. All the patterns are automatically learned from the training corpus using the following algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post Processing in Word Segmentation",
"sec_num": null
},
{
"text": "LearningPatterns() // Input: training corpus // Output: patterns BEGIN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post Processing in Word Segmentation",
"sec_num": null
},
{
"text": "(1) Training a MIIM model using the training corpus (2) Using the MIIM model to segment the training corpus (3) Aligning the training corpus with the segmented training corpus (4) Extracting error segmentations (5) Generating disambiguation patterns using the left and right contexts (6) Removing conflicting entries if two patterns have the same left-hand side but different right-hand sides. END",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post Processing in Word Segmentation",
"sec_num": null
},
{
"text": "We first develop our system using the PKU data released in the Second SIGHAN Bakeoff last year. Then, we train and evaluate it on the Third SIGHAN Bakeoff corpora without any fine-tuning. We only carry out our evaluation on the closed tracks, which means that we do not use any additional knowledge beyond the training corpus. Precision (P), Recall (R), F-measure (F), OOV Recall and IV Recall are adopted to measure the performance of word segmentation. Accuracy (A), Precision (P), Recall (R) and F-measure (F) are adopted to measure the performance of NER. Tables 1, 2 and 3 on the next page report the performance of our algorithm on the different corpora in the SIGHAN Bakeoff 02 and Bakeoff 03, respectively. For the performance of other systems, please refer to http://sighan.cs.uchicago.edu/bakeoff2005/data/results.php.htm for the Chinese bakeoff 2005 and http://sighan.cs.uchicago.edu/bakeoff2006/longstats.html for the Chinese bakeoff 2006.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "Comparison against other systems shows that our system achieves state-of-the-art performance on all Chinese word segmentation closed tracks and shows good scalability across different corpora. The small performance gap should be possible to close by replacing the word unigram model with the more powerful word bigram model. Although our NER system, developed under the same unified framework as Chinese word segmentation in a very limited time of less than three days, does not achieve the state of the art, its performance is quite promising and provides a good platform for further improvement. Error analysis reveals that OOV handling is still an open problem that is far from resolved. In addition, different corpora define different segmentation principles, which stresses OOV handling in the extreme. Therefore a system trained on one genre usually performs worse when faced with text from a different register.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "This paper proposes a unified, purely statistical three-stage strategy for Chinese word segmentation and named entity recognition, based on a context-dependent Mutual Information Independence Model. Evaluation shows that our system achieves state-of-the-art segmentation performance and provides a good platform for further performance improvement of Chinese NER. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition",
"authors": [
{
"first": "L",
"middle": [],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "IEEE",
"volume": "77",
"issue": "2",
"pages": "257--285",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner L. 1989. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. IEEE 77(2), pages 257-285.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Viterbi",
"suffix": ""
}
],
"year": 1967,
"venue": "IEEE Transactions on Information Theory, IT",
"volume": "13",
"issue": "2",
"pages": "260--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viterbi A.J. 1967. Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm. IEEE Transactions on Information Theory, IT 13(2), 260-269.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Named Entity Recognition Using a HMM-based Chunk Tagger",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Guodong",
"suffix": ""
},
{
"first": "Su",
"middle": [],
"last": "Jian",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "473--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou GuoDong and Su Jian. 2002. Named Entity Recognition Using a HMM-based Chunk Tagger, Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL'2002). Philadelphia. July 2002. pp. 473-480.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"text": "We call the above model the Mutual Information Independence Model due to its Pair-wise Mutual Information (PMI) assumption (Zhou et al 2002). The above model consists of two sub-models: the state transition model",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Zhou GuoDong, Zhang Jie, Su Jian, Shen Dan and Tan ChewLim. 2004. Recognizing Names in Biomedical Texts: a Machine Learning Approach. Bioinformatics. 20(7): 1178-1190. DOI: 10.1093/bioinformatics/bth060. 2004. Performance of Word Segmentation on Closed Tracks in the SIGHAN Bakeoff 02",
"num": null,
"content": "<table><tr><td/><td/><td/><td colspan=\"3\">Zhou GuoDong. 2005. A chunking strategy</td></tr><tr><td/><td/><td/><td colspan=\"3\">towards unknown word detection in Chinese</td></tr><tr><td/><td/><td/><td colspan=\"3\">word segmentation. Proceedings of 2nd</td></tr><tr><td/><td/><td/><td colspan=\"3\">International Joint Conference on Natural</td></tr><tr><td/><td/><td/><td colspan=\"3\">Language Processing (IJCNLP'2005), Lecture</td></tr><tr><td>ISSN: 1460-2059</td><td/><td/><td colspan=\"3\">Notes in Computer Science (LNCS 3651)</td></tr><tr><td colspan=\"3\">Zhou GuoDong. 2004. Discriminative hidden</td><td colspan=\"3\">Zhou GuoDong. 2006. Recognizing names in</td></tr><tr><td colspan=\"3\">Markov modeling with long state dependence</td><td colspan=\"3\">biomedical texts using Mutual Information</td></tr><tr><td colspan=\"3\">using a kNN ensemble. Proceedings of 20th</td><td colspan=\"3\">Independence Model and SVM plus Sigmoid.</td></tr><tr><td colspan=\"3\">International Conference on Computational</td><td colspan=\"3\">International Journal of Medical Informatics</td></tr><tr><td colspan=\"3\">Linguistics (COLING'2004). 23-27 Aug, 2004,</td><td colspan=\"3\">(Article in Press). 
ISSN 1386-5056</td></tr><tr><td>Geneva, Switzerland.</td><td/><td/><td/><td/><td/></tr><tr><td>Tables</td><td/><td/><td/><td/><td/></tr><tr><td>Task</td><td>P</td><td>R</td><td>F</td><td>OOV Recall</td><td>IV Recall</td></tr><tr><td>CityU</td><td>0.938</td><td>0.952</td><td>94.5</td><td>0.578</td><td>0.967</td></tr><tr><td>MSRA</td><td>0.952</td><td>0.962</td><td>95.7</td><td>0.51</td><td>0.98</td></tr><tr><td>CKIP</td><td>0.94</td><td>0.957</td><td>94.8</td><td>0.502</td><td>0.976</td></tr><tr><td>PKU</td><td>0.952</td><td>0.952</td><td>95.2</td><td>0.71</td><td>0.967</td></tr><tr><td>Task</td><td>P</td><td>R</td><td>F</td><td>OOV Recall</td><td>IV Recall</td></tr><tr><td>CityU</td><td>0.968</td><td>0.961</td><td>96.5</td><td>0.633</td><td>0.983</td></tr><tr><td>MSRA</td><td>0.961</td><td>0.953</td><td>95.7</td><td>0.499</td><td>0.977</td></tr><tr><td>CKIP</td><td>0.958</td><td>0.941</td><td>94.9</td><td>0.554</td><td>0.976</td></tr><tr><td>UPUC</td><td>0.936</td><td>0.917</td><td>92.6</td><td>0.617</td><td>0.966</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Performance of Word Segmentation on Closed Tracks in the SIGHAN Bakeoff 03",
"num": null,
"content": "<table><tr><td>Task</td><td>A</td><td>P</td><td>R</td><td>F</td></tr><tr><td>MSRA</td><td>0.9743</td><td>0.8150</td><td>0.7882</td><td>79.92</td></tr><tr><td>CityU</td><td>0.9725</td><td>0.8466</td><td>0.8061</td><td>82.59</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "Performance of NER on Closed Tracks in the SIGHAN Bakeoff 03",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}