{
"paper_id": "W06-0140",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:03:18.431687Z"
},
"title": "Chinese Named Entity Recognition with a Multi-Phase Model",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Junsheng",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "He",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Dai",
"middle": [],
"last": "Xinyu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chen",
"middle": [],
"last": "Jiajun",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Chinese named entity recognition is one of the most difficult and challenging tasks in NLP. In this paper, we present a Chinese named entity recognition system using a multi-phase model. First, we segment the text with a character-level CRF model. Then we apply three word-level CRF models to label person names, location names and organization names in the segmentation results, respectively. Our system participated in the NER tests on the open and closed tracks of Microsoft Research Asia (MSRA). The evaluation results show that our system performs well on both the open and closed tracks.",
"pdf_parse": {
"paper_id": "W06-0140",
"_pdf_hash": "",
"abstract": [
{
"text": "Chinese named entity recognition is one of the most difficult and challenging tasks in NLP. In this paper, we present a Chinese named entity recognition system using a multi-phase model. First, we segment the text with a character-level CRF model. Then we apply three word-level CRF models to label person names, location names and organization names in the segmentation results, respectively. Our system participated in the NER tests on the open and closed tracks of Microsoft Research Asia (MSRA). The evaluation results show that our system performs well on both the open and closed tracks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entity recognition (NER) is a fundamental component of many NLP applications, such as information extraction, text summarization, machine translation, and so forth. In recent years, much attention has been focused on the recognition of Chinese named entities. The problem is difficult and challenging: in addition to the difficulties found in the counterpart problem for English, it presents the following additional difficulties: (1) In a Chinese document, names do not have \"boundary tokens\" such as the capitalized initial letters of a person name in an English document. (2) There is no space between words in Chinese text, so we have to segment the text before NER is performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we report a Chinese named entity recognition system using a multi-phase model, which includes a basic segmentation phase and three named entity recognition phases. In our system, the basic segmentation component and the named entity recognition components are both implemented with conditional random fields (CRFs) (Lafferty et al., 2001) . Finally, we apply a rule-based method to recognize some simple and short location names and organization names in the text. We describe each of these phases in more detail below.",
"cite_spans": [
{
"start": 336,
"end": 359,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Chinese NER with multi-level models",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The input to the recognition algorithm is an unsegmented Chinese character sequence, and the output is the recognized entity names. The process of Chinese NER is illustrated in figure 1. First, we segment the text with a character-level CRF model. After basic segmentation, a small number of named entities in the text, such as \"\u5c71\u897f\u961f\", \"\u65b0\u534e\u793e\" and \"\u798f\u5efa\u7701\", are already segmented as single words. These simple single-word entities will be labeled with some rules in the last phase. However, a great number of named entities in the text, such as \"\u4e2d\u56fd\u7eff\u8272\u7167\u660e\u5de5\u7a0b\u529e\u516c\u5ba4\" and \"\u897f\u67cf\u5761\u7eaa\u5ff5\u9986\", are not yet segmented as single words. Then, different from (Andrew et al. 2003) , we apply three trained CRF models with carefully designed and selected features to label person names, location names and organization names in the segmentation results, respectively. In the last phase, we apply some rules to tag names not recognized by the CRF models, and to adjust some of the organization names recognized by the CRF models.",
"cite_spans": [
{
"start": 647,
"end": 667,
"text": "(Andrew et al. 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recognition Process",
"sec_num": "2.1"
},
{
"text": "We implemented the basic segmentation component with linear-chain CRFs. CRFs are undirected graphical models that encode a conditional probability distribution using a given set of features. In the special case in which the designated output nodes of the graphical model are linked by edges in a linear chain, CRFs make a first-order Markov independence assumption among output nodes, and thus correspond to finite state machines (FSMs). CRFs define the conditional probability of a state sequence given an input sequence as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word segmentation",
"sec_num": "2.2"
},
{
"text": "$P(s \\mid o) = \\frac{1}{Z_o} \\exp \\left( \\sum_{t=1}^{T} \\sum_{k=1}^{K} \\lambda_k f_k(s_{t-1}, s_t, o, t) \\right)$, where $f_k(s_{t-1}, s_t, o, t)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word segmentation",
"sec_num": "2.2"
},
{
"text": "is an arbitrary feature function over its arguments, and \u03bb_k is a learned weight for each feature function. Based on the CRF model, we cast the segmentation problem as a sequence tagging problem. Different from (Peng et al., 2004) , we represent the position of a hanzi (Chinese character) with four different tags: B for a hanzi that starts a word, I for a hanzi that continues a word, F for a hanzi that ends a word, and S for a hanzi that occurs as a single-character word. Basic segmentation is a process of labeling each hanzi with a tag given the features derived from its surrounding context. The features used in our experiment fall into two categories: character features and word features. The character features are instantiations of the following templates, similar to those described in (Ng and Jin, 2004) ",
"cite_spans": [
{
"start": 208,
"end": 227,
"text": "(Peng et al., 2004)",
"ref_id": "BIBREF4"
},
{
"start": 811,
"end": 829,
"text": "(Ng and Jin, 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word segmentation",
"sec_num": "2.2"
},
{
"text": "where C refers to a Chinese hanzi: (a) Cn (n = \u22122, \u22121, 0, 1, 2); (b) CnCn+1 (n = \u22122, \u22121, 0, 1); (c) C\u22121C1; (d) Pu(C0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word segmentation",
"sec_num": "2.2"
},
{
"text": "In addition to the character features, we introduced another type of word-context feature, which proved very useful in our experiments. This feature captures the relationship between a hanzi and the word that contains it. For a two-hanzi word, for example, the first hanzi \"\u8fde\" within the word \"\u8fde\u7eed\" will have the feature WC0=TWO_F set to 1, and the second hanzi \"\u7eed\" within the same word will have the feature WC0=TWO_L set to 1. For a three-hanzi word, for example, the first hanzi \"\u68b3\" within the word \"\u68b3\u5986\u955c\" will have the feature WC0=TRI_F set to 1, the second hanzi \"\u5986\" within the same word will have the feature WC0=TRI_M set to 1, and the last hanzi \"\u955c\" will have the feature WC0=TRI_L set to 1. Similarly, the feature can be extended to four-hanzi words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word segmentation",
"sec_num": "2.2"
},
{
"text": "After basic segmentation, we use three word-level CRF models to label person names, location names and organization names, respectively. A key factor in applying a CRF model to named entity recognition is the selection of a proper feature set. Most entity names do not share any common structural characteristics except for containing certain feature words, such as \"\u516c\u53f8\", \"\u5b66\u6821\", \"\u4e61\" and \"\u9547\"; in addition, most person names include a common surname, e.g. \"\u5f20\" or \"\u738b\". But as a proper noun, an entity name occurs in a specific context. In this section, we only present our approach to organization name recognition. The context information of an organization name mainly includes its boundary words and some title words (e.g. \u5c40\u957f\u3001\u8463\u4e8b\u957f). By analyzing a large amount of entity name corpora, we found that the indicative intensity of different boundary words varies greatly, so we divided the left and right boundary words into two classes according to their indicative intensity, and accordingly constructed four boundary-word lexicons. To select and classify the boundary words, we make use of mutual information I(x, y). If there is a genuine association between x and y, then I(x, y) >> 0. If there is no interesting relationship between x and y, then I(x, y) \u2248 0. If x and y are in complementary distribution, then I(x, y) << 0. Using mutual information, we compute the association between a boundary word and the type of organization name, and then select and classify the boundary words. Some example boundary words for organization names are listed in table 1. Based on these considerations, we constructed a set of atomic feature patterns, listed in table 2. Additionally, we defined a set of conjunctive feature patterns, which can form effective feature conjunctions to express complicated contextual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named entity tagging with CRFs",
"sec_num": "2.3"
},
{
"text": "There exist some single-word named entities that are not tagged by the CRF models. We recognize these single-word named entities with some rules. We first construct two dictionaries of known location names and organization names, and two lists of feature words for location names and organization names. In the closed track, we collect known location names and organization names only from the training corpus. The recognition process is as follows. For each word in the text, we first check whether it is a known location or organization name according to the dictionaries. If it is not a known name, we further check whether it is a known word. If it is not a known word either, we check whether the word ends with a feature word of location or organization names; if it does, we label it as a location or organization name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing with rules",
"sec_num": "2.4"
},
{
"text": "In addition, we introduce some rules to adjust the organization names recognized by the CRF model, based on the labeling specification of the MSRA corpus. For example, the string \"\u9633\u57ce\u53bf\u674e\u572a\u5854\u4e61\u536b\u751f\u9662\" is recognized as an organization name, but according to the labeling specification it should be divided into two names: a location name (\"\u9633\u57ce\u53bf\") and an organization name (\"\u674e\u572a\u5854\u4e61\u536b\u751f\u9662\"), so we add some rules to adjust it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Processing with rules",
"sec_num": "2.4"
},
{
"text": "We participated in three GB tracks in the third international Chinese language processing bakeoff: NER msra-closed, NER msra-open and WS msra-open. In the closed track, we constructed all dictionaries using only the words appearing in the training corpus; instead of the feature-character lists for location names and organization names used in the open tracks, we collected the feature characters from the training data, by the following approach. First, we extract all suffix strings of the location names and organization names in the training data and count the occurrences of these suffix strings. Second, we check every suffix string and discard those that are not known words. Finally, from the remaining suffix words, we select as feature characters the frequently used suffix words whose counts are greater than a threshold; we set different thresholds for single-character and multi-character feature words. A similar approach was taken to collect common Chinese surnames in the closed track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "3"
},
{
"text": "When preparing the training data for the segmentation model, we adopted different tagging methods for organization names in the closed and open tracks. In the closed track, we regard every organization name, such as \"\u5185\u8499\u53e4\u4eba\u6c11\u51fa\u7248\u793e\", as a single word. In the open track, by contrast, we segment a long organization name into several words; for example, the organization name \"\u5185\u8499\u53e4\u4eba\u6c11\u51fa\u7248\u793e\" would be divided into three words: \"\u5185\u8499\u53e4\", \"\u4eba\u6c11\" and \"\u51fa\u7248\u793e\". The different tagging methods at the segmentation phase have different effects on organization name recognition. The size of the training data used in the open tracks is the same as in the closed tracks; we did not employ any additional training data in the open tracks. Table 3 shows the performance of our systems for NER in the bakeoff. For the separate word segmentation task (WS), the above NER task is performed first, and then several additional processing steps are applied to the result of named entity recognition. As is well known, disambiguation is one of the key issues in Chinese word segmentation. In this task, some ambiguities were resolved through a rule set that was automatically constructed based on error-driven learning; the pre-constructed rule set stores many pseudo-ambiguous strings and gives their correct segmentations. After analyzing the results of our CRF-based NER, we noticed that it achieves high recall on out-of-vocabulary (OOV) words, but at the same time some characters and words were wrongly combined into new words, which lowered the OOV precision and the in-vocabulary (IV) recall. To address this, we adopted an unconditional rule: if a word other than a recognized named entity was detected as a new word and its length was more than 6 Chinese characters, it was segmented into several in-vocabulary words based on a combination of the FMM and BMM methods. Table 4 shows the result of our systems for word segmentation in the bakeoff.",
"cite_spans": [],
"ref_spans": [
{
"start": 700,
"end": 707,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 1843,
"end": 1850,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "3"
},
{
"text": "We have presented our Chinese named entity recognition system with a multi-phase model and its results for the msra_open and msra_closed tracks. Our open and closed GB track experiments show that its performance is competitive. In future work, we will try to incorporate more useful feature functions into the existing segmentation and named entity recognition models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Chinese Word Segmentation Using Minimal Linguistic Knowledge",
"authors": [
{
"first": "Aitao",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Second SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aitao Chen. 2003. Chinese Word Segmentation Using Minimal Linguistic Knowledge. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-Enhanced Lexicons",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh CoNLL conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew McCallum, Wei Li. 2003. Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-Enhanced Lexicons. In Proceedings of the Seventh CoNLL Conference, Edmonton.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ICML 01",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML 01.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Chinese Part-of-Speech Tagging: One-at-a-Time or All-at-Once? Word-Based or Character-Based?",
"authors": [
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Jin Kiat",
"middle": [],
"last": "Low",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng and Jin Kiat Low. 2004. Chinese Part-of-Speech Tagging: One-at-a-Time or All-at-Once? Word-Based or Character-Based? In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Spain.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Chinese Segmentation and New Word Detection using Conditional Random Fields",
"authors": [
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Fangfang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Twentieth International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese Segmentation and New Word Detection using Conditional Random Fields. In Proceedings of the Twentieth International Conference on Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "The classified boundary words for ORG names"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Atomic pattern</td><td>Meaning of pattern</td></tr><tr><td>CurWord</td><td>Current word</td></tr><tr><td>LocationName</td><td>Check if current word is a location</td></tr><tr><td/><td>name</td></tr><tr><td>PersonName</td><td>Check if current word is a person</td></tr><tr><td/><td>name</td></tr><tr><td>KnownORG</td><td>Check if current word is a known</td></tr><tr><td/><td>organization name</td></tr><tr><td>ORGFeature</td><td>Check if current word is a feature</td></tr><tr><td/><td>word of ORG name</td></tr><tr><td>ScanFeatureWord_8</td><td>Check if there exists a feature word</td></tr><tr><td/><td>among eight words behind the</td></tr><tr><td/><td>current word</td></tr><tr><td>LeftBoundary1_-2</td><td>Check if there exists a first-class or</td></tr><tr><td>LeftBoundary2_-2</td><td>second-class left boundary word</td></tr><tr><td/><td>among two words before the</td></tr><tr><td/><td>current word</td></tr><tr><td>RightBoundary1_+2</td><td>Check if there exists a first-class or</td></tr><tr><td>RightBoundary2_+2</td><td>second-class right boundary word</td></tr><tr><td/><td>among two words behind the</td></tr><tr><td/><td>current word</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Atomic feature patterns for ORG names"
},
"TABREF4": {
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">: Named entity recognition outcome</td></tr><tr><td>Track</td><td>P</td><td>R</td><td>F Per-F Loc-F Org-F</td></tr><tr><td>NER msra closed</td><td colspan=\"3\">88.94 84.20 86.51 90.09 85.45 83.10</td></tr><tr><td>NER msra open</td><td colspan=\"3\">90.76 89.22 89.99 92.61 90.99 83.97</td></tr></table>",
"num": null,
"type_str": "table",
"text": ""
},
"TABREF5": {
"html": null,
"content": "<table><tr><td>Track</td><td>P</td><td>R</td><td>F</td><td colspan=\"2\">OOV-R IV-R</td></tr><tr><td>WS msra open</td><td colspan=\"4\">0.975 0.976 0.975 0.811</td><td>0.981</td></tr></table>",
"num": null,
"type_str": "table",
"text": "Word segmentation outcome"
}
}
}
}