{
"paper_id": "S01-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:27.953534Z"
},
"title": "KUNLP system using Classification Information Model at SENSEVAL-2",
"authors": [
{
"first": "Hee-Cheol",
"middle": [],
"last": "Seo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rae-Chang Rim Dept. of Computer Science and Engineering",
"location": {
"addrLine": "Sang-Zoo Lee"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The classification information model or CIM classifies instances by considering the discrimination ability of their features, which was proven to be useful for word sense disambiguation at SENSEVAL-1. But the CIM has a problem of information loss. KUNLP system at SENSEVAL-2 uses a modified version of the CIM for word sense disambiguation. We used three types of features for word sense disambiguation: local, topical, and bigram context. Local and topical context are similar to Chodorow's context and refer to only unigram information. The window of a bigram context is similar to that of a local context but a bigram context refers to only bigram information. We participated in the English lexical sample task and the Korean lexical sample task, where our systems ranked high.",
"pdf_parse": {
"paper_id": "S01-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "The classification information model or CIM classifies instances by considering the discrimination ability of their features, which was proven to be useful for word sense disambiguation at SENSEVAL-1. But the CIM has a problem of information loss. KUNLP system at SENSEVAL-2 uses a modified version of the CIM for word sense disambiguation. We used three types of features for word sense disambiguation: local, topical, and bigram context. Local and topical context are similar to Chodorow's context and refer to only unigram information. The window of a bigram context is similar to that of a local context but a bigram context refers to only bigram information. We participated in the English lexical sample task and the Korean lexical sample task, where our systems ranked high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The classification information model (Ho, 1997) is the model that classifies instances by considering the discrimination ability of their features. In the CIM, a feature with high discrimination ability contributes to the classification more than one with low discrimination ability. Hence, we can omit the feature selection procedure.",
"cite_spans": [
{
"start": 37,
"end": 47,
"text": "(Ho, 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The CIM has a kind of information loss problem due to the assumption that a feature contributes to only one class. We devised a modified version of the CIM where a feature can contribute to all classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\\Vord sense disambiguation task can be treated as a kind of classification process (Ho, 2000) . When a classification technique is applied to word sense disambiguation, an instance corresponds to a context containing a polysemous word and its class to the proper sense of the word, and one of its features to a piece of context information. As a classification problem, word sense disambiguation task can be solved by the CIM.",
"cite_spans": [
{
"start": 83,
"end": 93,
"text": "(Ho, 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\\Ve used three types of features for word sense disambiguation: local, topical, and bigram context. Local and topical context are similar to Chodorow's context (Chodorow, 2000) and consist of only uni-135-090 3rd floor, Hanarn BD 157-18 Sarnsung-Dong Kangnarn-Gu, Seoul, Korea [email protected] gram information. A bigram context has a similar window to a local context but consists of only bigram information.",
"cite_spans": [
{
"start": 160,
"end": 176,
"text": "(Chodorow, 2000)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To disambiguate senses, we did two phases: corpus preprocessing and sense disambiguation. Figure 1 shows the flow chart of our system. At the corpus preprocessing phase, we tokenized a corpus and then tagged it with parts-of-speech using Brill's Tagger (Brill, 1994) . The tokenizer just separates symbols from a word. For example, a sentence \"I'm straight, white, no longer middle class, anti-IRA, have ... \" is tokenized to \"I 'm stright , white , no longer middle class , anti -IRA , have ... \". Unlike other symbols, an apostrophe is not separated from the following characters.",
"cite_spans": [
{
"start": 253,
"end": 266,
"text": "(Brill, 1994)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "KUNLP system",
"sec_num": "2"
},
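{
"text": "A minimal sketch of this tokenization rule, assuming a regex-based implementation (the function name and pattern are illustrative, not the authors' code): every symbol becomes its own token, except that an apostrophe stays attached to the letters that follow it.\n\nimport re\n\ndef tokenize(sentence: str) -> list[str]:\n    # An apostrophe plus trailing letters stays as one token ('m, 's, ...);\n    # runs of alphanumerics are words; any other non-space symbol is split off.\n    return re.findall(r\"'[A-Za-z]+|[A-Za-z0-9]+|[^\\sA-Za-z0-9]\", sentence)\n\n# tokenize(\"I'm straight, white, no longer middle class, anti-IRA, have\")\n# -> ['I', \"'m\", 'straight', ',', 'white', ',', 'no', 'longer',\n#     'middle', 'class', ',', 'anti', '-', 'IRA', ',', 'have']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus preprocessing",
"sec_num": "2.1"
},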
{
"text": "At the phrase filtering phase, we filtered senses using the satellite feature, which is marked with sat tag in training and test corpus given by the task organizer. For example, in a sentence This air of disengagement <head sats= \"carry_over. 067:0\"> carried</head> <sat id= \"carry_over. 067:0\"> over</sat> to his apparent attitude toward his things, carried over is a phrase and also a satellite feature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase filtering",
"sec_num": "2.2"
},
{
"text": "Phrase filtering is applied to sense disambiguation as in Table 1 Table 1: There are satellite features in the English lexical sample, but not in the Korean lexical sample. Hence, phrase filtering was applied only in the English lexical sample task.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 76,
"text": "Table 1 Table 1:",
"ref_id": null
}
],
"eq_spans": [],
"section": "Phrase filtering",
"sec_num": "2.2"
},
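{
"text": "The dispatch logic of Table 1 can be sketched as follows; filter_senses and sense_tagger are hypothetical stand-ins for the satellite-based filter and the CIM-based tagger described in this paper, not the authors' actual interfaces.\n\ndef disambiguate(instance, senses, filter_senses, sense_tagger):\n    # Keep only the senses compatible with the satellite feature.\n    filtered = filter_senses(instance, senses)\n    if len(filtered) == 1:\n        return filtered[0]                       # the filter alone decides\n    elif len(filtered) > 1:\n        return sense_tagger(instance, filtered)  # tag among the filtered senses\n    else:\n        return sense_tagger(instance, senses)    # no match: consider all senses",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase filtering",
"sec_num": "2.2"
},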
{
"text": "The CIM is a kind of classification model based on the entropy theory. Given an input instance, the CIM decides the proper class of the instance by considering individual decisions made by each feature of the instance. In the model, the proper class of an instance,X, is determined by Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "Class(X) ~f arg max Rel(classj, X) (1) class;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "where classJ is the j-th class and Rel(classj, X) is the relevance between the j-th class and the instance X. Here, if we assume that features are independent of each other, the relevance can be defined as in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "Equation 2. m Rel(classj, X)= L x;W;j i=l (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "where m is the size of the feature set, xi is the value of the i-th feature and W;J is the weight of the ith feature for the j-th class. In Equation 2, x; has a binary value (1 if the feature occurs within the window, 0 otherwise) and W;j is defined in terms of classification information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "The classification information of a feature is composed of two components. One is the discrimination score (DS), which represents the discrimination ability of classifying instances. The other is the most probable class (MPC), which represents the most closely related class to the feature. Wij is defined by using these two components as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "~ { DS; Wij ~ Q if classj = MPC; otherwise (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "In Equation 3, DS; and M PCi represent the DS and MPC of the i-th feature, respectively. In the CIM, DS and MPC are defined in terms of the conditional probability of a class given a feature, which is normalized by the corpus size. The normalized conditional probability is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "clef = ( l If) N(class) p C aSSj i N(class;) ';:-'n ( l If) N(class) uk=l p c aSSk i N(classk) p(f;iclassj)",
"eq_num": "(4)"
}
],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "In Equation 4, Pii is a normalized conditional probability, N(classJ) is the number of instances belonging to the j-th class in the training data, N (class) is the average number of instances for each class and n is the number of classes. Given the normalized conditional probability distribution, DSs and MPCs are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "DS; clef MPC; clef log 2 n ~ H(pi) n log2 n + ~ PJi log2 PJi j=l arg max PJi class; arg max p(filclassj) class i (5) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
{
"text": "In Equation 5, H(p;) is the entropy of the i-th feature over the normalized conditional probability distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},
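{
"text": "As a worked sketch of Equations 4-6 (illustrative code, assuming the conditional probabilities and class sizes are given; not the authors' implementation):\n\nimport math\n\ndef classification_information(p_class_given_f, class_sizes):\n    # Equation 4: scale each p(class_j | f_i) by N_avg(class) / N(class_j),\n    # then renormalize so the p_ji sum to one.\n    n = len(class_sizes)\n    n_avg = sum(class_sizes) / n\n    scaled = [p * n_avg / nc for p, nc in zip(p_class_given_f, class_sizes)]\n    total = sum(scaled)\n    p_ji = [s / total for s in scaled]\n    # Equation 5: DS_i = log2(n) - H(p_i).\n    entropy = -sum(p * math.log2(p) for p in p_ji if p > 0.0)\n    ds = math.log2(n) - entropy\n    # Equation 6: MPC_i = argmax_j p_ji.\n    mpc = max(range(n), key=lambda j: p_ji[j])\n    return p_ji, ds, mpc\n\n# With equal class sizes, the f_1 row of Table 2 gives DS = 1.1187\n# and MPC = class_1 (index 0):\np_ji, ds, mpc = classification_information([0.7, 0.3, 0.0, 0.0], [100, 100, 100, 100])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Information Model (CIM)",
"sec_num": "2.3"
},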
{
"text": "The CIM has a problem caused by using MPCs, which is information loss. For example, let us consider the situation in Table 2 and Table 3. Table 2 shows the normalized conditional probability distribution, DSs and MPCs of features in an instance. if it consults the normalized conditional probability distribution. In the CIM, however, the feature can not distinguish them because their weights have the same value. Another aspect of the problem is that the CIM fails to capture the minor contribution of features, which is crucial in the case where the sum of the minor contribution of features to a non-MPC class dominates that of the major contribution of features to MPC classes. For example, at Table 2, all features, h, fz, and /3, have different MPCs: class1, class3 and class4, respectively. it is also obvious that they have some minor contribution to the class2 . The CIM will classify the instance as class1 because Rel(class 1,X) = 1.1187 is the maximum number among the Rel(classj, X). However, if we consider the minor contribution of all the features, we prefer class2 to class1 because class2 intuitively gains the total contribution more than class1.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 145,
"text": "Table 2 and Table 3. Table 2",
"ref_id": "TABREF0"
},
{
"start": 699,
"end": 707,
"text": "Table 2,",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Modifying CIM",
"sec_num": "2.4"
},
{
"text": "A solution to the problem may be not to use MPCs, but to use a measure of contribution of a feature to a class which is proportional to the discrimination score of the feaure and the normalized conditional probability of the class given the feature. The modified CIM can be defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying CIM",
"sec_num": "2.4"
},
{
"text": "m Rel(classj, X)= L x;W;j (7) i=l A def DS A ( 8 ) W;j = ; X PJi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying CIM",
"sec_num": "2.4"
},
{
"text": "As shown in Table 3 , the \u2022w12 is larger than tu13 (0.3356 > 0) and the instance is classified not as class1 but as class2 because Rei ( class2, X) = 149 1.0028 > Rel(class1 , X) = 0.7831, which is based on the modified CIM.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 3",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Modifying CIM",
"sec_num": "2.4"
},
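{
"text": "The numbers above can be reproduced with a few lines (a sketch of Equations 7 and 8 on the Table 2 instance; variable names are illustrative, not the authors' code):\n\n# DS_i and normalized conditional probabilities p_ji from Table 2.\nds = [1.1187, 1.0290, 0.6390]\np = [[0.7, 0.3, 0.0, 0.0],   # f_1\n     [0.0, 0.4, 0.6, 0.0],   # f_2\n     [0.0, 0.4, 0.1, 0.5]]   # f_3\nx = [1, 1, 1]                # all three features occur in the instance\n\n# Equation 8: W^_ij = DS_i * p_ji; Equation 7: Rel(class_j, X) = sum_i x_i * W^_ij.\nrel = [sum(x[i] * ds[i] * p[i][j] for i in range(3)) for j in range(4)]\n# rel == [0.7831, 1.0028, 0.6813, 0.3195] (rounded), so the argmax is class_2,\n# whereas the original CIM, which rewards only MPCs, picks class_1.\nbest = max(range(4), key=lambda j: rel[j])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modifying CIM",
"sec_num": "2.4"
},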
{
"text": "We used three types of features for word sense disambiguation: local, topical and bigram context. In the preliminary experiment, we have observed that, when the CIM considered all these three types of features, it mostly achieved the best result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Space",
"sec_num": "2.5"
},
{
"text": "In a local context, there can be features of the following templates for all words within its window:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local context",
"sec_num": "2.5.1"
},
{
"text": "\u2022 in the English lexical sample task -word_position : a word and its position -word_POS : a word and its part-of-speech -POS_position : the part-of-speech and position of a word",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local context",
"sec_num": "2.5.1"
},
{
"text": "In the Korean lexical sample task: morpheme_position (a morpheme and its position), morpheme_POS (a morpheme and its part-of-speech), and POS_position (the part-of-speech and position of a morpheme).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local context",
"sec_num": "2.5.1"
},
{
"text": "In the English lexical sample task, word is a surface form and can be either one of open-class words whose POS is one of the noun, verb, adjective, and adverb; or one of closed-class words whose POS is one of the determiner, preposition, pronoun, and punctuation. The window size of \u00b13 words in the English lexical sample task and the window size from -2 to +3 word in the Korean lexical sample task were empirically chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local context",
"sec_num": "2.5.1"
},
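{
"text": "A sketch of how the local-context templates could be instantiated for the English task with the \u00b13-word window (the tuple encoding of features is illustrative, not the authors' format):\n\ndef local_features(tokens, pos_tags, target, window=3):\n    # word_position, word_POS, and POS_position templates for every token\n    # within +/-window of the target word.\n    feats = []\n    lo, hi = max(0, target - window), min(len(tokens), target + window + 1)\n    for i in range(lo, hi):\n        if i == target:\n            continue\n        pos = i - target                  # signed position relative to the target\n        feats.append((\"word_position\", tokens[i], pos))\n        feats.append((\"word_POS\", tokens[i], pos_tags[i]))\n        feats.append((\"POS_position\", pos_tags[i], pos))\n    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local context",
"sec_num": "2.5.1"
},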
{
"text": "In the first phase of the experiments, we used just one complicated template, word_position_POS (in Korean morpheme_position_POS), which brought about data sparseness problem. So we split the template into three simpler templates. The window size of \u00b11 sentences in the English lexical sample task and the window size of all sentences in the Korean lexical sample task were empirically chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local context",
"sec_num": "2.5.1"
},
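{
"text": "For the topical context, which uses the open-class word (English) and open-class morpheme (Korean) templates, a corresponding sketch with the \u00b11-sentence window used for English; the open-class POS test is an assumption for illustration:\n\nOPEN_CLASS = {\"NN\", \"NNS\", \"VB\", \"VBD\", \"JJ\", \"RB\"}   # illustrative tag set\n\ndef topical_features(sentences, sent_idx, window=1):\n    # word template: every open-class word within +/-window sentences,\n    # where each sentence is a list of (word, POS) pairs.\n    feats = []\n    lo, hi = max(0, sent_idx - window), min(len(sentences), sent_idx + window + 1)\n    for s in range(lo, hi):\n        for word, pos in sentences[s]:\n            if pos in OPEN_CLASS:\n                feats.append((\"word\", word.lower()))\n    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical context",
"sec_num": "2.5.2"
},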
{
"text": "In a bigrarn context, there can be features of the following templates for all word-pairs within its window:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram context",
"sec_num": "2.5.3"
},
{
"text": "\u2022 in the English lexical sample task -(word;, wordj) word (i>j) the i-th word and j-th (word;, POSj) : the i-th word and j-th part-of-speech ( i > j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram context",
"sec_num": "2.5.3"
},
{
"text": "\u2022 in the Korean lexical sample task -(eojeol;, eojcolj) : the i-th eojeol and j-th eojeol (i>j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram context",
"sec_num": "2.5.3"
},
{
"text": "\"Cnlike local and topical contexts, bigram contexts are composed of only bigrarn information surrounding the polysemous word. The window size of \u00b12 words in the English lexical sample task and the window size from -2 to +3 word in the Korean lexical sample task were empirically chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram context",
"sec_num": "2.5.3"
},
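{
"text": "And a sketch of the bigram-context templates for the English task with the \u00b12-word window; pairs are ordered so that i > j, as in the templates above (again illustrative, not the authors' code):\n\nfrom itertools import combinations\n\ndef bigram_features(tokens, pos_tags, target, window=2):\n    # (word_i, word_j) and (word_i, POS_j) templates with i > j, over all\n    # pairs of positions within +/-window of the target word.\n    idx = range(max(0, target - window), min(len(tokens), target + window + 1))\n    feats = []\n    for j, i in combinations(idx, 2):     # combinations yields j < i\n        feats.append((\"word_word\", tokens[i], tokens[j]))\n        feats.append((\"word_POS\", tokens[i], pos_tags[j]))\n    return feats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bigram context",
"sec_num": "2.5.3"
},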
{
"text": "The following tables show the results of our systems at SENSEVAL-2 (Table 4) . For the Korean lexical sample task at SENSEVAL-2, only fine-grained sense distinction was made. ",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 76,
"text": "(Table 4)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Result",
"sec_num": "3"
},
{
"text": "We have described the modified CIM used for word sense disambiguation at SENSEVAL-2. In the experiments, three types of features; local, topical, and bigram context, are used. Our system ranked as the highest at the Korean lexical sample task and as the topmost group at the English lexical sample task among the supervised models at SENSEVAL-2. Consequently, the results back up the fact that the modified CIM and three types of features are useful for discriminating word senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "A Korean sentence is composed of one or more eojeols, which are separated by spaces, and an eojeol consists of one or more morphemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Some advances in rule-based part of speech tagging",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the Twelfth National Conference on Artificial Intelligence ( AAAI-94}",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill 1994. Some advances in rule-based part of speech tagging. In Proceedings of the Twelfth National Conference on Artificial Intel- ligence ( AAAI-94}.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Topical/Local Classifier for Word Sense Identification",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 2000,
"venue": "Computers and the Humanities",
"volume": "34",
"issue": "",
"pages": "115--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Chodorow, Claudia Leacock and George A. Miller 2000. A Topical/Local Classifier for Word Sense Identification. In Computers and the Hu- manities 34: 115-120.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Word Sense Disambiguation Based on The Information Theory",
"authors": [
{
"first": "Ro",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Dae-Ro",
"middle": [],
"last": "Baek",
"suffix": ""
},
{
"first": "Rae-Chang",
"middle": [],
"last": "Rim",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of Research on Computational Linguisitcs Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ro Lee, Dae-Ro Baek and Rae-Chang Rim 1997. Word Sense Disambiguation Based on The In- formation Theory. In Proceedings of Research on Computational Linguisitcs Conference.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Word Sense Disambiguation Using the Classification Information Model",
"authors": [
{
"first": "Ro",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Rae-Chang",
"middle": [],
"last": "Rim",
"suffix": ""
},
{
"first": "Jungyun",
"middle": [],
"last": "Seo",
"suffix": ""
}
],
"year": 2000,
"venue": "Computers and the Humanities",
"volume": "34",
"issue": "",
"pages": "141--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ro Lee, Rae-Chang Rim and JungYun Seo 2000. Word Sense Disambiguation Using the Classifica- tion Information Model. In Computers and the Humanities 34: 141-146.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Flow chart of KUNLP system 2.1 Corpus preprocessing"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "phrase filtering and sense disambiguation if the number of filtered senses = 1 then determine sense else if the number of filtered senses > 1 then execute sense-tagger with the filtered senses else if the number of filtered senses = 0 then execute sense-tagger with all senses"
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "A topical context includes features of the following templates for all open-class words within its window: \u2022 in the English lexical sample task -word : an open-class word. \u2022 in the Korean lexical sample task -morpheme : an open-class morpheme."
},
"TABREF0": {
"num": null,
"html": null,
"text": "shows the weights and the relevance values at the CIM using Wij and at the modified CIM using 'Wij, for the instance ofTable 2. The feature h co-occurred with class1 and class 2 and the MPC of h is class1 at Table 2. In the CIM, this feature",
"content": "<table/>",
"type_str": "table"
},
"TABREF1": {
"num": null,
"html": null,
"text": "A normalized conditional probability, DSs and MPCs of features of an instance",
"content": "<table><tr><td/><td colspan=\"4\">normalized conditional probability(pj;)</td><td/></tr><tr><td colspan=\"4\">feature class 1 class2 class3</td><td>class4</td><td>DS</td><td>MPC</td></tr><tr><td>h fz</td><td>0.7 0</td><td>0.3 0.4</td><td>0 0.6</td><td>0 0</td><td colspan=\"2\">1.1187 class1 1.0290 class3</td></tr><tr><td>h</td><td>0</td><td>0.4</td><td>0.1</td><td>0.5</td><td colspan=\"2\">0.6390 class4</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"num": null,
"html": null,
"text": "The weights and the relevance values at the CIM using w;J and at the modified CIM using",
"content": "<table><tr><td>w;j, for</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"text": "Results of KUNLP systems at SENSEVAL-2 I task I prec. I recall English Lexical Sample (fine g.) 0.629 0.629 English Lexical Sample (coarse g.) 0.697 0.697 Korean Lexical Sample (fine g.) 0.698 0.74",
"content": "<table/>",
"type_str": "table"
}
}
}
}