{
"paper_id": "S01-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:35:41.013048Z"
},
"title": "SENSEVAL-2 Japanese Dictionary Task",
"authors": [
{
"first": "Kiyoaki",
"middle": [],
"last": "Shirai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Japan Advanced Institute of Science and Technology",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports an overview of the SENSEVAL-2 Japanese dictionary task. It was a lexical sample task, and word senses are defined according to a Japanese dictionary, the Iwanami Kokugo Jiten. The Iwanami Kokugo Jiten and a training corpus were distributed to all participants. The number of target words was 100, 50 nouns and 50 verbs. One hundred instances of each target word were provided, making for a total of 10,000 instances for evaluation. Seven systems of three organizations participated in this task.",
"pdf_parse": {
"paper_id": "S01-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports an overview of the SENSEVAL-2 Japanese dictionary task. It was a lexical sample task, and word senses are defined according to a Japanese dictionary, the Iwanami Kokugo Jiten. The Iwanami Kokugo Jiten and a training corpus were distributed to all participants. The number of target words was 100, 50 nouns and 50 verbs. One hundred instances of each target word were provided, making for a total of 10,000 instances for evaluation. Seven systems of three organizations participated in this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In SENSEVAL-2, there are two Japanese tasks, a translation task and a dictionary task. This paper describes the details of the dictionary task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "First of all, let me introduce an overview of the Japanese dictionary task. This task is a lexical sample task. Word senses were defined according to the Iwanami Kokugo Jiten (Nishio et aL, 1994) , a Japanese dictionary published by Iwanami Shoten. It was distributed to all participants as a sense inventory. Training data, a corpus consisting of 3,000 newspaper articles and manually annotated with sense IDs, was also distributed to participants. For evaluation, we distributed newspaper articles with marked target words as test documents. Participants were required to assign one or more sense IDs to each target word, optionally with associated probabilities. The number of target words was 100, 50 nouns and 50 verbs. One hundred instances of each target word were provided, making for a total of 10,000 instances.",
"cite_spans": [
{
"start": 175,
"end": 195,
"text": "(Nishio et aL, 1994)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In what follows, Section 2 describes details of data used in the Japanese dictionary task. Section 3 describes the process to construct the gold standard data, including the analysis of inter-tagger agreement. Section 4 briefly introduces participating systems and their results. Finally, Section 5 concludes this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the Japanese dictionary task, three data were distributed to all participants: sense inventory, training data and evaluation data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "As described in Section 1, word senses are defined according to a Japanese dictionary, the Iwanami Kokugo Jiten. The number of headwords and word senses in the I wanami Kokugo Jiten is 60,321 and 85,870, respectively. Figure 1 shows an example of word sense descriptions in the Iwanami Kokugo Jiten, the sense set of the Japanese noun \"MURI.\" MURI As shown in Figure 1 , there are hierarchical structures in word sense descriptions. For example, word sense 1 subsumes 1-a and 1-b. The number of layers of hierarchy in the I wanami Kokugo Jiten is at most 3. Word sense distinctions in the lowest level are rather fine or subtle. Furthermore, a word sense description sometimes contains example sentences including a headword, indicated by italics in Figure 1 . The Iwanami Kokugo Jiten was provided to all participants. For each sense description, a corresponding sense ID and morphological information were supplied. All morphological information, which included word segmentation, part-of-speech (POS) tag, base form and reading, was manually post-edited.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 226,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 360,
"end": 368,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 750,
"end": 758,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Sense Inventory",
"sec_num": "2.1"
},
{
"text": "An annotated corpus was distributed as the training data. It was made up of 3,000 newspaper articles extracted from the 1994 Mainichi Shimbun, consisting of 888,000 words. The annotated information in the training corpus was as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "\u2022 Morphological information",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "The text was annotated with morphological information (word segmentation, POS tag, base form and reading) for all words. All morphological information was manually post-edited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "\u2022 UDC code Each article was assigned a code representing the text class. The classification code system was the third version (INFOSTA, 1994) of Universal Decimal Classification (UDC) code (Organization, 1993).",
"cite_spans": [
{
"start": 126,
"end": 141,
"text": "(INFOSTA, 1994)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "\u2022 Word sense IDs Only 148,558 words in the text were annotated for sense. Words assigned with sense IDs satisfied the following conditions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "1. Their FOSs were noun, verb or adjective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "2. The Iwanami Kokugo Jiten gave sense descriptions for them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "3. They were ambiguous, i.e. there are more than two word senses in the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "Word sense IDs were manually annotated. However, only one annotator assigned a sen~e ID for each word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data",
"sec_num": "2.2"
},
{
"text": "The evaluation data was made up of 2,130 newspaper articles extracted from the 1994 Mainichi Shimbun. The articles used for the training and evaluation data were mutually exclusive. The annotated information in the evaluation data was as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.3"
},
{
"text": "\u2022 Morphological information The text was annotated with morphological information (word segmentation, POE tag, base form and reading) for all words Note that morphological information in thE training data was manually post-edited: but not in the evaluation data. So participants might ignore morphological information in the evaluation data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.3"
},
{
"text": "\u2022 UDC code As in the training data. each article was assigned a UDC code",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.3"
},
{
"text": "\u2022 Word sense IDs (gold standard data) Word sense IDs were annotated manually for the target words 1 . Note that word sense IDs in the evaluation and training data were given in different ways: (1) a sense ID was assigned for each word by at least two annotators in the evaluation data, while by only one annotator in the training data, (2) only 10,000 instances in the articles were annotated with sense IDs in the evaluation data, while all words were annotated which satisfied the conditions described in 2.2 in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "2.3"
},
{
"text": "Except for the gold standard data, the data described in Section 2 have been developed by Real World Computing Partnership (Hasida et al., 1998; Shirai et al., 2001 ) and already released to public domain 2 . On the other hand, the gold standard data was newly developed for the SENSEVAL-2. This section presents the process of preparing the gold standard data, and the analysis of inter-tagger agreement.",
"cite_spans": [
{
"start": 123,
"end": 144,
"text": "(Hasida et al., 1998;",
"ref_id": "BIBREF0"
},
{
"start": 145,
"end": 164,
"text": "Shirai et al., 2001",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Standard Data",
"sec_num": "3"
},
{
"text": "When we chose target words, we considered the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Target Words",
"sec_num": "3.1"
},
{
"text": "\u2022 POSs of target words were either nouns or verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Target Words",
"sec_num": "3.1"
},
{
"text": "\u2022 Words were chosen which occurred more than 50 times in the training data. \u2022 The relative \"difficulty\" in disambiguating the sense of words was considered. Difficulty of the word w was defined by the entropy of the word sense distribution E(w) in the training data. Obviously, the higher E(w) was, the more difficult the WSD for w was. We set up three word classes, Da (E(w) ~ 1), Db (0.5 ~ E(w) < 1) and De (E(w) < 0.5), and chose target words evenly from them. One hundred instances of each target word were selected from newspaper articles, making for a total of 10,000 instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling Target Words",
"sec_num": "3.1"
},
{
"text": "Six annotators assigned the correct word sense IDs for 10,000 instances. They were not experts, but had knowledge of linguistics or lexicography to some degree. The process of manual annotating was as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Annotation",
"sec_num": "3.2"
},
{
"text": "Step 1. Two annotators chose a sense ID for each instance separately in accordance with the following guidelines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Annotation",
"sec_num": "3.2"
},
{
"text": "\u2022 Only one sense ID was to be chosen for each instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Annotation",
"sec_num": "3.2"
},
{
"text": "\u2022 Sense IDs at any layers in hierarchical structures could be assignable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Annotation",
"sec_num": "3.2"
},
{
"text": "\u2022 The \"UNASSIGNABLE\" tag was to be chosen only when all sense IDs weren't absolutely applicable. Otherwise, choose one of sense IDs in the dictionary. Step 2. If the sense IDs selected by 2 annotators agreed, we considered it to be a correct sense ID for an instance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Manual Annotation",
"sec_num": "3.2"
},
{
"text": "Step 3. If they did not agree, the third annotator chose the correct sense ID between them. If the third annotator judged both of them to be wrong and chose another sense ID as correct, we considered that all 3 word sense IDs were correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "35",
"sec_num": null
},
{
"text": "According to Step 3., the number of words for which 3 annotators assigned different sense IDs from one another was a quite few, 28 (0.3%). Table 2 indicates the inter-tagger agreement of two annotators in Step 1. Agreement ratio for all 10,000 instances was 86.3%.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "35",
"sec_num": null
},
{
"text": "In the Japanese dictionary task, the following 7 systems of 3 organizations submitted answers. Notice that all systems used supervised learning techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for Participating Systems",
"sec_num": "4"
},
{
"text": "New York University (CRL1 \"\" CRL4) The learning schemes were simple Bayes and support vector machine (SVM), and two kinds of hybrid models of simple Bayes and SVM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Communications Research Laboratory and",
"sec_num": null
},
{
"text": "\u2022 Tokyo Institute of Technology (Titech1, Titech2) Decision lists were learned from the training data. The features used in the decision lists were content words and POS tags in a window, and content words in example sentences contained in word sense descriptions in the Iwanami Kokugo Jiten.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 Communications Research Laboratory and",
"sec_num": null
},
{
"text": "\u2022 Nara Institute of Science and Technology (Naist) The learning algorithm was SVM. The feature space was reconstructed using Principle Component Analysis(PCA) and Independent Component Analysis(ICA). The results of all systems are shown in Figure 2. \"Baseline\" indicates the system which always selects the most frequent word sense ID, while \"Agreement\" indicates the agreement ratio between two annotators. All systems outperformed the baseline, and there was no remarkable difference between their scores (differences were 3 % at most). Figure 3 indicates the mixed-grained scores for nouns and verbs. Comparing baseline system scores, the score for verbs was greater than that for nouns, even though the average entropy of verbs was higher than that of nouns (Table 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 249,
"text": "Figure 2.",
"ref_id": null
},
{
"start": 539,
"end": 547,
"text": "Figure 3",
"ref_id": null
},
{
"start": 762,
"end": 771,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "\u2022 Communications Research Laboratory and",
"sec_num": null
},
{
"text": "The situation was the same in CRL systems, bt not in Titech and Naist. The reason why them erage entropy was not coincident with the scor of the baseline was that the entropy of som verbs was so great that it raised the average er tropy disproportionately. Actually, the entrop of 7 verbs was greater than the maximum er tropy of nouns. Figure 4 indicates the mixed-grained score for each word class. For word class De, ther was hardly any difference among scores of a: systems, including Baseline system and Agree ment. On the other hand, appreciable differenc was found for Da and Db.",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 345,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "\u2022 Communications Research Laboratory and",
"sec_num": null
},
{
"text": "This paper reports an overview of th SENSEVAL-2 Japanese dictionary task. Th data used in this task are available on th SENSEVAL-2 web site. I hope this valuabl, data helps all researchers to improve their WSI systems. Acknowledgment I wish to express my gratitude to Mainich Newspapers for providing articles. I would als< like to thank Prof. Takenobu Tokunaga (Toky< Institute of Technology) and Prof. Sadao Kuro hashi (University of Tokyo) for valuable advisi about task organization, the annotators for con\u2022 structing gold standard data, and all partici\u2022 pants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "They were hidden from participants at the contest. 2 Notice that the training data had been released to the public before the contest began. This violated the SENSEVAL-2 schedule constraint that answer submission should not occur more than 21 days after downloading the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The RWC texi databases",
"authors": [
{
"first": "Koiti",
"middle": [],
"last": "Hasida",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the the firs, International Conference on Language Re\u2022 sources and Evaluation",
"volume": "",
"issue": "",
"pages": "457--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koiti Hasida et al. 1998. The RWC texi databases. In Proceedings of the the firs, International Conference on Language Re\u2022 sources and Evaluation, pages 457~462.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Universal Decimal Classification",
"authors": [
{
"first": "",
"middle": [],
"last": "Infosta",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "INFOSTA. 1994. Universal Decimal Classifica- tion. Maruzen, Tokyo. (in Japanese).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Iwanami Kokugo Jiten Da\\ Go Han",
"authors": [
{
"first": "Minoru",
"middle": [],
"last": "Nishio",
"suffix": ""
},
{
"first": "Etsutaro",
"middle": [],
"last": "Iwabuchi",
"suffix": ""
},
{
"first": "Shizuc",
"middle": [],
"last": "Mizutani",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minoru Nishio, Etsutaro Iwabuchi, and Shizuc Mizutani. 1994. Iwanami Kokugo Jiten Da\\ Go Han. Iwanami Publisher. (in Japanese).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Guide tc the Universal Decimal Classification (UDC)",
"authors": [],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "British Standards Organization. 1993. Guide tc the Universal Decimal Classification (UDC). BSI, London.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Text database with word sense tags defined by I wanami Japanese dictionary. BIG notes of Information Processing Society of Japan",
"authors": [
{
"first": "Kiyoaki",
"middle": [],
"last": "Shirai",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "117--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiyoaki Shirai et al. 2001. Text database with word sense tags defined by I wanami Japanese dictionary. BIG notes of Information Pro- cessing Society of Japan, 2001(9):117-122. (in Japanese).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Sense set of \"MURI\"",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Mixed-grained scores for word classes",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"text": "Number of Target Words",
"html": null,
"num": null,
"content": "<table><tr><td/><td>Da</td><td>Db</td><td>De</td><td>all</td></tr><tr><td colspan=\"5\">10 nouns (9.1/1.19) (3.7 /0.723) (3.3/0.248) (4.6/0.627) 20 20 50</td></tr><tr><td>verbs</td><td colspan=\"4\">10 (18/1.77) (6.7 /0.728) (5.2/0.244) (8.3/0.743) 20 20 50</td></tr><tr><td>all</td><td colspan=\"4\">20 (14/1.48) (5.2/0. 725) ( 4.2/0.246) (6.5/0.685) ~ 40 40 100</td></tr><tr><td/><td colspan=\"3\">(average polysemy j average entropy)</td></tr></table>"
},
"TABREF2": {
"type_str": "table",
"text": "",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"text": "Inter-tagger Agreement",
"html": null,
"num": null,
"content": "<table><tr><td/><td>Da</td><td>Db</td><td>De</td><td>(all)</td></tr><tr><td colspan=\"5\">nouns 0.809 0.786 0.957 0.859</td></tr><tr><td colspan=\"5\">verbs 0.699 0.896 0.922 0.867</td></tr><tr><td>all</td><td colspan=\"4\">0.754 0.841 0.939 0.863</td></tr></table>"
}
}
}
}