{
"paper_id": "S07-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:22:48.323296Z"
},
"title": "SemEval-2007 Task 11: English Lexical Sample Task via English-Chinese Parallel Text",
"authors": [
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {
"addrLine": "3 Science Drive 2",
"postCode": "117543",
"country": "Singapore"
}
},
"email": ""
},
{
"first": "Yee",
"middle": [
"Seng"
],
"last": "Chan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {
"addrLine": "3 Science Drive 2",
"postCode": "117543",
"country": "Singapore"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We made use of parallel texts to gather training and test examples for the English lexical sample task. Two tracks were organized for our task. The first track used examples gathered from an LDC corpus, while the second track used examples gathered from a Web corpus. In this paper, we describe the process of gathering examples from the parallel corpora, the differences with similar tasks in previous SENSEVAL evaluations, and present the results of participating systems.",
"pdf_parse": {
"paper_id": "S07-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "We made use of parallel texts to gather training and test examples for the English lexical sample task. Two tracks were organized for our task. The first track used examples gathered from an LDC corpus, while the second track used examples gathered from a Web corpus. In this paper, we describe the process of gathering examples from the parallel corpora, the differences with similar tasks in previous SENSEVAL evaluations, and present the results of participating systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As part of the SemEval-2007 evaluation exercise, we organized an English lexical sample task for word sense disambiguation (WSD), where the senseannotated examples were semi-automatically gathered from word-aligned English-Chinese parallel texts. Two tracks were organized for this task, each gathering data from a different corpus. In this paper, we describe our motivation for organizing the task, our task framework, and the results of participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Past research has shown that supervised learning is one of the most successful approaches to WSD. However, this approach involves the collection of a large text corpus in which each ambiguous word has been annotated with the correct sense to serve as training data. Due to the expensive annotation process, only a handful of manually sense-tagged corpora are available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An effort to alleviate the training data bottleneck is the Open Mind Word Expert (OMWE) project (Chklovski and Mihalcea, 2002) to collect sense-tagged data from Internet users. Data gathered through the OMWE project were used in the SENSEVAL-3 English lexical sample task. In that task, WordNet-1.7.1 was used as the sense inventory for nouns and adjectives, while Wordsmyth 1 was used as the sense inventory for verbs.",
"cite_spans": [
{
"start": 81,
"end": 87,
"text": "(OMWE)",
"ref_id": null
},
{
"start": 96,
"end": 126,
"text": "(Chklovski and Mihalcea, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another source of potential training data is parallel texts. Our past research in (Ng et al., 2003; Chan and Ng, 2005) has shown that examples gathered from parallel texts are useful for WSD. Briefly, after manually assigning appropriate Chinese translations to each sense of an English word, the English side of a word-aligned parallel text can then serve as the training data, as they are considered to have been disambiguated and \"sense-tagged\" by the appropriate Chinese translations.",
"cite_spans": [
{
"start": 82,
"end": 99,
"text": "(Ng et al., 2003;",
"ref_id": "BIBREF4"
},
{
"start": 100,
"end": 118,
"text": "Chan and Ng, 2005)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using the above approach, we gathered the training and test examples for our task from parallel texts. Note that our examples are collected without manually annotating each individual ambiguous word occurrence, allowing us to gather our examples in a much shorter time. This contrasts with the setting of the English lexical sample task in previous SENSE-VAL evaluations. In the English lexical sample task of SENSEVAL-2, the sense tagged data were created through manual annotation by trained lexicographers. In SENSEVAL-3, the data were gathered through manual sense annotation by Internet users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the next section, we describe in more detail the process of gathering examples from parallel texts and the two different parallel corpora we used. We then give a brief description of each of the partici-pating systems. In Section 4, we present the results obtained by the participants, before concluding in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To gather examples from parallel corpora, we followed the approach in (Ng et al., 2003) . Briefly, after ensuring the corpora were sentence-aligned, we tokenized the English texts and performed word segmentation on the Chinese texts (Low et al., 2005) .",
"cite_spans": [
{
"start": 70,
"end": 87,
"text": "(Ng et al., 2003)",
"ref_id": "BIBREF4"
},
{
"start": 233,
"end": 251,
"text": "(Low et al., 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gathering Examples from Parallel Corpora",
"sec_num": "2"
},
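Below is a minimal sketch of this preprocessing step, assuming NLTK's tokenizer for the English side and the open-source jieba segmenter as a stand-in for the maximum entropy Chinese word segmenter of Low et al. (2005), which is not distributed as a public package. Both tools and the example sentence pair are illustrative assumptions, not part of the original pipeline.

```python
# Illustrative preprocessing sketch: NLTK tokenizes the English half,
# jieba (a stand-in for the Low et al. 2005 segmenter) segments the
# Chinese half. Requires: pip install nltk jieba, plus the NLTK
# "punkt" tokenizer models.
import jieba
from nltk.tokenize import word_tokenize

def preprocess_sentence_pair(english: str, chinese: str):
    """Tokenize the English half and word-segment the Chinese half
    of one sentence-aligned pair."""
    return word_tokenize(english), jieba.lcut(chinese)

en, zh = preprocess_sentence_pair(
    "The bank raised interest rates.", "银行提高了利率。")
print(en)  # ['The', 'bank', 'raised', 'interest', 'rates', '.']
print(zh)  # e.g. ['银行', '提高', '了', '利率', '。']
```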
{
"text": "We then made use of the GIZA++ software (Och and Ney, 2000) to perform word alignment on the parallel corpora. Then, we assigned some possible Chinese translations to each sense of an English word w. From the word alignment output of GIZA++, we selected those occurrences of w which were aligned to one of the Chinese translations chosen. The English side of these occurrences served as training data for w, as they were considered to have been disambiguated and \"sense-tagged\" by the appropriate Chinese translations. The English half of the parallel texts (each ambiguous English word and its 3sentence context) were used as the training and test material to set up our English lexical sample task. Note that in our approach, the sense distinction is decided by the different Chinese translations assigned to each sense of a word. This is thus similar to the multilingual lexical sample task in SENSEVAL-3 (Chklovski et al., 2004) , except that our training and test examples are collected without manually annotating each individual ambiguous word occurrence. The average time needed to assign Chinese translations for one noun and one adjective is 20 minutes and 25 minutes respectively. This is a relatively short time, compared to the effort otherwise needed to manually sense annotate individual word occurrences. Also, once the Chinese translations are assigned, more examples can be automatically gathered as more parallel texts become available.",
"cite_spans": [
{
"start": 40,
"end": 59,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF5"
},
{
"start": 908,
"end": 932,
"text": "(Chklovski et al., 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gathering Examples from Parallel Corpora",
"sec_num": "2"
},
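The harvesting step can be pictured with the short sketch below. The alignment format (a dict from English token positions to Chinese token positions) and the sense-to-translation table are simplified assumptions for illustration; GIZA++ itself emits alignments in its own file format.

```python
# Simplified sketch of harvesting "sense-tagged" examples from
# word-aligned text. SENSE_TRANSLATIONS and the alignment dict are
# illustrative assumptions, not the paper's actual data structures.

# Chinese translations assigned to two hypothetical senses of "interest".
SENSE_TRANSLATIONS = {
    "interest%1": {"利息"},          # money paid for the use of money
    "interest%2": {"兴趣", "关注"},  # a feeling of concern or curiosity
}

def harvest_examples(en_tokens, zh_tokens, alignment, target="interest"):
    """Yield (sense, context) pairs for each occurrence of `target`
    aligned to one of its chosen Chinese translations. `alignment`
    maps English token positions to Chinese token positions."""
    for i, tok in enumerate(en_tokens):
        if tok.lower() != target or i not in alignment:
            continue
        zh_word = zh_tokens[alignment[i]]
        for sense, translations in SENSE_TRANSLATIONS.items():
            if zh_word in translations:
                yield sense, en_tokens  # the paper keeps a 3-sentence context
                break

en = ["He", "showed", "great", "interest", "in", "music", "."]
zh = ["他", "对", "音乐", "表现", "出", "极大", "的", "兴趣", "。"]
for sense, context in harvest_examples(en, zh, {3: 7}):
    print(sense)  # interest%2 -- tagged via the translation 兴趣
```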
{
"text": "We note that frequently occurring words are usually highly polysemous and hard to disambiguate. To maximize the benefits of our work, we gathered training data from parallel texts for a set of most frequently occurring noun and adjective types in the Brown Corpus. Also, similar to the SENSEVAL-3 English lexical sample task, we used WordNet-1.7.1 as our sense inventory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gathering Examples from Parallel Corpora",
"sec_num": "2"
},
{
"text": "We have two tracks for this task, each track using a different corpus. The first corpus is the Chinese English News Magazine Parallel Text (LDC2005T10), which is an English-Chinese parallel corpus available from the Linguistic Data Consortium (LDC). From this parallel corpus, we gathered examples for 50 English words (25 nouns and 25 adjectives) using the method described above. From the gathered examples of each word, we randomly selected training and test examples, where the number of training examples is about twice the number of test examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LDC Corpus",
"sec_num": "2.1"
},
{
"text": "The rows LDC noun and LDC adjective in Table 1 give some statistics about the examples. For instance, each noun has an average of 197.6 training and 98.5 test examples and these examples represent an average of 5.2 senses per noun. 2 Participants taking part in this track need to have access to this LDC corpus in order to access the training and test material in this track.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 47,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "LDC Corpus",
"sec_num": "2.1"
},
{
"text": "Since not all interested participants may have access to the LDC corpus described in the previous subsection, the second track of this task makes use of English-Chinese documents gathered from the URL pairs given by the STRAND Bilingual Databases. 3 STRAND (Resnik and Smith, 2003) is a system that acquires document pairs in parallel translation automatically from the Web. Using this corpus, we gathered examples for 40 English words (20 nouns and 20 adjectives).",
"cite_spans": [
{
"start": 257,
"end": 281,
"text": "(Resnik and Smith, 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Web Corpus",
"sec_num": "2.2"
},
{
"text": "The rows Web noun and Web adjective in Table 1 show that we selected an average of 182.0 training and 91.3 test examples for each noun and these examples represent an average of 3.5 senses per noun. We note that the average number of senses per word for the Web corpus is slightly lower than that of the LDC corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Web Corpus",
"sec_num": "2.2"
},
{
"text": "To measure the annotation accuracy of examples gathered from the LDC corpus, we examined a random selection of 100 examples each from 5 nouns and 5 adjectives. From these 1,000 examples, we measured a sense annotation accuracy of 84.7%. These 10 words have an average of 8.6 senses per word in the WordNet-1.7.1 sense inventory. As described in (Ng et al., 2003) , when several senses of an English word are translated by the same Chinese word, we can collapse these senses to obtain a coarser-grained, lumped sense inventory. If we do this and measure the sense annotation accuracy with respect to a coarser-grained, lumped sense inventory, these 10 words will have an average of 6.5 senses per word and an annotation accuracy of 94.7%.",
"cite_spans": [
{
"start": 345,
"end": 362,
"text": "(Ng et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Accuracy",
"sec_num": "2.3"
},
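The sense-collapsing operation can be made concrete with the sketch below: senses that share at least one assigned Chinese translation are merged into a lumped sense via union-find. The word and its translation assignments are invented for illustration.

```python
# Minimal sketch of sense collapsing: senses of an English word that
# were assigned the same Chinese translation are merged into one
# lumped sense. The translation assignments here are hypothetical.
from collections import defaultdict

def collapse_senses(sense_to_translations):
    """Group senses that share at least one Chinese translation.
    Returns a list of lumped sense groups (sets of sense ids)."""
    parent = {s: s for s in sense_to_translations}

    def find(s):  # union-find with path compression
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    by_translation = defaultdict(list)
    for sense, translations in sense_to_translations.items():
        for zh in translations:
            by_translation[zh].append(sense)
    for senses in by_translation.values():
        for other in senses[1:]:
            parent[find(other)] = find(senses[0])

    groups = defaultdict(set)
    for sense in sense_to_translations:
        groups[find(sense)].add(sense)
    return list(groups.values())

print(collapse_senses({
    "issue%1": {"问题"}, "issue%2": {"问题"}, "issue%3": {"发行"},
}))
# e.g. [{'issue%1', 'issue%2'}, {'issue%3'}] -- 3 senses lumped into 2
```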
{
"text": "For the Web corpus, we similarly examined a random selection of 100 examples each from 5 nouns and 5 adjectives. These 10 words have an average of 6.5 senses per word in WordNet-1.7.1 and the 1,000 examples have an average sense annotation accuracy of 85.0%. After sense collapsing, annotation accuracy is 95.3% with an average of 4.8 senses per word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Accuracy",
"sec_num": "2.3"
},
{
"text": "In our previous work (Ng et al., 2003) , we conducted experiments on the nouns of SENSEVAL-2 English lexical sample task. We found that there were cases where the same document contributed both training and test examples and this inflated the WSD accuracy figures. To avoid this, during our preparation of the LDC and Web data, we made sure that a document contributed only either training or test examples, but not both.",
"cite_spans": [
{
"start": 21,
"end": 38,
"text": "(Ng et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Test Data from Different Documents",
"sec_num": "2.4"
},
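A document-level split of this kind might look like the following sketch, assuming each harvested example carries the id of its source document; the 2:1 train/test ratio mirrors the one used for the task data.

```python
# Minimal sketch of a document-level split: whole documents go to
# either the training or the test side, so no document contributes
# examples to both, matching the safeguard described above.
import random

def split_by_document(examples, test_fraction=1 / 3, seed=0):
    """`examples` is a list of (doc_id, example) pairs. Returns
    (train, test) lists with roughly twice as many training examples."""
    doc_ids = sorted({doc_id for doc_id, _ in examples})
    random.Random(seed).shuffle(doc_ids)
    n_test = max(1, round(len(doc_ids) * test_fraction))
    test_docs = set(doc_ids[:n_test])
    train = [ex for doc, ex in examples if doc not in test_docs]
    test = [ex for doc, ex in examples if doc in test_docs]
    return train, test
```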
{
"text": "Three teams participated in the Web corpus track of our task, with each team employing one system. There were no participants in the LDC corpus track, possibly due to the licensing issues involved. All participating systems employed supervised learning and only used the training examples provided by us.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participating Systems",
"sec_num": "3"
},
{
"text": "The CITYU-HIF team from the City University of Hong Kong trained a naive Bayes (NB) classifier for each target word to be disambiguated, using knowledge sources such as parts-of-speech (POS) of neighboring words and single words in the surrounding context. They also experimented with using different sets of features for each target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CITYU-HIF",
"sec_num": "3.1"
},
{
"text": "The system submitted by the HIT-IR-WSD team from Harbin Institute of Technology used Support Vector Machines (SVM) with a linear kernel function as the learning algorithm. Knowledge sources used included POS of surrounding words, local collocations, single words in the surrounding context, and syntactic relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "HIT-IR-WSD",
"sec_num": "3.2"
},
{
"text": "The system submitted by the PKU team from Peking University used a combination of SVM and maximum entropy classifiers. Knowledge sources used included POS of surrounding words, local collocations, and single words in the surrounding context. Feature selection was done by ignoring word features with certain associated POS tags and by selecting the subset of features based on their entropy values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PKU",
"sec_num": "3.3"
},
{
"text": "As all participating systems gave only one answer for each test example, recall equals precision and we will only report micro-average recall on the Web corpus track in this section. Table 2 gives the overall results obtained by each of the systems when evaluated on all the test examples of the Web corpus. We note that all the participants obtained scores which exceed the baseline heuristic of tagging all test examples with the most frequent sense (MFS) in the training data. This suggests that the Chinese translations assigned to senses of the ambiguous words are appropriate and provide sense distinctions which are clear enough for effective classifiers to be learned. In Table 3 and Table 4 , we show the scores obtained by each system on each of the 20 nouns and 20 adjectives. For comparison purposes, we also show the corresponding MFS score of each word. Paired t-test on the results of the top two systems show no significant difference between them.",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 190,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 680,
"end": 699,
"text": "Table 3 and Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
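For concreteness, here is a small sketch of the scoring: with exactly one answer per test example, micro-average recall reduces to simple accuracy, and the MFS baseline predicts the most frequent training sense for every test example. The sense labels are invented.

```python
# Sketch of the evaluation measures used here: micro-average recall
# (one answer per example, so recall == precision == accuracy) and
# the most-frequent-sense (MFS) baseline.
from collections import Counter

def micro_average_recall(gold, predicted):
    """Both arguments are lists of sense labels, one per test example."""
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

def mfs_baseline(train_senses, n_test):
    """Predict the most frequent training sense for every test example."""
    mfs = Counter(train_senses).most_common(1)[0][0]
    return [mfs] * n_test

train = ["s1", "s1", "s2"]
gold = ["s1", "s2", "s1"]
print(micro_average_recall(gold, mfs_baseline(train, len(gold))))
# 0.666... -- two of three test examples carry the most frequent sense
```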
{
"text": "We organized an English lexical sample task using examples gathered from parallel texts. Unlike the English lexical task of previous SENSEVAL evaluations where each example is manually annotated, we Table 4 : Micro-average scores of the most frequent sense baseline and the various participants on each adjective.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "only need to assign appropriate Chinese translations to each sense of a word. Once this is done, we automatically gather training and test examples from the parallel texts. All the participating systems of our task obtain results that are significantly better than the most frequent sense baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.wordsmyth.net",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Only senses present in the examples are counted. 3 http://www.umiacs.umd.edu/\u223cresnik/strand",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Yee Seng Chan is supported by a Singapore Millennium Foundation Scholarship (ref no. SMF-2004-1076.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Scaling up word sense disambiguation via parallel texts",
"authors": [
{
"first": "Yee",
"middle": [],
"last": "Seng Chan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of AAAI05",
"volume": "",
"issue": "",
"pages": "1037--1042",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Seng Chan and Hwee Tou Ng. 2005. Scaling up word sense disambiguation via parallel texts. In Proceedings of AAAI05, pages 1037-1042, Pittsburgh, Pennsylvania, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Building a sense tagged corpus with Open Mind Word Expert",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL02 Workshop on Word Sense Disambiguation: Recent Successes and Future Directions",
"volume": "",
"issue": "",
"pages": "116--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Chklovski and Rada Mihalcea. 2002. Building a sense tagged corpus with Open Mind Word Expert. In Proceedings of ACL02 Workshop on Word Sense Disambiguation: Recent Successes and Future Direc- tions, pages 116-122, Philadelphia, USA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The SENSEVAL-3 multilingual English-Hindi lexical sample task",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Amruta",
"middle": [],
"last": "Purandare",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of SENSEVAL-3",
"volume": "",
"issue": "",
"pages": "5--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Chklovski, Rada Mihalcea, Ted Pedersen, and Amruta Purandare. 2004. The SENSEVAL-3 multi- lingual English-Hindi lexical sample task. In Proceed- ings of SENSEVAL-3, pages 5-8, Barcelona, Spain.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A maximum entropy approach to Chinese word segmentation",
"authors": [
{
"first": "Jin",
"middle": [
"Kiat"
],
"last": "Low",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Wenyuan",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "161--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin Kiat Low, Hwee Tou Ng, and Wenyuan Guo. 2005. A maximum entropy approach to Chinese word segmen- tation. In Proceedings of the Fourth SIGHAN Work- shop on Chinese Language Processing, pages 161- 164, Jeju Island, Korea.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Exploiting parallel texts for word sense disambiguation: An empirical study",
"authors": [
{
"first": "Bin",
"middle": [],
"last": "Hwee Tou Ng",
"suffix": ""
},
{
"first": "Yee Seng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL03",
"volume": "",
"issue": "",
"pages": "455--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hwee Tou Ng, Bin Wang, and Yee Seng Chan. 2003. Ex- ploiting parallel texts for word sense disambiguation: An empirical study. In Proceedings of ACL03, pages 455-462, Sapporo, Japan.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceeedings of ACL00",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2000. Improved sta- tistical alignment models. In Proceeedings of ACL00, pages 440-447, Hong Kong.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The web as a parallel corpus",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "3",
"pages": "349--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik and Noah A. Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349-380.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"content": "<table><tr><td>Noun</td><td colspan=\"4\">MFS CITYU-HIF HIT-IR-WSD PKU</td></tr><tr><td>age</td><td>0.486</td><td>0.643</td><td>0.743</td><td>0.700</td></tr><tr><td>area</td><td>0.480</td><td>0.693</td><td>0.773</td><td>0.773</td></tr><tr><td>body</td><td>0.872</td><td>0.897</td><td>0.910</td><td>0.923</td></tr><tr><td>change</td><td>0.411</td><td>0.400</td><td>0.578</td><td>0.611</td></tr><tr><td>director</td><td>0.580</td><td>0.890</td><td>0.960</td><td>0.960</td></tr><tr><td>experience</td><td>0.830</td><td>0.830</td><td>0.880</td><td>0.840</td></tr><tr><td>future</td><td>0.889</td><td>0.889</td><td>0.990</td><td>0.990</td></tr><tr><td>interest</td><td>0.308</td><td>0.165</td><td>0.813</td><td>0.780</td></tr><tr><td>issue</td><td>0.651</td><td>0.711</td><td>0.892</td><td>0.855</td></tr><tr><td>life</td><td>0.820</td><td>0.830</td><td>0.860</td><td>0.740</td></tr><tr><td>material</td><td>0.719</td><td>0.719</td><td>0.781</td><td>0.641</td></tr><tr><td>need</td><td>0.907</td><td>0.907</td><td>0.918</td><td>0.918</td></tr><tr><td colspan=\"2\">performance 0.410</td><td>0.570</td><td>0.690</td><td>0.700</td></tr><tr><td>program</td><td>0.590</td><td>0.590</td><td>0.730</td><td>0.690</td></tr><tr><td>report</td><td>0.870</td><td>0.840</td><td>0.880</td><td>0.870</td></tr><tr><td>system</td><td>0.510</td><td>0.700</td><td>0.610</td><td>0.730</td></tr><tr><td>time</td><td>0.455</td><td>0.673</td><td>0.733</td><td>0.693</td></tr><tr><td>today</td><td>0.800</td><td>0.750</td><td>0.800</td><td>0.780</td></tr><tr><td>water</td><td>0.882</td><td>0.921</td><td>0.868</td><td>0.895</td></tr><tr><td>work</td><td>0.644</td><td>0.743</td><td>0.842</td><td>0.891</td></tr><tr><td>Micro-avg</td><td>0.656</td><td>0.719</td><td>0.813</td><td>0.802</td></tr></table>",
"text": "Overall micro-average scores of the participants and the most frequent sense (MFS) baseline.",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table/>",
"text": "Micro-average scores of the most frequent sense baseline and the various participants on each noun.",
"html": null,
"type_str": "table",
"num": null
}
}
}
}