{
"paper_id": "S10-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:27:53.393063Z"
},
"title": "SemEval-2010 Task 14: Word Sense Induction & Disambiguation",
"authors": [
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of York",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Ioannis",
"middle": [
"P"
],
"last": "Klapaftis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of York",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Dmitriy",
"middle": [],
"last": "Dligach",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {
"country": "USA"
}
},
"email": ""
},
{
"first": "Sameer",
"middle": [
"S"
],
"last": "Pradhan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BBN Technologies Cambridge",
"location": {
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the description and evaluation framework of SemEval-2010 Word Sense Induction & Disambiguation task, as well as the evaluation results of 26 participating systems. In this task, participants were required to induce the senses of 100 target words using a training set, and then disambiguate unseen instances of the same words using the induced senses. Systems' answers were evaluated in: (1) an unsupervised manner by using two clustering evaluation measures, and (2) a supervised manner in a WSD task.",
"pdf_parse": {
"paper_id": "S10-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the description and evaluation framework of SemEval-2010 Word Sense Induction & Disambiguation task, as well as the evaluation results of 26 participating systems. In this task, participants were required to induce the senses of 100 target words using a training set, and then disambiguate unseen instances of the same words using the induced senses. Systems' answers were evaluated in: (1) an unsupervised manner by using two clustering evaluation measures, and (2) a supervised manner in a WSD task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word senses are more beneficial than simple word forms for a variety of tasks including Information Retrieval, Machine Translation and others (Pantel and Lin, 2002) . However, word senses are usually represented as a fixed-list of definitions of a manually constructed lexical database. Several deficiencies are caused by this representation, e.g. lexical databases miss main domain-specific senses (Pantel and Lin, 2002) , they often contain general definitions and suffer from the lack of explicit semantic or contextual links between concepts (Agirre et al., 2001) . More importantly, the definitions of hand-crafted lexical databases often do not reflect the exact meaning of a target word in a given context (V\u00e9ronis, 2004) .",
"cite_spans": [
{
"start": 142,
"end": 164,
"text": "(Pantel and Lin, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 399,
"end": 421,
"text": "(Pantel and Lin, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 546,
"end": 567,
"text": "(Agirre et al., 2001)",
"ref_id": "BIBREF1"
},
{
"start": 713,
"end": 728,
"text": "(V\u00e9ronis, 2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised Word Sense Induction (WSI) aims to overcome these limitations of handconstructed lexicons by learning the senses of a target word directly from text without relying on any hand-crafted resources. The primary aim of SemEval-2010 WSI task is to allow comparison of unsupervised word sense induction and disambiguation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The target word dataset consists of 100 words, 50 nouns and 50 verbs. For each target word, participants were provided with a training set in order to learn the senses of that word. In the next step, participating systems were asked to disambiguate unseen instances of the same words using their learned senses. The answers of the systems were then sent to organisers for evaluation. Figure 1 provides an overview of the task. As can be observed, the task consisted of three separate phases.",
"cite_spans": [],
"ref_spans": [
{
"start": 384,
"end": 392,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the first phase, training phase, participating systems were provided with a training dataset that consisted of a set of target word (noun/verb) instances (sentences/paragraphs). Participants were then asked to use this training dataset to induce the senses of the target word. No other resources were allowed with the exception of NLP components for morphology and syntax. In the second phase, testing phase, participating systems were provided with a testing dataset that consisted of a set of target word (noun/verb) instances (sentences/paragraphs). Participants were then asked to tag (disambiguate) each testing instance with the senses induced during the training phase. In the third and final phase, the tagged test instances were received by the organisers in order to evaluate the answers of the systems in a supervised and an unsupervised framework. Table 1 shows the total number of target word instances in the training and testing set, as well as the average number of senses in the gold standard.",
"cite_spans": [],
"ref_spans": [
{
"start": 863,
"end": 870,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Task description",
"sec_num": "2"
},
{
"text": "The main difference of the SemEval-2010 as compared to the SemEval-2007 sense induction task is that the training and testing data are treated separately, i.e the testing data are only used for sense tagging, while the training data are only used The evaluation framework of SemEval-2010 WSI task considered two types of evaluation. In the first one, unsupervised evaluation, systems' answers were evaluated according to: (1) V-Measure (Rosenberg and Hirschberg, 2007) , and (2) paired F-Score (Artiles et al., 2009) . Neither of these measures were used in the SemEval-2007 WSI task. Manandhar & Klapaftis (2009) provide more details on the choice of this evaluation setting and its differences with the previous evaluation. The second type of evaluation, supervised evaluation, follows the supervised evaluation of the SemEval-2007 WSI task (Agirre and Soroa, 2007) . In this evaluation, induced senses are mapped to gold standard senses using a mapping corpus, and systems are then evaluated in a standard WSD task.",
"cite_spans": [
{
"start": 436,
"end": 468,
"text": "(Rosenberg and Hirschberg, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 494,
"end": 516,
"text": "(Artiles et al., 2009)",
"ref_id": "BIBREF2"
},
{
"start": 585,
"end": 613,
"text": "Manandhar & Klapaftis (2009)",
"ref_id": "BIBREF5"
},
{
"start": 843,
"end": 867,
"text": "(Agirre and Soroa, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task description",
"sec_num": "2"
},
{
"text": "The target word dataset consisted of 100 words, i.e. 50 nouns and 50 verbs. The training dataset for each target noun or verb was created by following a web-based semi-automatic method, similar to the method for the construction of Topic Signatures (Agirre et al., 2001) . Specifically, for each WordNet (Fellbaum, 1998) The created queries were issued to Yahoo! search API 3 and for each query a maximum of 1000 pages were downloaded. For each page we extracted fragments of text that occurred in <p> </p> html tags and contained the target word stem. In the final stage, each extracted fragment of text was POS-tagged using the Genia tagger (Tsuruoka and Tsujii, 2005) and was only retained, if the POS of the target word in the extracted text matched the POS of the target word in our dataset.",
"cite_spans": [
{
"start": 249,
"end": 270,
"text": "(Agirre et al., 2001)",
"ref_id": "BIBREF1"
},
{
"start": 304,
"end": 320,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF3"
},
{
"start": 643,
"end": 670,
"text": "(Tsuruoka and Tsujii, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training dataset",
"sec_num": "2.1"
},
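{
"text": "To make the collection pipeline concrete, the following sketch (ours, not part of the original task description) extracts <p> </p> fragments that contain the target word stem and keeps only those whose target POS matches the dataset POS. The task used the Genia tagger; NLTK's pos_tag is substituted here purely for illustration, and all function names are our own.

import re
import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' data

def extract_fragments(html, stem):
    # Pull <p>...</p> fragments that contain the target word stem.
    paragraphs = re.findall(r'<p>(.*?)</p>', html, flags=re.S | re.I)
    return [p for p in paragraphs if stem.lower() in p.lower()]

def target_pos_matches(fragment, stem, wanted):
    # Keep a fragment only if the target token's POS tag starts with
    # the wanted Penn-tag prefix ('N' for nouns, 'V' for verbs).
    for token, tag in nltk.pos_tag(nltk.word_tokenize(fragment)):
        if token.lower().startswith(stem.lower()):
            return tag.startswith(wanted)
    return False

html = '<p>The failure of the engine caused a delay.</p><p>No target here.</p>'
kept = [f for f in extract_fragments(html, 'failure')
        if target_pos_matches(f, 'failure', 'N')]
print(kept)  # ['The failure of the engine caused a delay.']
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training dataset",
"sec_num": "2.1"
},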
{
"text": "The testing dataset consisted of instances of the same target words from the training dataset. This dataset is part of OntoNotes (Hovy et al., 2006) . We used the sense-tagged dataset in which sentences containing target word instances are tagged with OntoNotes (Hovy et al., 2006) ",
"cite_spans": [
{
"start": 129,
"end": 148,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF4"
},
{
"start": 262,
"end": 281,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Testing dataset",
"sec_num": "2.2"
},
{
"text": "For the purposes of this section we provide an example (Table 3) in which a target word has 181 instances and 3 GS senses. A system has generated a clustering solution with 4 clusters covering all instances. Table 3 shows the number of common instances between clusters and GS senses.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 64,
"text": "(Table 3)",
"ref_id": "TABREF5"
},
{
"start": 208,
"end": 215,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Evaluation framework",
"sec_num": "3"
},
{
"text": "This section presents the measures of unsupervised evaluation, i.e V-Measure (Rosenberg and Hirschberg, 2007) and 2paired F-Score (Artiles et al., 2009) .",
"cite_spans": [
{
"start": 77,
"end": 109,
"text": "(Rosenberg and Hirschberg, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 130,
"end": 152,
"text": "(Artiles et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised evaluation",
"sec_num": "3.1"
},
{
"text": "Let w be a target word with N instances (data points) in the testing dataset. Let K = {C j |j = 1 . . . n} be a set of automatically generated clusters grouping these instances, and S = {G i |i = 1 . . . m} the set of gold standard classes containing the desirable groupings of w instances. V-Measure (Rosenberg and Hirschberg, 2007) assesses the quality of a clustering solution by explicitly measuring its homogeneity and its completeness. Homogeneity refers to the degree that each cluster consists of data points primarily belonging to a single GS class, while completeness refers to the degree that each GS class consists of data points primarily assigned to a single cluster (Rosenberg and Hirschberg, 2007) . Let h be homogeneity and c completeness. V-Measure is the harmonic mean of h and c, i.e.",
"cite_spans": [
{
"start": 301,
"end": 333,
"text": "(Rosenberg and Hirschberg, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 681,
"end": 713,
"text": "(Rosenberg and Hirschberg, 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "V M = 2\u2022h\u2022c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "h+c . Homogeneity. The homogeneity, h, of a clustering solution is defined in Formula 1, where H(S|K) is the conditional entropy of the class distribution given the proposed clustering and H(S) is the class entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h = 1, if H(S) = 0 1 \u2212 H(S|K) H(S) , otherwise",
"eq_num": "(1)"
}
],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(S) = \u2212 |S| i=1 |K| j=1 a ij N log |K| j=1 a ij N",
"eq_num": "(2)"
}
],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(S|K) = \u2212 |K| j=1 |S| i=1 a ij N log a ij |S| k=1 a kj",
"eq_num": "(3)"
}
],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "When H(S|K) is 0, the solution is perfectly homogeneous, because each cluster only contains data points that belong to a single class. However in an imperfect situation, H(S|K) depends on the size of the dataset and the distribution of class sizes. Hence, instead of taking the raw conditional entropy, V-Measure normalises it by the maximum reduction in entropy the clustering information could provide, i.e. H(S). When there is only a single class (H(S) = 0), any clustering would produce a perfectly homogeneous solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "Completeness. Symmetrically to homogeneity, the completeness, c, of a clustering solution is defined in Formula 4, where H(K|S) is the conditional entropy of the cluster distribution given the class distribution and H(K) is the clustering entropy. When H(K|S) is 0, the solution is perfectly complete, because all data points of a class belong to the same cluster.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "For the clustering example in Table 3 , homogeneity is equal to 0.404, completeness is equal to 0.37 and V-Measure is equal to 0.386.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c = 1, if H(K) = 0 1 \u2212 H(K|S) H(K) , otherwise",
"eq_num": "(4)"
}
],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "H(K) = \u2212 |K| j=1 |S| i=1 a ij N log |S| i=1 a ij N (5) H(K|S) = \u2212 |S| i=1 |K| j=1 a ij N log a ij |K| k=1 a ik (6) 3.1.2 Paired F-Score evaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
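{
"text": "To make Equations 1-6 concrete, the following sketch (ours, not part of the original paper) computes homogeneity, completeness and V-Measure for the contingency matrix of Table 3; the function names are our own.

import math

# Table 3: rows are clusters C1..C4, columns are GS senses G1..G3.
A = [[10, 10, 15],
     [20, 50,  0],
     [ 1, 10, 60],
     [ 5,  0,  0]]

def entropy(counts, n):
    # Entropy of a discrete distribution given raw counts.
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

def v_measure(a):
    n = sum(sum(row) for row in a)
    cluster_sizes = [sum(row) for row in a]            # |C_j|
    class_sizes = [sum(col) for col in zip(*a)]        # |G_i|
    h_s = entropy(class_sizes, n)                      # H(S), Eq. 2
    h_k = entropy(cluster_sizes, n)                    # H(K), Eq. 5
    h_s_k = -sum(a[j][i] / n * math.log(a[j][i] / cluster_sizes[j])
                 for j in range(len(a)) for i in range(len(a[0]))
                 if a[j][i] > 0)                       # H(S|K), Eq. 3
    h_k_s = -sum(a[j][i] / n * math.log(a[j][i] / class_sizes[i])
                 for j in range(len(a)) for i in range(len(a[0]))
                 if a[j][i] > 0)                       # H(K|S), Eq. 6
    h = 1.0 if h_s == 0 else 1.0 - h_s_k / h_s         # homogeneity, Eq. 1
    c = 1.0 if h_k == 0 else 1.0 - h_k_s / h_k         # completeness, Eq. 4
    return h, c, 2 * h * c / (h + c)

h, c, vm = v_measure(A)
print(round(h, 3), round(c, 3), round(vm, 3))  # 0.404 0.37 0.386
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},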
{
"text": "In this evaluation, the clustering problem is transformed into a classification problem. For each cluster C i we generate |C i | 2 instance pairs, where |C i | is the total number of instances that belong to cluster C i . Similarly, for each GS class G i we generate |G i | 2 instance pairs, where |G i | is the total number of instances that belong to GS class G i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "Let F (K) be the set of instance pairs that exist in the automatically induced clusters and F (S) be the set of instance pairs that exist in the gold standard. Precision can be defined as the number of common instance pairs between the two sets to the total number of pairs in the clustering solution (Equation 7), while recall can be defined as the number of common instance pairs between the two sets to the total number of pairs in the gold standard (Equation 8). Finally, precision and recall are combined to produce the harmonic mean (F S = 2\u2022P \u2022R P +R ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P = |F (K) \u2229 F (S)| |F (K)| (7) R = |F (K) \u2229 F (S)| |F (S)|",
"eq_num": "(8)"
}
],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
{
"text": "For example in Table 3 , we can generate 35 2 instance pairs for C 1 , 70 2 for C 2 , 71 2 for C 3 and 5 2 for C 4 , resulting in a total of 5505 instance pairs. In the same vein, we can generate 36 2 instance pairs for G 1 , 70 2 for G 2 and 75 2 for G 3 . In total, the GS classes contain 5820 instance pairs. There are 3435 common instance pairs, hence precision is equal to 62.39%, recall is equal to 59.09% and paired F-Score is equal to 60.69%.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "V-Measure evaluation",
"sec_num": "3.1.1"
},
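{
"text": "The pair counts of the example above can be reproduced with the following sketch (ours, not part of the original paper), which derives |F(K)|, |F(S)| and their intersection directly from the Table 3 matrix.

from math import comb

A = [[10, 10, 15],
     [20, 50,  0],
     [ 1, 10, 60],
     [ 5,  0,  0]]

cluster_pairs = sum(comb(sum(row), 2) for row in A)        # |F(K)| = 5505
class_pairs = sum(comb(sum(col), 2) for col in zip(*A))    # |F(S)| = 5820
# A pair lies in both sets iff its two instances share a cluster and a
# GS sense, i.e. they fall into the same cell of the matrix.
common = sum(comb(cell, 2) for row in A for cell in row)   # 3435

p = common / cluster_pairs        # precision, Equation 7
r = common / class_pairs          # recall, Equation 8
fs = 2 * p * r / (p + r)          # paired F-Score
print(f'P={p:.2%} R={r:.2%} FS={fs:.2%}')  # P=62.40% R=59.02% FS=60.66%
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paired F-Score evaluation",
"sec_num": "3.1.2"
},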
{
"text": "In this evaluation, the testing dataset is split into a mapping and an evaluation corpus. The first one is used to map the automatically induced clusters to GS senses, while the second is used to evaluate methods in a WSD setting. This evaluation follows the supervised evaluation of SemEval-2007 WSI task (Agirre and Soroa, 2007) , with the difference that the reported results are an average of 5 random splits. This repeated random sampling was performed to avoid the problems of the SemEval-2007 WSI challenge, in which different splits were providing different system rankings.",
"cite_spans": [
{
"start": 306,
"end": 330,
"text": "(Agirre and Soroa, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised evaluation",
"sec_num": "3.2"
},
{
"text": "Let us consider the example in Table 3 and assume that this matrix has been created by using the mapping corpus. Table 3 shows that C 1 is more likely to be associated with G 3 , C 2 is more likely to be associated with G 2 , C 3 is more likely to be associated with G 3 and C 4 is more likely to be associated with G 1 . This information can be utilised to map the clusters to GS senses.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 113,
"end": 120,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Supervised evaluation",
"sec_num": "3.2"
},
{
"text": "Particularly, the matrix shown in Table 3 is normalised to produce a matrix M , in which each entry depicts the estimated conditional probability P (G i |C j ). Given an instance I of tw from the evaluation corpus, a row cluster vector IC is created, in which each entry k corresponds to the score assigned to C k to be the winning cluster for instance I. The product of IC and M provides a row sense vector, IG, in which the highest scoring entry a denotes that G a is the winning sense. For example, if we produce the row cluster vector [C 1 = 0.8, C 2 = 0.1, C 3 = 0.1, C 4 = 0.0], and Table 3 , then we would get a row sense vector in which G 3 would be the winning sense with a score equal to 0.43.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 589,
"end": 596,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Supervised evaluation",
"sec_num": "3.2"
},
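{
"text": "The mapping step can be illustrated with the following sketch (ours, not part of the original paper): the Table 3 matrix is row-normalised into M, and the example cluster vector IC is multiplied by M to obtain the sense vector IG.

A = [[10, 10, 15],
     [20, 50,  0],
     [ 1, 10, 60],
     [ 5,  0,  0]]

# Row-normalise so that M[j][i] estimates P(G_i | C_j).
M = [[cell / sum(row) for cell in row] for row in A]

IC = [0.8, 0.1, 0.1, 0.0]  # scores for C1..C4 on one test instance

# IG = IC * M: one score per GS sense.
IG = [sum(IC[j] * M[j][i] for j in range(len(M))) for i in range(len(M[0]))]

best = max(range(len(IG)), key=IG.__getitem__)
print(f'G{best + 1} wins with score {IG[best]:.2f}')  # G3 wins with score 0.43
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised evaluation",
"sec_num": "3.2"
},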
{
"text": "In this section, we present the results of the 26 systems along with two baselines. The first baseline, Most Frequent Sense (MFS), groups all testing instances of a target word into one cluster. The second baseline, Random, randomly assigns an instance to one out of four clusters. The number of clusters of Random was chosen to be roughly equal to the average number of senses in the GS. This baseline is executed five times and the results are averaged. Table 4 shows the V-Measure (VM) performance of the 26 systems participating in the task. The last column shows the number of induced clusters of each system in the test set.The MFS baseline has a V-Measure equal to 0, since by definition its completeness is 1 and homogeneity is 0. All systems outperform this baseline, apart from one, whose V-Measure is equal to 0. Regarding the Random baseline, we observe that 17 perform better, which indicates that they have learned useful information better than chance. Table 5 : Paired F-Score unsupervised evaluation ters than the number of GS senses, although V-Measure does not increase monotonically with the number of clusters increasing. For that reason, we introduced the second unsupervised evaluation measure (paired F-Score) that penalises systems when they produce: (1) a higher number of clusters (low recall) or (2) a lower number of clusters (low precision), than the GS number of senses. Table 5 shows the performance of systems using the second unsupervised evaluation measure. In this evaluation, we observe that most of the systems perform better than Random. Despite that, none of the systems outperform the MFS baseline. It seems that systems generating a smaller number of clusters than the GS number of senses are biased towards the MFS, hence they are not able to perform better. On the other hand, systems generating a higher number of clusters are penalised by this measure. Systems generating a number of clusters roughly the same as the GS tend to conflate the GS senses lot more than the MFS. Table 6 shows the results of this evaluation for a 80-20 test set split, i.e. 80% for mapping and 20% for evaluation. The last columns shows the average number of GS senses identified by each system in the five splits of the evaluation datasets. Overall, 14 systems outperform the MFS, while 17 of them perform better than Random. stance, the highest ranked system in nouns is UoY, while in verbs Duluth-Mix-Narrow-Gap. It seems that depending on the part-of-speech of the target word, different algorithms, features and parameters' tuning have different impact. The supervised evaluation changes the distribution of clusters by mapping each cluster to a weighted vector of senses. Hence, it can potentially favour systems generating a high number of homogeneous clusters. For that reason, we applied a second testing set split, where 60% of the testing corpus was used for mapping and 40% for evaluation. Reducing the size of the mapping corpus allows us to observe, whether the above statement is correct, since systems with a high number of clusters would suffer from unreliable mapping. Table 7 shows the results of the second supervised evaluation. The ranking of participants did not change significantly, i.e. we observe only different rankings among systems belonging to the same participant. Despite that, Table 7 also shows that the reduction of the mapping corpus has a different impact on systems generating a larger number of clusters than the GS number of senses.",
"cite_spans": [],
"ref_spans": [
{
"start": 456,
"end": 463,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 968,
"end": 975,
"text": "Table 5",
"ref_id": null
},
{
"start": 1402,
"end": 1409,
"text": "Table 5",
"ref_id": null
},
{
"start": 2020,
"end": 2027,
"text": "Table 6",
"ref_id": "TABREF10"
},
{
"start": 3111,
"end": 3118,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Evaluation results",
"sec_num": "4"
},
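{
"text": "For clarity, the two baselines can be sketched as follows (our own illustration, not the organisers' code): MFS places every test instance in a single cluster, while Random draws one of four cluster labels per instance and is rerun several times so the scores can be averaged.

import random

def mfs_baseline(instance_ids):
    # Most Frequent Sense: every instance in one cluster.
    return {i: 0 for i in instance_ids}

def random_baseline(instance_ids, n_clusters=4, runs=5, seed=0):
    # Uniform random assignment, repeated over several runs whose
    # evaluation scores would then be averaged.
    rng = random.Random(seed)
    return [{i: rng.randrange(n_clusters) for i in instance_ids}
            for _ in range(runs)]

ids = range(181)
print(len(set(mfs_baseline(ids).values())))        # 1
print(len(set(random_baseline(ids)[0].values())))  # 4
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation results",
"sec_num": "4"
},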
{
"text": "For instance, UoY that generates 11.54 clusters outperformed the MFS by 3.77% in the 80-20 split and by 3.71% in the 60-40 split. The reduction of the mapping corpus had a minimal impact on its performance. In contrast, KSU KDD that generates 17.5 clusters was below the MFS by 6.49% in the 80-20 split and by 7.83% in the 60-40 split. The reduction of the mapping corpus had a larger impact in this case. This result indicates that the performance in this evaluation also depends on the distribution of instances within the clusters. Systems generating a skewed distribution, in which a small number of homogeneous clusters tag the majority of instances and a larger number of clusters tag only a few instances, are likely to have a better performance than systems that produce a more uniform distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised evaluation results",
"sec_num": "4.2"
},
{
"text": "We presented the description, evaluation framework and assessment of systems participating in the SemEval-2010 sense induction task. The evaluation has shown that the current state-of-the-art lacks unbiased measures that objectively evaluate clustering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The results of systems have shown that their performance in the unsupervised and supervised evaluation settings depends on cluster granularity along with the distribution of instances within the clusters. Our future work will focus on the assessment of sense induction on a task-oriented basis as well as on clustering evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We gratefully acknowledge the support of the EU FP7 INDECT project, Grant No. 218086, the Na-tional Science Foundation Grant NSF-0715078, Consistent Criteria for Word Sense Disambiguation, and the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022, a subcontract from the BBN-AGILE Team.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SemEval-2007 Task 02: Evaluating Word Sense Induction and Discrimination Systems",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of SemEval-2007",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre and Aitor Soroa. 2007. SemEval-2007 Task 02: Evaluating Word Sense Induction and Dis- crimination Systems. In Proceedings of SemEval- 2007, pages 7-12, Prague, Czech Republic. ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching Wordnet Concepts With Topic Signatures",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Olatz",
"middle": [],
"last": "Ansa",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Martinez",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Olatz Ansa, David Martinez, and Eduard Hovy. 2001. Enriching Wordnet Concepts With Topic Signatures. ArXiv Computer Science e-prints.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The role of named entities in web people search",
"authors": [
{
"first": "Javier",
"middle": [],
"last": "Artiles",
"suffix": ""
},
{
"first": "Enrique",
"middle": [],
"last": "Amig\u00f3",
"suffix": ""
},
{
"first": "Julio",
"middle": [],
"last": "Gonzalo",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "534--542",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Javier Artiles, Enrique Amig\u00f3, and Julio Gonzalo. 2009. The role of named entities in web people search. In Proceedings of EMNLP, pages 534-542. ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Wordnet: An Electronic Lexical Database",
"authors": [
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christiane Fellbaum. 1998. Wordnet: An Electronic Lexical Database. MIT Press, Cambridge, Mas- sachusetts, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ontonotes: the 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of NAACL, Com- panion Volume: Short Papers on XX, pages 57-60. ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semeval-2010 Task 14: Evaluation Setting for Word Sense Induction & Disambiguation Systems",
"authors": [
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [
"P"
],
"last": "Klapaftis",
"suffix": ""
}
],
"year": 2009,
"venue": "DEW '09: Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions",
"volume": "",
"issue": "",
"pages": "117--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suresh Manandhar and Ioannis P. Klapaftis. 2009. Semeval-2010 Task 14: Evaluation Setting for Word Sense Induction & Disambiguation Systems. In DEW '09: Proceedings of the Workshop on Se- mantic Evaluations: Recent Achievements and Fu- ture Directions, pages 117-122, Boulder, Colorado, USA. ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Discovering Word Senses from Text",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2002,
"venue": "KDD '02: Proceedings of the 8th ACM SIGKDD Conference",
"volume": "",
"issue": "",
"pages": "613--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Pantel and Dekang Lin. 2002. Discovering Word Senses from Text. In KDD '02: Proceedings of the 8th ACM SIGKDD Conference, pages 613- 619, New York, NY, USA. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Vmeasure: A Conditional Entropy-based External Cluster Evaluation Measure",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Rosenberg",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 EMNLP-CoNLL Joint Conference",
"volume": "",
"issue": "",
"pages": "410--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- measure: A Conditional Entropy-based External Cluster Evaluation Measure. In Proceedings of the 2007 EMNLP-CoNLL Joint Conference, pages 410- 420, Prague, Czech Republic.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bidirectional Inference With the Easiest-first Strategy for Tagging Sequence Data",
"authors": [
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Jun\u00edchi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the HLT-EMNLP Joint Conference",
"volume": "",
"issue": "",
"pages": "467--474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimasa Tsuruoka and Jun\u00edchi Tsujii. 2005. Bidi- rectional Inference With the Easiest-first Strategy for Tagging Sequence Data. In Proceedings of the HLT-EMNLP Joint Conference, pages 467-474, Morristown, NJ, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Hyperlex: Lexical Cartography for Information Retrieval",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "V\u00e9ronis",
"suffix": ""
}
],
"year": 2004,
"venue": "Computer Speech & Language",
"volume": "18",
"issue": "3",
"pages": "223--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean V\u00e9ronis. 2004. Hyperlex: Lexical Cartography for Information Retrieval. Computer Speech & Lan- guage, 18(3):223-252.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Training, testing and evaluation phases of SemEval-2010 Task 14",
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "",
"content": "<table><tr><td>: Training &amp; testing set details</td></tr><tr><td>for sense induction. Treating the testing data as</td></tr><tr><td>new unseen instances ensures a realistic evalua-</td></tr><tr><td>tion that allows to evaluate the clustering models</td></tr><tr><td>of each participating system.</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "",
"content": "<table><tr><td>: Training set creation: example queries for</td></tr><tr><td>target word failure</td></tr><tr><td>to the target word sense for which the query was</td></tr><tr><td>created. The relations considered were WordNet's</td></tr><tr><td>hypernyms, hyponyms, synonyms, meronyms and</td></tr><tr><td>holonyms. Each query was manually checked by</td></tr><tr><td>one of the organisers to remove ambiguous words.</td></tr><tr><td>The following example shows the query created</td></tr><tr><td>for the first 1 and second 2 WordNet sense of the</td></tr><tr><td>target noun failure.</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": "senses. The texts come from various news sources including CNN, ABC and others.",
"content": "<table><tr><td/><td>G1</td><td>G2</td><td>G3</td></tr><tr><td>C1</td><td>10</td><td>10</td><td>15</td></tr><tr><td>C2</td><td>20</td><td>50</td><td>0</td></tr><tr><td>C3</td><td>1</td><td>10</td><td>60</td></tr><tr><td>C4</td><td>5</td><td>0</td><td>0</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Clusters & GS senses matrix.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF7": {
"text": "",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF8": {
"text": "",
"content": "<table><tr><td>also shows that V-Measure tends to</td></tr><tr><td>favour systems producing a higher number of clus-</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF9": {
"text": "The ranking of systems in nouns and verbs is different. For in-",
"content": "<table><tr><td>System</td><td>SR (%)</td><td>SR (%)</td><td>SR (%)</td><td>#S</td></tr><tr><td/><td>(All)</td><td>(Nouns)</td><td>(Verbs)</td><td/></tr><tr><td>UoY</td><td>62.4</td><td>59.4</td><td>66.8</td><td>1.51</td></tr><tr><td>Duluth-WSI</td><td>60.5</td><td>54.7</td><td>68.9</td><td>1.66</td></tr><tr><td>Duluth-WSI-SVD</td><td>60.5</td><td>54.7</td><td>68.9</td><td>1.66</td></tr><tr><td>Duluth-WSI-Co-Gap</td><td>60.3</td><td>54.1</td><td>68.6</td><td>1.19</td></tr><tr><td>Duluth-WSI-Co</td><td>60.8</td><td>54.7</td><td>67.6</td><td>1.51</td></tr><tr><td>Duluth-WSI-Gap</td><td>59.8</td><td>54.4</td><td>67.8</td><td>1.11</td></tr><tr><td>KCDC-PC-2</td><td>59.8</td><td>54.1</td><td>68.0</td><td>1.21</td></tr><tr><td>KCDC-PC</td><td>59.7</td><td>54.6</td><td>67.3</td><td>1.39</td></tr><tr><td>KCDC-PCGD</td><td>59.5</td><td>53.3</td><td>68.6</td><td>1.47</td></tr><tr><td>KCDC-GDC</td><td>59.1</td><td>53.4</td><td>67.4</td><td>1.34</td></tr><tr><td>KCDC-GD</td><td>59.0</td><td>53.0</td><td>67.9</td><td>1.33</td></tr><tr><td>KCDC-PT</td><td>58.9</td><td>53.1</td><td>67.4</td><td>1.08</td></tr><tr><td>KCDC-GD-2</td><td>58.7</td><td>52.8</td><td>67.4</td><td>1.33</td></tr><tr><td>Duluth-WSI-SVD-Gap</td><td>58.7</td><td>53.2</td><td>66.7</td><td>1.01</td></tr><tr><td>MFS</td><td>58.7</td><td>53.2</td><td>66.6</td><td>1</td></tr><tr><td>Duluth-R-12</td><td>58.5</td><td>53.1</td><td>66.4</td><td>1.25</td></tr><tr><td>Hermit</td><td>58.3</td><td>53.6</td><td>65.3</td><td>2.06</td></tr><tr><td>Duluth-R-13</td><td>58.0</td><td>52.3</td><td>66.4</td><td>1.46</td></tr><tr><td>Random</td><td>57.3</td><td>51.5</td><td>65.7</td><td>1.53</td></tr><tr><td>Duluth-R-15</td><td>56.8</td><td>50.9</td><td>65.3</td><td>1.61</td></tr><tr><td>Duluth-Mix-Narrow-Gap</td><td>56.6</td><td>48.1</td><td>69.1</td><td>1.43</td></tr><tr><td>Duluth-Mix-Narrow-PK2</td><td>56.1</td><td>47.5</td><td>68.7</td><td>1.41</td></tr><tr><td>Duluth-R-110</td><td>54.8</td><td>48.3</td><td>64.2</td><td>1.94</td></tr><tr><td>KSU KDD</td><td>52.2</td><td>46.6</td><td>60.3</td><td>1.69</td></tr><tr><td>Duluth-MIX-PK2</td><td>51.6</td><td>41.1</td><td>67.0</td><td>1.23</td></tr><tr><td>Duluth-Mix-Gap</td><td>50.6</td><td>40.0</td><td>66.0</td><td>1.01</td></tr><tr><td>Duluth-Mix-Uni-PK2</td><td>19.3</td><td>1.8</td><td>44.8</td><td>0.62</td></tr><tr><td>Duluth-Mix-Uni-Gap</td><td>18.7</td><td>1.6</td><td>43.8</td><td>0.56</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF10": {
"text": "",
"content": "<table><tr><td>: Supervised recall (SR) (test set split:80%</td></tr><tr><td>mapping, 20% evaluation)</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF12": {
"text": "",
"content": "<table><tr><td>: Supervised recall (SR) (test set split:60%</td></tr><tr><td>mapping, 40% evaluation)</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}