|
{ |
|
"paper_id": "S07-1015", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:22:53.232727Z" |
|
}, |
|
"title": "SemEval-2007 Task 16: Evaluation of Wide Coverage Knowledge Resources", |
|
"authors": [ |
|
{ |
|
"first": "Montse", |
|
"middle": [], |
|
"last": "Cuadros", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "TALP Research Center Universitat Polit\u00e9cnica de Catalunya Barcelona", |
|
"location": { |
|
"country": "Spain" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "German", |
|
"middle": [], |
|
"last": "Rigau", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "IXA NLP Group Euskal Herriko Unibersitatea Donostia", |
|
"institution": "", |
|
"location": { |
|
"country": "Spain" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This task tries to establish the relative quality of available semantic resources (derived by manual or automatic means). The quality of each large-scale knowledge resource is indirectly evaluated on a Word Sense Disambiguation task. In particular, we use Senseval-3 and SemEval-2007 English Lexical Sample tasks as evaluation bechmarks to evaluate the relative quality of each resource. Furthermore, trying to be as neutral as possible with respect the knowledge bases studied, we apply systematically the same disambiguation method to all the resources. A completely different behaviour is observed on both lexical data sets (Senseval-3 and SemEval-2007).", |
|
"pdf_parse": { |
|
"paper_id": "S07-1015", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This task tries to establish the relative quality of available semantic resources (derived by manual or automatic means). The quality of each large-scale knowledge resource is indirectly evaluated on a Word Sense Disambiguation task. In particular, we use Senseval-3 and SemEval-2007 English Lexical Sample tasks as evaluation bechmarks to evaluate the relative quality of each resource. Furthermore, trying to be as neutral as possible with respect the knowledge bases studied, we apply systematically the same disambiguation method to all the resources. A completely different behaviour is observed on both lexical data sets (Senseval-3 and SemEval-2007).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Using large-scale knowledge bases, such as Word-Net (Fellbaum, 1998) , has become a usual, often necessary, practice for most current Natural Language Processing (NLP) systems. Even now, building large and rich enough knowledge bases for broad-coverage semantic processing takes a great deal of expensive manual effort involving large research groups during long periods of development. In fact, dozens of person-years have been invested in the development of wordnets for various languages (Vossen, 1998) . For example, in more than ten years of manual construction (from version 1.5 to 2.1), WordNet passed from 103,445 semantic relations to 245,509 semantic relations 1 . That is, around one thousand new relations per month. But this data does not seems to be rich enough to support advanced concept-based NLP applications directly. It seems that applications will not scale up to working in open domains without more detailed and rich general-purpose (and also domain-specific) semantic knowledge built by automatic means.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 68, |
|
"text": "Word-Net (Fellbaum, 1998)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 505, |
|
"text": "(Vossen, 1998)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Fortunately, during the last years, the research community has devised a large set of innovative methods and tools for large-scale automatic acquisition of lexical knowledge from structured and unstructured corpora. Among others we can mention eXtended WordNet (Mihalcea and Moldovan, 2001) , large collections of semantic preferences acquired from SemCor (Agirre and Martinez, 2001 ; Agirre and Martinez, 2002) or acquired from British National Corpus (BNC) (McCarthy, 2001 ), largescale Topic Signatures for each synset acquired from the web (Agirre and de la Calle, 2004) or acquired from the BNC (Cuadros et al., 2005) . Obviously, these semantic resources have been acquired using a very different set of methods, tools and corpora, resulting on a different set of new semantic relations between synsets (or between synsets and words).", |
|
"cite_spans": [ |
|
{ |
|
"start": 261, |
|
"end": 290, |
|
"text": "(Mihalcea and Moldovan, 2001)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 356, |
|
"end": 382, |
|
"text": "(Agirre and Martinez, 2001", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 411, |
|
"text": "Agirre and Martinez, 2002)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 474, |
|
"text": "(McCarthy, 2001", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 562, |
|
"end": 574, |
|
"text": "Calle, 2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 600, |
|
"end": 622, |
|
"text": "(Cuadros et al., 2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Many international research groups are working on knowledge-based WSD using a wide range of approaches (Mihalcea, 2006) . However, less attention has been devoted on analysing the quality of each semantic resource. In fact, each resource presents different volume and accuracy figures (Cuadros et al., 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 119, |
|
"text": "(Mihalcea, 2006)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 307, |
|
"text": "(Cuadros et al., 2006)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we evaluate those resources on the SemEval-2007 English Lexical Sample task. For comparison purposes, we also include the results of the same resources on the Senseval-3 English Lexical sample task. In both cases, we used only the nominal part of both data sets and we also included some basic baselines.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In order to compare the knowledge resources, all the resources are evaluated as Topic Signatures (TS). That is, word vectors with weights associated to a particular synset. Normally, these word vectors are obtained by collecting from the resource under study the word senses appearing as direct relatives. This simple representation tries to be as neutral as possible with respect to the resources studied. A common WSD method has been applied to all knowledge resources on the test examples of Senseval-3 and SemEval-2007 English lexical sample tasks. A simple word overlapping counting is performed between the Topic Signature and the test example. The synset having higher overlapping word counts is selected. In fact, this is a very simple WSD method which only considers the topical information around the word to be disambiguated. Finally, we should remark that the results are not skewed (for instance, for resolving ties) by the most frequent sense in WN or any other statistically predicted knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Framework", |
|
"sec_num": "2" |
|
}, |
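
{

"text": "As a minimal sketch (our own Python illustration, not the authors' released code; all names are hypothetical), the overlap-based WSD method described above amounts to the following:\n\ndef disambiguate(context_words, topic_signatures):\n    # context_words: tokens of the test example.\n    # topic_signatures: dict mapping each candidate synset of the target word\n    # to the set of words in its Topic Signature.\n    best_synset, best_overlap = None, 0\n    context = set(context_words)\n    for synset, signature in topic_signatures.items():\n        overlap = len(context & signature)  # exact word-form matching, no preprocessing\n        if overlap > best_overlap:\n            best_synset, best_overlap = synset, overlap\n    return best_synset  # None if nothing overlaps; ties are not resolved by MFS",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation Framework",

"sec_num": "2"

},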
|
{ |
|
"text": "As an example, table 1 shows a test example of SemEval-2007 corresponding to the first sense of the noun capital. In bold there are the words that appear in its corresponding Topic Signature acquired from the web.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Note that although there are several important related words, the WSD process implements exact word form matching (no preprocessing is performed).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Framework", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We have designed a number of basic baselines in order to establish a complete evaluation framework for comparing the performance of each semantic resource on the English WSD tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Baselines", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "RANDOM: For each target word, this method selects a random sense. This baseline can be considered as a lower-bound. Train Topic Signatures (TRAIN): This baseline uses the training corpus to directly build a Topic Signature using TFIDF measure for each word sense. Note that this baseline can be considered as an upper-bound of our evaluation. Table 2 presents the precision (P), recall (R) and F1 measure (harmonic mean of recall and precision) of the different baselines in the English Lexical Sample exercise of Senseval-3. In this table, TRAIN has been calculated with a vector size of at maximum 450 words. As expected, RANDOM baseline obtains the poorest result. The most frequent senses obtained from SemCor (SEMCOR-MFS) and WN (WN-MFS) are both below the most frequent sense of the training corpus (TRAIN-MFS). However, all of them are far below the Topic Signatures acquired using the training corpus (TRAIN). Table 3 presents the precision (P), recall (R) and F1 measure (harmonic mean of recall and precision) of the different baselines in the English Lexical Sample exercise of SemEval-2007. Again, TRAIN has been calculated with a vector size of at maximum 450 words. As before, RANDOM baseline obtains the poorest result. The most frequent senses obtained from SemCor (SEMCOR-MFS) and WN (WN-MFS) are both far below the most frequent sense of the training corpus (TRAIN-MFS), and all of them are below the Topic Signatures acquired using the training corpus (TRAIN).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 343, |
|
"end": 350, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 918, |
|
"end": 925, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Basic Baselines", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Comparing both lexical sample sets, SemEval-2007 data appears to be more skewed and simple for WSD systems than the data set from Senseval-3: less <instance id=\"19:0@11@wsj/01/wsj 0128@wsj@en@on\" docsrc=\"wsj\"> <context> \" A sweeping restructuring of the industry is possible . \" Standard & Poor 's Corp. says First Boston , Shearson and Drexel Burnham Lambert Inc. , in particular , are likely to have difficulty shoring up their credit standing in months ahead . What worries credit-rating concerns the most is that Wall Street firms are taking long-term risks with their own <head> capital </head> via leveraged buy-out and junk bond financings . That 's a departure from their traditional practice of transferring almost all financing risks to investors . Whereas conventional securities financings are structured to be sold quickly , Wall Street 's new penchant for leveraged buy-outs and junk bonds is resulting in long-term lending commitments that stretch out for months or years . </context> </instance> polysemous (as shown by the RANDOM baseline), less similar than SemCor word sense frequency distributions (as shown by SemCor-MFS), more similar to the first sense of WN (as shown by WN-MFS), much more skewed to the first sense of the training corpus (as shown by TRAIN-MFS), and much more easy to be learned (as shown by TRAIN).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Baselines", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The evaluation presented here covers a wide range of large-scale semantic resources: WordNet (WN) (Fellbaum, 1998), eXtended WordNet (Mihalcea and Moldovan, 2001) , large collections of semantic preferences acquired from SemCor (Agirre and Martinez, 2001; Agirre and Martinez, 2002) or acquired from the BNC (McCarthy, 2001 ), large-scale Topic Signatures for each synset acquired from the web (Agirre and de la Calle, 2004) or SemCor (Landes et al., 2006) . Although these resources have been derived using different WN versions, using the technology for the automatic alignment of wordnets (Daud\u00e9 et al., 2003) , most of these resources have been integrated into a common resource called Multilingual Central Repository (MCR) (Atserias et al., 2004) maintaining the compatibility among all the knowledge resources which use a particular WN version as a sense repository. Furthermore, these mappings al-low to port the knowledge associated to a particular WN version to the rest of WN versions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 162, |
|
"text": "(Mihalcea and Moldovan, 2001)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 255, |
|
"text": "(Agirre and Martinez, 2001;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 256, |
|
"end": 282, |
|
"text": "Agirre and Martinez, 2002)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 424, |
|
"text": "Calle, 2004)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 456, |
|
"text": "(Landes et al., 2006)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 612, |
|
"text": "(Daud\u00e9 et al., 2003)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 728, |
|
"end": 751, |
|
"text": "(Atserias et al., 2004)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 323, |
|
"text": "(McCarthy, 2001", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The current version of the MCR contains 934,771 semantic relations between synsets, most of them acquired by automatic means. This represents almost four times larger than the Princeton WordNet (245,509 unique semantic relations in WordNet 2.1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Hereinafter we will refer to each semantic resource as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "WN (Fellbaum, 1998) : This resource uses the direct relations encoded in WN1.6 or WN2.0 (for instance, tree#n#1-hyponym->teak#n#2). We also tested WN 2 (using relations at distances 1 and 2), WN 3 (using relations at distances 1 to 3) and WN 4 (using relations at distances 1 to 4).", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 19, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
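
{

"text": "To illustrate the WN 2 to WN 4 variants, the following sketch (our own Python illustration with hypothetical names; it assumes a precomputed map from each synset to its directly related synsets) collects all synsets reachable within a given number of relation steps, whose attached words then form the signature:\n\ndef relatives_within(synset, neighbours, max_dist):\n    # neighbours: dict mapping a synset to the set of synsets directly\n    # related to it in WN (hypernyms, hyponyms, etc.).\n    frontier, seen = {synset}, {synset}\n    for _ in range(max_dist):  # max_dist=1 gives plain WN, 2 gives WN 2, and so on\n        frontier = {r for s in frontier for r in neighbours.get(s, set())} - seen\n        seen |= frontier\n    return seen - {synset}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Large scale knowledge Resources",

"sec_num": "3"

},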
|
{ |
|
"text": "XWN (Mihalcea and Moldovan, 2001) : This resource uses the direct relations encoded in eXtended WN (for instance, teak#n#2-gloss->wood#n#1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 33, |
|
"text": "(Mihalcea and Moldovan, 2001)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "WN+XWN: This resource uses the direct relations included in WN and XWN. We also tested (WN+XWN) 2 (using either WN or XWN relations at distances 1 and 2, for instance, tree#n#1-related->wood#n#1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "spBNC (McCarthy, 2001 ): This resource contains 707,618 selectional preferences acquired for subjects and objects from BNC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 21, |
|
"text": "(McCarthy, 2001", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "spSemCor (Agirre and Martinez, 2002) : This resource contains the selectional preferences acquired for subjects and objects from SemCor (for instance, read#v#1-tobj->book#n#1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 9, |
|
"end": 36, |
|
"text": "(Agirre and Martinez, 2002)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "MCR (Atserias et al., 2004) : This resource uses the direct relations included in MCR but excluding spBNC because of its poor performance. Thus, MCR contains the direct relations from WN (as tree#n#1-hyponym->teak#n#2), XWN (as teak#n#2-gloss->wood#n#1), and spSemCor (as read#v#1-tobj->book#n#1) but not the indi- Table 4 : Semantic relations uploaded in the MCR rect relations of (WN+XWN) 2 (tree#n#1-related->wood#n#1). We also tested MCR 2 (using relations at distances 1 and 2), which also integrates (WN+XWN) 2 relations. Table 4 shows the number of semantic relations between synset pairs in the MCR.", |
|
"cite_spans": [ |
|
{ |
|
"start": 4, |
|
"end": 27, |
|
"text": "(Atserias et al., 2004)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 322, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 528, |
|
"end": 535, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Large scale knowledge Resources", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Topic Signatures (TS) are word vectors related to a particular topic (Lin and Hovy, 2000) . Topic Signatures are built by retrieving context words of a target topic from large corpora. In our case, we consider word senses as topics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 89, |
|
"text": "(Lin and Hovy, 2000)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Signatures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For this study, we use two different large-scale Topic Signatures. The first constitutes one of the largest available semantic resource with around 100 million relations (between synsets and words) acquired from the web (Agirre and de la Calle, 2004). The second has been derived directly from SemCor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Signatures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "TSWEB 2 : Inspired by the work of , these Topic Signatures were constructed using monosemous relatives from WordNet (synonyms, hypernyms, direct and indirect hyponyms, and siblings), querying Google and retrieving up to one thousand snippets per query (that is, a word sense), extracting the words with distinctive frequency using TFIDF. For these experiments, we used at maximum the first 700 words of each TS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Signatures", |
|
"sec_num": "3.1" |
|
}, |
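
{

"text": "The monosemous-relatives step can be sketched as follows (our own Python illustration with hypothetical names; the actual query construction and retrieval details may differ):\n\ndef monosemous_relative_query(relatives, polysemy):\n    # relatives: words related to the target sense in WN (synonyms, hypernyms,\n    # direct and indirect hyponyms, siblings).\n    # polysemy: dict mapping a word to its number of WN senses; only words with\n    # exactly one sense can act as unambiguous proxies for the target sense.\n    monosemous = [w for w in relatives if polysemy.get(w, 0) == 1]\n    return ' OR '.join(monosemous)  # query string sent to the search engine",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Signatures",

"sec_num": "3.1"

},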
|
{ |
|
"text": "TSSEM: These Topic Signatures have been constructed using the part of SemCor having all words tagged by PoS, lemmatized and sense tagged according to WN1.6 totalizing 192,639 words. For each word-sense appearing in SemCor, we gather all sentences for that word sense, building a TS using TFIDF for all word-senses co-occurring in those sentences. political party#n#1 2.3219 party#n#1 2.3219 election#n#1 1.0926 nominee#n#1 0.4780 candidate#n#1 0.4780 campaigner#n#1 0.4780 regime#n#1 0.3414 identification#n#1 0.3414 government#n#1 0.3414 designation#n#3 0.3414 authorities#n#1 0.3414 Table 5 : Topic Signatures for party#n#1 obtained from Semcor (11 out of 719 total word senses)", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 585, |
|
"end": 592, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Topic Signatures", |
|
"sec_num": "3.1" |
|
}, |
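
{

"text": "A rough sketch of how such signatures can be built (our own Python simplification; the exact TFIDF variant, normalisation and cut-offs used for TSSEM may differ):\n\nimport math\nfrom collections import Counter\n\ndef build_topic_signatures(sentences_by_sense):\n    # sentences_by_sense: dict mapping a word sense to the list of its SemCor\n    # sentences, each sentence being a list of sense-tagged tokens.\n    n = len(sentences_by_sense)\n    tf = {sense: Counter(tok for sent in sents for tok in sent)\n          for sense, sents in sentences_by_sense.items()}\n    df = Counter(tok for counts in tf.values() for tok in counts)  # senses containing tok\n    return {sense: {tok: c * math.log(n / df[tok]) for tok, c in counts.items()}\n            for sense, counts in tf.items()}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic Signatures",

"sec_num": "3.1"

},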
|
{ |
|
"text": "In table 5, there is an example of the first wordsenses we calculate from party#n#1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic Signatures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The total number of relations between WN synsets acquired from SemCor is 932,008. Table 6 presents ordered by F1 measure, the performance of each knowledge resource on Senseval-3 and the average size of the TS per word-sense. The average size of the TS per word-sense is the number of words associated to a synset on average. Obviously, the best resources would be those obtaining better performances with a smaller number of associated words per synset. The best results for precision, recall and F1 measures are shown in bold. We also mark in italics those resources using non-direct relations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 89, |
|
"text": "Table 6", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Topic Signatures", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Surprisingly, the best results are obtained by TSSEM (with F1 of 52.4). The lowest result is obtained by the knowledge directly gathered from WN mainly because of its poor coverage (R of 18.4 and F1 of 26.1). Also interesting, is that the knowledge integrated in the MCR although partly derived by automatic means performs much better in terms of precision, recall and F1 measures than using them separately (F1 with 18.4 points higher than WN, 9.1 than XWN and 3.7 than spSemCor).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating each resource", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Despite its small size, the resources derived from SemCor obtain better results than its counterparts using much larger corpora (TSSEM vs. TSWEB and spSemCor vs. spBNC).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating each resource", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Regarding the basic baselines, all knowledge resources surpass RANDOM, but none achieves neither WN-MFS, TRAIN-MFS nor TRAIN. Only TSSEM obtains better results than SEMCOR-MFS and is very close to the most frequent sense of WN (WN-MFS) and the training (TRAIN-MFS). Table 7 presents ordered by F1 measure, the performance of each knowledge resource on SemEval-2007 and its average size of the TS per word-sense 3 . The best results for precision, recall and F1 measures are shown in bold. We also mark in italics those resources using non-direct relations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 266, |
|
"end": 273, |
|
"text": "Table 7", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluating each resource", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Interestingly, on SemEval-2007, all the knowledge resources behave differently. Now, the best results are obtained by (WN+XWN) 2 (with F1 of 52.9), followed by TSWEB (with F1 of 51.0). The lowest result is obtained by the knowledge encoded in spBNC mainly because of its poor precision (P of 24.4 and F1 of 20.8).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating each resource", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Regarding the basic baselines, spBNC, WN (and also WN 2 and WN 4 ) and spSemCor do not surpass RANDOM, and none achieves neither WN-MFS, TRAIN-MFS nor TRAIN. Now, WN+XWN, XWN, TSWEB and (WN+XWN) 2 obtain better results than SEMCOR-MFS but far below the most frequent sense of WN (WN-MFS) and the training (TRAIN-MFS).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluating each resource", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to evaluate deeply the contribution of each knowledge resource, we also provide some results of the combined outcomes of several resources. The 3 The average size is different with respect Senseval-3 because the words selected for this task are different Table 8 : F1 fine-grained results for the 4 systemcombinations on Senseval-3 combinations are performed following a very basic strategy (Brody et al., 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 420, |
|
"text": "(Brody et al., 2006)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 271, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combination of Knowledge Resources", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Rank-Based Combination (Rank): Each semantic resource provides a ranking of senses of the word to be disambiguated. For each sense, its placements according to each of the methods are summed and the sense with the lowest total placement (closest to first place) is selected. Table 8 presents the F1 measure result with respect this method when combining four different semantic resources on the Senseval-3 test set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 282, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combination of Knowledge Resources", |
|
"sec_num": "5" |
|
}, |
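
{

"text": "A minimal sketch of this rank-based combination (our own Python illustration; names are hypothetical):\n\ndef rank_combination(rankings):\n    # rankings: one ranking per resource, each a list of candidate senses\n    # ordered best-first for the target word.\n    totals = {}\n    for ranking in rankings:\n        for place, sense in enumerate(ranking, start=1):\n            totals[sense] = totals.get(sense, 0) + place\n    # the sense with the lowest summed placement (closest to first place) wins\n    return min(totals, key=totals.get)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Combination of Knowledge Resources",

"sec_num": "5"

},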
|
{ |
|
"text": "Regarding the basic baselines, this combination outperforms the most frequent sense of SemCor (SEMCOR-MFS with F1 of 49.1), WN (WN-MFS with F1 of 53.0) and, the training data (TRAIN-MFS with F1 of 54.5). Table 9 presents the F1 measure result with respect the rank mthod when combining the same four different semantic resources on the SemEval-2007 test set.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 204, |
|
"end": 211, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Combination of Knowledge Resources", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Rank MCR+(WN+XWN) 2 +TSWEB+TSSEM 38.9 Table 9 : F1 fine-grained results for the 4 systemcombinations on SemEval-2007 In this case, the combination of the four resources obtains much lower result. Regarding the baselines, this combination performs lower than the most frequent senses from SEMCOR, WN or the training data. This could be due to the poor individual performance of the knowledge derived from SemCor (spSemCor, TSSEM and MCR, which integrates spSemCor). Possibly, in this case, the knowledge comming from SemCor is counterproductive. Interestingly, the knowledge derived from other sources (XWN from WN glosses and TSWEB from the web) seems to be more robust with respect corpus changes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 104, |
|
"end": 116, |
|
"text": "SemEval-2007", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 45, |
|
"text": "Table 9", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "KB", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Although this task had no participants, we provide the performances of a large set of knowledge resources on two different test sets: Senseval-3 and SemEval-2007 English Lexical Sample task. We also provide the results of a system combination of four large-scale semantic resources. When evaluated on Senseval-3, the combination of knowledge sources surpass the most-frequent classifiers. However, a completely different behaviour is observed on SemEval-2007 data test. In fact, both corpora present very different characteristics. The results show that some resources seems to be less dependant than others to corpus changes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Obviously, these results suggest that much more research on acquiring, evaluating and using largescale semantic resources should be addressed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Symmetric relations are counted only once.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://ixa.si.ehu.es/Ixa/resources/ sensecorpus", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We want to thank the valuable comments of the anonymous reviewers. This work has been partially supported by the projects KNOW (TIN2006-15049-C03-01) and ADIMEN (EHU06/113).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": "7" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Publicly available topic signatures for all wordnet nominal senses", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lopez De La Calle", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Agirre and O. Lopez de la Calle. 2004. Publicly avail- able topic signatures for all wordnet nominal senses. In Proceedings of LREC, Lisbon, Portugal.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning class-to-class selectional preferences", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Martinez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of CoNLL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Agirre and D. Martinez. 2001. Learning class-to-class selectional preferences. In Proceedings of CoNLL, Toulouse, France.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Integrating selectional preferences in wordnet", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Martinez", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of GWC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Agirre and D. Martinez. 2002. Integrating selectional preferences in wordnet. In Proceedings of GWC, Mysore, India.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The meaning multilingual central repository", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Atserias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Villarejo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Rigau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piek", |
|
"middle": [], |
|
"last": "Vossen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of GWC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Atserias, L. Villarejo, G. Rigau, E. Agirre, J. Car- roll, B. Magnini, and Piek Vossen. 2004. The mean- ing multilingual central repository. In Proceedings of GWC, Brno, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Ensemble methods for unsupervised wsd", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Brody", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of COLING-ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "97--104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Brody, R. Navigli, and M. Lapata. 2006. Ensem- ble methods for unsupervised wsd. In Proceedings of COLING-ACL, pages 97-104.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Comparing methods for automatic acquisition of topic signatures", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Cuadros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Padr\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Rigau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of RANLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Cuadros, L. Padr\u00f3, and G. Rigau. 2005. Comparing methods for automatic acquisition of topic signatures. In Proceedings of RANLP, Borovets, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "An empirical study for automatic acquisition of topic signatures", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Cuadros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Padr\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Rigau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of GWC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Cuadros, L. Padr\u00f3, and G. Rigau. 2006. An empirical study for automatic acquisition of topic signatures. In Proceedings of GWC, pages 51-59.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Validation and Tuning of Wordnet Mapping Techniques", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Daud\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Padr\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Rigau", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of RANLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Daud\u00e9, L. Padr\u00f3, and G. Rigau. 2003. Validation and Tuning of Wordnet Mapping Techniques. In Proceed- ings of RANLP, Borovets, Bulgaria.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "WordNet. An Electronic Lexical Database", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Fellbaum, editor. 1998. WordNet. An Electronic Lexi- cal Database. The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Building a semantic concordance of english. In WordNet: An electronic lexical database and some applications", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Landes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Tengi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "97--104", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Landes, C. Leacock, and R. Tengi. 2006. Build- ing a semantic concordance of english. In WordNet: An electronic lexical database and some applications. MIT Press, Cambridge,MA., 1998, pages 97-104.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Using Corpus Statistics and WordNet Relations for Sense Identification", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Chodorow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Computational Linguistics", |
|
"volume": "24", |
|
"issue": "1", |
|
"pages": "147--166", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Leacock, M. Chodorow, and G. Miller. 1998. Us- ing Corpus Statistics and WordNet Relations for Sense Identification. Computational Linguistics, 24(1):147- 166.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The automated acquisition of topic signatures for text summarization", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of COLING", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Lin and E. Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceed- ings of COLING. Strasbourg, France.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Lexical Acquisition at the Syntax-Semantics Interface: Diathesis Aternations, Subcategorization Frames and Selectional Preferences", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. McCarthy. 2001. Lexical Acquisition at the Syntax- Semantics Interface: Diathesis Aternations, Subcate- gorization Frames and Selectional Preferences. Ph.D. thesis, University of Sussex.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "extended wordnet: Progress report", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Moldovan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of NAACL Workshop on WordNet and Other Lexical Resources", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Mihalcea and D. Moldovan. 2001. extended wordnet: Progress report. In Proceedings of NAACL Workshop on WordNet and Other Lexical Resources, Pittsburgh, PA.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Word Sense Disambiguation: Algorithms and applications", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Speech and Language Technology", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. Mihalcea. 2006. Knowledge based methods for word sense disambiguation. In E. Agirre and P. Edmonds (Eds.) Word Sense Disambiguation: Algorithms and applications., volume 33 of Text, Speech and Lan- guage Technology. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "EuroWordNet: A Multilingual Database with Lexical Semantic Networks", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Vossen, editor. 1998. EuroWordNet: A Multilingual Database with Lexical Semantic Networks . Kluwer Academic Publishers .", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"content": "<table><tr><td>: P, R and F1 results for English Lexical Sam-</td></tr><tr><td>ple Baselines of Senseval-3</td></tr><tr><td>SemCor MFS (SEMCOR-MFS): This method</td></tr><tr><td>selects the most frequent sense of the target word</td></tr><tr><td>in SemCor.</td></tr><tr><td>WordNet MFS (WN-MFS): This method selects</td></tr><tr><td>the first sense in WN1.6 of the target word.</td></tr><tr><td>TRAIN-MFS: This method selects the most fre-</td></tr><tr><td>quent sense in the training corpus of the target word.</td></tr></table>", |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>Baselines</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>TRAIN</td><td colspan=\"3\">87.6 87.6 87.6</td></tr><tr><td>TRAIN-MFS</td><td colspan=\"3\">81.2 79.6 80.4</td></tr><tr><td>WN-MFS</td><td colspan=\"3\">66.2 59.9 62.9</td></tr><tr><td colspan=\"4\">SEMCOR-MFS 42.4 38.4 40.3</td></tr><tr><td>RANDOM</td><td colspan=\"3\">27.4 27.4 27.4</td></tr></table>", |
|
"html": null, |
|
"text": "Example of test id for capital#n which its correct sense is 1", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table/>", |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"content": "<table><tr><td>: P, R and F1 fine-grained results for the</td></tr><tr><td>resources evaluated individually at Senseval-03 En-</td></tr><tr><td>glish Lexical Sample Task.</td></tr></table>", |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"content": "<table><tr><td colspan=\"2\">: P, R and F1 fine-grained results for the</td></tr><tr><td colspan=\"2\">resources evaluated individually at SemEval-2007,</td></tr><tr><td>English Lexical Sample Task .</td><td/></tr><tr><td>KB</td><td>Rank</td></tr><tr><td colspan=\"2\">MCR+(WN+XWN) 2 +TSWEB+TSSEM 55.5</td></tr></table>", |
|
"html": null, |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |