|
{ |
|
"paper_id": "S01-1019", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:35:32.886649Z" |
|
}, |
|
"title": "Semantic Tagging Using W ordN et Examples", |
|
"authors": [ |
|
{ |
|
"first": "Sherwood", |
|
"middle": [], |
|
"last": "Haynes", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Illinois Institute of Technology", |
|
"location": { |
|
"postCode": "60616", |
|
"region": "Illinois", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes IITl, IIT2, and IIT3, three versions of a semantic tagging system basing its sense discriminations on WordNet examples. The system uses WordNet relations aggressively, both in identifying examples of words with similar lexical constraints and matching those examples to the context.", |
|
"pdf_parse": { |
|
"paper_id": "S01-1019", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes IITl, IIT2, and IIT3, three versions of a semantic tagging system basing its sense discriminations on WordNet examples. The system uses WordNet relations aggressively, both in identifying examples of words with similar lexical constraints and matching those examples to the context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The ability of natural language understanding systems to determine the meaning of words in context has long been suggested as a necessary precursor to a deep understanding of the context (Ide and Veronis, 1998; Wilks, 1988) . Competitions such as SENSEV AL (Kilgarriff anp Palmer, 2000) and SENSEV AL-2 (SENSEV AL-2, 2001 ) model the determination of word meaning as a choice of one or more items from a fixed sense inventory, comparing a gold standard based on human judgment to the performance of computational word sense disambiguation systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 210, |
|
"text": "(Ide and Veronis, 1998;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 211, |
|
"end": 223, |
|
"text": "Wilks, 1988)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 286, |
|
"text": "(Kilgarriff anp Palmer, 2000)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 321, |
|
"text": "SENSEV AL-2 (SENSEV AL-2, 2001", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Statistically based systems that train on tagged data have regularly performed best on these tasks (Kilgarriff and Rosenzweig, 2000) . The difficulty with these supervised systems is their insatiable need for reliable annotated data, frequently called the \"data acquisition bottleneck.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 99, |
|
"end": 132, |
|
"text": "(Kilgarriff and Rosenzweig, 2000)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The systems described here avoid the data acquisition bottleneck by using only a sense repository, or more specifically the examples and relationships contained in the sense repository.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "WordNet version 1.7 (Miller 1990; Fellbaum 1998; WordNet, 2001 ) was chosen as the sense repository for the English Lexical Sample task (where systems disambiguate a single word or collocation in context) and the English All Word task (where systems disambiguate all content words) of the SENSEV AL-2 competition. WordNet defmes a word sense (or synset) as a collection of words that can express the sense, a definition of the sense (called a gloss), zero or more examples of the use of the word sense, and a set of tuples that defme relations between synsets or synset words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 33, |
|
"text": "(Miller 1990;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 34, |
|
"end": 48, |
|
"text": "Fellbaum 1998;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 49, |
|
"end": 62, |
|
"text": "WordNet, 2001", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This paper describes three systems that were entered in SENSEVAL-2 competition, IITl, IIT2, and IIT3. liT 1 and IIT2 were entered in both the English All Word task and the English Lexical Sample task. IIT3 was entered in the English All Word task only. All three systems use the same unsupervised approach to determine the sense of a target word:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Approach", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. for each syntactically plausible sense, fmd the set of WordNet examples that appear in that synset or a related synset. 2. for each example, compare the example to the context, scoring the quality of the match. 3. choose the sense whose synset is responsible for the inclusion of the highest scoring example. Hereafter, target words identify the words to be disambiguated (so identified by the SENSEV AL-2 task). The context identifies the text surrounding and including a target word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "General Approach", |
|
"sec_num": "2" |
|
}, |
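To make the loop above concrete, here is a minimal Python sketch of the three steps (referenced in step 3 above). The callables collect_examples and score_match are hypothetical stand-ins for the example gathering of Section 2.1 and the matching and scoring of Sections 2.2-2.3; nothing here is taken from the authors' implementation.

```python
# Minimal sketch of the three-step tagging loop described in Section 2.
# collect_examples() and score_match() are hypothetical placeholders for the
# procedures of Sections 2.1-2.3; they are not the paper's actual code.

def disambiguate(target_word, context, senses, collect_examples, score_match):
    """Return the sense(s) whose best-matching WordNet example scores highest."""
    best_score, best_senses = 0.0, []
    for sense in senses:                                   # 1. each plausible sense
        for example in collect_examples(sense):            #    examples from synset + relations
            score = score_match(example, context, target_word)  # 2. compare to context
            if score > best_score:
                best_score, best_senses = score, [sense]
            elif score == best_score and score > 0 and sense not in best_senses:
                best_senses.append(sense)                  #    ties keep multiple senses
    # 3. the highest-scoring example decides; default to the first sense otherwise
    return best_senses if best_senses else senses[:1]
```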
|
{ |
|
"text": "The systems first collect a set of example sentences and phrases from WordNet for each synset matching a target word (or its canonical or collocational form).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collecting Examples of a Sense", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The set includes examples from the synset itself as well as those of related synsets. Table 1 Use ofWordNet Relations includes examples from repeated application of the relation. That is, for the hypernym relation, examples from all ancestor synsets are included. Table 2 lists the examples identified for the synset for faithful -steadfast in affection or allegiance. WordNet 1.7 displays the synset as:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 86, |
|
"end": 93, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 271, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Collecting Examples of a Sense", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "faithful (vs. unfaithful) => firm, loyal, truehearted, fast(postnominal) =>true", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collecting Examples of a Sense", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Also See-> constant#3; true#!; trustworthy#!, trusty#! This faithful synset contributes 3 examples, the see also relation contributes examples for constant, true, and trustworthy, the similarity relation contributes the examples from the firm synset and the antonym relation contributes the unfaithfUl example.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Collecting Examples of a Sense", |
|
"sec_num": "2.1" |
|
}, |
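The example-gathering step can be sketched with NLTK's WordNet interface. This is an assumption-laden illustration: NLTK ships a later WordNet than the 1.7 release used in the paper, so the returned examples may differ, and the particular relations followed below (hypernyms transitively; similarity, also-see, and antonym links one step) are inferred from the faithful illustration above rather than taken from Table 1.

```python
# Sketch of example collection with NLTK's WordNet interface (a newer WordNet
# than the 1.7 release used in the paper, so the example sets may differ).
# The relation choices and recursion policy below are assumptions based on the
# "faithful" example in the text, not a transcription of Table 1.
from nltk.corpus import wordnet as wn

def hypernym_closure(synset):
    """All ancestor synsets reached by repeated application of the hypernym relation."""
    seen, stack = set(), list(synset.hypernyms())
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(s.hypernyms())
    return seen

def collect_examples(synset):
    """Examples from the synset itself plus those of related synsets."""
    related = {synset}
    related |= hypernym_closure(synset)          # transitive relation
    related |= set(synset.similar_tos())         # adjective similarity
    related |= set(synset.also_sees())           # "also see"
    for lemma in synset.lemmas():                # antonyms are lemma-level links
        related |= {ant.synset() for ant in lemma.antonyms()}
    return [ex for s in related for ex in s.examples()]

# e.g. examples gathered for one sense of "faithful"
print(collect_examples(wn.synsets("faithful", pos=wn.ADJ)[0]))
```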
|
{ |
|
"text": "Each example is compared to the context. Consider the first example in Table 2 , a man constant in adherence to his ideals. Since each example contains a word being defmed, the systems consider that this word matches the target word, so constant is assumed to match faithfUl. Call this word the example anchor.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 71, |
|
"end": 78, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Examples to the Context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The remaining words of the example are compared to the words surrounding the target word. The comparison begins with the word to 80 Synset Words Example constant a man constant in adherence to his ideals a constant lover constant as the northern star faithful . !years of faithful service _faithful employees we do not doubt that England has a faithful !patriot in the Lord Chancellor firm, loyal, \"the true-hearted soldier ... ofTippecanoe\"truehearted, Campaign song fur William Henry Harrison; fast a firm ally loyal supporters fast friends true true believers bonded together against all who disagreed with them the story is true \"it is undesirable to believe a proposition when there is no ground whatever for supposing it true\" -B. Russell; the true meaning of the statement trustworthy a trustworthy report an experienced and trustworthy traveling companion unfaithful an unfuithfullover Table 2 Examples Relate to Synsetfaithful-steadfast in affection or allegiance the left of the example anchor followed by the word immediately to the right of the anchor, the second word to the left of the anchor, the second word to the right of the anchor, and so on. So the order of comparison of the example words is man, in, a, adherence, to, his, ideals. Each example word is compared to the unmatched context words in a similar sequence. So, for example, the example word man would first be compared to the word immediately to the left of the context word followed by the word to its left, and so on, until a match is found.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 894, |
|
"end": 901, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Examples to the Context", |
|
"sec_num": "2.2" |
|
}, |
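The alternating outward comparison order is straightforward to reproduce. In the sketch below, outward_order and find_match are illustrative helper names, and words_match stands in for the relation-based word matching described next; the printed order matches the man, in, a, adherence, to, his, ideals sequence given above.

```python
# Sketch of the alternating comparison order: the word left of the anchor,
# then the word right of it, then the second word left, and so on. The same
# ordering is reused when each example word is searched for among the
# unmatched context words around the target. Helper names are illustrative.

def outward_order(tokens, anchor):
    """Yield (index, token) pairs at increasing distance from the anchor."""
    for dist in range(1, len(tokens)):
        left, right = anchor - dist, anchor + dist
        if left >= 0:
            yield left, tokens[left]
        if right < len(tokens):
            yield right, tokens[right]

def find_match(example_word, context, target_index, matched, words_match):
    """Find the nearest unmatched context word that matches example_word."""
    for idx, ctx_word in outward_order(context, target_index):
        if idx not in matched and words_match(example_word, ctx_word):
            matched.add(idx)
            return idx
    return None

example = "a man constant in adherence to his ideals".split()
print([w for _, w in outward_order(example, example.index("constant"))])
# -> ['man', 'in', 'a', 'adherence', 'to', 'his', 'ideals']
```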
|
{ |
|
"text": "Word matches also use the WordNet relations as described in Table I . Under parent relations, two words match if they have a common ancestor.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 67, |
|
"text": "Table I", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparing Examples to the Context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Other transitive closure relations generate a match if either word appears in the other's transitive closure. The words also match if there is a direct relation between the words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Comparing Examples to the Context", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Once the words of an example have been ma!ched to the context, the result is scored. The score for all systems is computed as: (Table 3) , distance di of wi from the example anchor. In IITI, di is not considered, so a penalty calculation is independent of the word position in the example. In IIT2, di reduces penalties for wi further away from the example anchor.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 136, |
|
"text": "(Table 3)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "If an example anchor alignment with the context word is the only open-class match for an example, the example receives a zero score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
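Since neither the exact scoring formula nor the penalty values of Table 3 are reproduced in this text, the following is only an illustration of the behaviours stated above: IIT1 penalises unmatched example words independently of their position, IIT2 reduces the penalty with the distance di from the anchor, and an example whose only open-class match is the anchor alignment scores zero. The BASE and PENALTY constants and the 1/(1 + di) attenuation are assumptions, not the published formula.

```python
# Illustrative scoring sketch; the real formula and penalty values come from
# Table 3 of the paper and are not reproduced here, so BASE, PENALTY, and the
# 1/(1 + d) attenuation are assumptions. match_results covers the example
# words other than the anchor: (matched, distance_from_anchor, is_open_class).

PENALTY = 1.0   # hypothetical penalty for an unmatched example word
BASE = 1.0      # hypothetical credit per matched example word

def score_example(match_results, system="IIT2"):
    if not any(m and oc for m, _, oc in match_results):
        return 0.0                                   # only the anchor alignment matched
    score = 0.0
    for matched, dist, _ in match_results:
        if matched:
            score += BASE
        elif system == "IIT1":
            score -= PENALTY                         # position-independent penalty
        else:
            score -= PENALTY / (1.0 + dist)          # IIT2: weaker penalty far from anchor
    return max(score, 0.0)
```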
|
{ |
|
"text": "Haynes (200 1) describes these calculations in more detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "A sense of a target word receives the maximum score of the examples related to that sense. The systems suggest the sense(s) with the highest score, with multiple senses in the response in the event of ties. (If a tie occurs because the same example was included for two senses, the other senses are eliminated, the common example is dropped from the example set of the remaining senses, and the sense scores are recomputed.) If no sense receives a score greater than zero, the first sense is chosen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
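A sketch of this selection rule follows. The shape of the inputs (per-sense example scores and a rescore callback) is assumed for illustration; only the stated behaviours are reproduced: ties return multiple senses, a tie caused by one shared example drops that example and rescores the tied senses, and a zero top score falls back to the first sense.

```python
# Sketch of the sense-selection rule described above. The input shapes are
# assumptions made for illustration, not the paper's data structures.

def select_senses(senses, example_scores, rescore):
    """example_scores: {sense: [(example_id, score), ...]} for the target word.
    rescore(sense, dropped) recomputes the sense's best score without that example."""
    best = {s: max(example_scores[s], key=lambda p: p[1], default=(None, 0.0))
            for s in senses}
    top = max(score for _, score in best.values())
    if top <= 0:
        return senses[:1]                              # nothing matched: default to first sense
    winners = [s for s in senses if best[s][1] == top]
    tied_examples = {best[s][0] for s in winners}
    if len(winners) > 1 and len(tied_examples) == 1:
        # Tie caused by a single shared example: drop it and rescore the tied senses.
        dropped = next(iter(tied_examples))
        rescored = {s: rescore(s, dropped) for s in winners}
        new_top = max(rescored.values())
        winners = [s for s in winners if rescored[s] == new_top] if new_top > 0 else senses[:1]
    return winners
```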
|
{ |
|
"text": "IIT 1 and IIT2 match a context word independent of other sense assignment decisions. TableS SENSEV AL-2 English All Word Results task only) uses the IITI scoring algorithm for target words, but limits the senses of preceding context words to the sense tags already assigned.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "3 Results Table 4 and Table 5 show the results for IITl, IIT2 and IIT3 as well as that of the Lesk Baseline (English Lexical Sample task) and the best non-corpus based system, the CRL DIMAP system.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 29, |
|
"text": "Table 4 and Table 5", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The SENSEVAL-2 (2001) website presents the complete competition results as well as the CRL DIMAP and baseline system descriptions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The IITl and IIT2 performed better than the comparable baseline system but not as well as the best system in its class. The IIT3 approach improves on the performance of IITl by using its prior annotations in tagging subsequent words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Due to time constraints, the English All Word submissions only processed the first 12% of the corpus. The recall values marked * consider only those instances attempted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring the Match", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Many of the examples in WordNet were the result of lexicographers expanding synset information to clarify sense distinctions for the annotators of the Semcor corpus (Fellbaum, 1998) . This makes a compelling argument for the use of these WordNet examples to assist in a computational disambiguating process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 181, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The examples for rare word senses could be used to provide corpus-based statistical methods with additional evidence. Such an approach should help address the knowledge acquisition bottleneck.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The implementation and results presented here do not seem to justify this optimism. There are several reasons, though, why the method should not be dismissed without further investigation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 The example sets were empty for a number of the candidate word senses. When this occurred, the system constructed a pseudo example by appending the WordNet gloss to the target word. This was sufficient for most collocation senses and some noncollocation senses such as call as in calling a square dance (where the gloss includes square and dance, one of which is highly likely to occur in any use of the sense).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Others such as day as in sidereal day or turn off (gloss cause to feel intense dislike or distaste) competed at a disadvantage. \u2022 The pattern matching and scoring methods were never tuned against any corpus data. This allowed the algorithm to have few competitors in the class of untrained systems, but scoring methods relied on intuition-founded heuristics. Such tuning should improve precision and recall. \u2022 The approach was developed to be used in tandem with statistical approaches. Further research is required before its additive value can be fully assessed. IIT3 would have done better to be based on IIT2 and an approach maximizing the scores for a sentence should do even better. \u2022 The best-matching example was chosen regardless of how bad a match was involved. The system also defaulted to the first sense encountered when all examples had a zero score. Using threshold score values may well provide substantial precision improvements (at the expense of recall). \u2022 Semantic annotation of the WordNet examples should improve the results. In addition, the following programming errors affected the precision and recall results:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\u2022 The generated answers for many adjective senses (those with similarity relations)were incorrectly formatted and were therefore always scored as incorrect. For example, in the IITl entry for the English Lexical Sample, 7.1% of all annotations were incorrectly formatted. Scoring only the answers that were correctly formatted raises the course-grained precision for liT 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "to 36.7% and fme-grained precision to 26.1 %, competitive with the course-grained performance of the best non-corpus system. \u2022 No annotations were generated for target words preceded by the word to. This results in recall j precision as seen in Table 4 and Table 5 . \u2022 In a few rare cases, the system identified the incorrect example word as the example anchor. One such occurrence was the synset art, fine art and the example a fine collection of art. The system considered it an example of the fine art collocation and chose fine as the anchor.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 253, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 265, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "82", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The approach presented here does not appear to be sufficient for a stand-alone word sense disambiguation solution. Whether this method can be combined with other methods to improve their results requires further investigation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "WordNet: An Electronic Lexical Database", |
|
"authors": [ |
|
{ |
|
"first": "Christiane", |
|
"middle": [], |
|
"last": "Fellbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, ed. (1998) WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, Massachusetts", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Introduction to the Special Issue on Word Sense Disambiguation: The State of the Art. Computational Linguistics, 24/1", |
|
"authors": [ |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Ide", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Veronis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--40", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nancy Ide and Jean Veronis (1998) Introduction to the Special Issue on Word Sense Disambiguation: The State of the Art. Computational Linguistics, 24/1, pp. 1 -40.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Introduction to the Special Issue on SENSEVAL", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computers and the Humanities", |
|
"volume": "3411", |
|
"issue": "", |
|
"pages": "1--13", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Kilgarriff and Martha Palmer (2000) Introduction to the Special Issue on SENSEVAL. Computers and the Humanities 3411, pp. 1-13.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Framework and Results for English SENSEVAL", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Rosenzweig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Computers and the Humanities", |
|
"volume": "3411", |
|
"issue": "", |
|
"pages": "15--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Kilgarriff and Joseph Rosenzweig (2000) Framework and Results for English SENSEVAL. Computers and the Humanities 3411, pp. 15-48.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "WordNet: An On-line Lexical Database", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "International Journal of Lexicography", |
|
"volume": "3", |
|
"issue": "4", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George Miller, ed. (1990) WordNet: An On-line Lexical Database. International Journal of Lexicography, 3/4", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Lexical Ambiguity Resolution: Perspectives from", |
|
"authors": [ |
|
{ |
|
"first": "Yorick", |
|
"middle": [], |
|
"last": "Wilks", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yorick Wilks (1988) Forward. In \"Lexical Ambiguity Resolution: Perspectives from", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td>lists the relations</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">SENSEV AL-2 English Lexical Sample Results</td></tr><tr><td/><td>Course Grained</td><td>Fine Grained</td></tr><tr><td>System</td><td colspan=\"2\">Precision/Recall Precision/Recall</td></tr><tr><td>I!Tl All Word</td><td>29.4%129.1% ..</td><td>28.7%/28.3% ..</td></tr><tr><td>IIT2 All Word</td><td>33.5% I 33.2% *</td><td>32.8%/32.5% ..</td></tr><tr><td>IIT3 All Word</td><td>30.1%129.7%*</td><td>29.4%/29.1%*</td></tr><tr><td>Best Non-Corpus</td><td>46.0% I 46.0%</td><td>45.1%145.1%</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |