|
{ |
|
"paper_id": "S01-1014", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:35:35.265288Z" |
|
}, |
|
"title": "Supervised Sense Tagging using Support Vector Machines", |
|
"authors": [ |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Cabezas", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Maryland", |
|
"location": { |
|
"postCode": "20742", |
|
"settlement": "College Park", |
|
"region": "MD", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Maryland", |
|
"location": { |
|
"postCode": "20742", |
|
"settlement": "College Park", |
|
"region": "MD", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Jessica", |
|
"middle": [], |
|
"last": "Stevens", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Maryland", |
|
"location": { |
|
"postCode": "20742", |
|
"settlement": "College Park", |
|
"region": "MD", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe the University of Maryland's supervised sense tagger, which participated in the SENSEVAL-2 lexical sample evaluations for English, Spanish, and Swedish; we also present unofficial results for Basque. We designed a highly modular combination of language-independent feature extraction and supervised learning using support vector machines in order to permit rapid ramp-up, language independence, and capability for future expansion.", |
|
"pdf_parse": { |
|
"paper_id": "S01-1014", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe the University of Maryland's supervised sense tagger, which participated in the SENSEVAL-2 lexical sample evaluations for English, Spanish, and Swedish; we also present unofficial results for Basque. We designed a highly modular combination of language-independent feature extraction and supervised learning using support vector machines in order to permit rapid ramp-up, language independence, and capability for future expansion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The SENSEVAL-2 exercise provided an unprecedented opportunity to explore word sense disambiguation (WSD) in a common evaluation framework for a large number of languages. In past work, we have focused on unsupervised methods for English, taking advantage of the WordN et hierarchy and sometimes also selectional preferences between predicates and arguments (Resnik, 1997; Resnik, 1999) . In the current exercise, however, WordNet-like sense hierarchies were not necessarily going to be available for all languages, and the predominance of lexical selection tasks (rather than all-words tasks) suggested adopting a disambiguation approach capable of exploiting manually annotated training data. These considerations motivated a system design based on supervised learning, where senses to be predicted did not need to be treated as part of a semantic hierarchy.", |
|
"cite_spans": [ |
|
{ |
|
"start": 357, |
|
"end": 371, |
|
"text": "(Resnik, 1997;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 385, |
|
"text": "Resnik, 1999)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our design was also motivated by the role of semantic selection techniques in our longer term research agenda. In the context of our group's work on cross-language information retrieval and machine translation applications Cabezas et al., 2001) , lexical selection that is, choosing the right target-language word given a source-language word in context -is a crucial task. Because the lexical selection problem is extremely similar to sense selection, and because this was our first foray into supervised methods, we took advantage of the opportunity to construct an architecture that will support both tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 244, |
|
"text": "Cabezas et al., 2001)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the sections that follow, we lay out our system architecture, briefly summarize our SENSEVAL-2 results, and discuss our plans for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 System Architecture UMD's system follows the classic supervised learning paradigm that, for WSD, is perhaps best exemplified by Yarowsky's (1993) work. Each word in the vocabulary is considered an independent classification problem. First, annotated training instances for the ambiguous word are analyzed so that each instance can be represented as a collection of feature-value pairs labeled with the correct category. Then, these data are used for parameter estimation within a supervised learning framework in order to produce a trained classifier. Finally, the trained classifier is given previously unseen test instances and for each instance it predicts what the appropriate category label should be.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 147, |
|
"text": "Yarowsky's (1993)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We began by tokenizing all the training instances using a simple language-specific tokenizer. Features were then defined in terms of the presence of tokens either within a wide context or at a certain position to the right or left of the word being disambiguated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Features", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In detail, let T be the set of unique tokens found in the full set of training data (all training instances), plus the special token UNKNOWN, which replaces any token in test data that was never seen during training. Define F wide = T.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Features", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "A feature f E F wide will be considered present and have a non-zero value if f appears anywhere in the wide context of the word being disambiguated. For example, if we were disambiguating the word training that appears in the first sentence of this paragraph, using the entire paragraph as the wide context, then there would be non-zero values for features WE, BEGAN, and every other word in the paragraph. That is, features correspond to surrounding words. 1 Let\u00a3 = {L3,L2,Ll,Rl,R2,R3}, signifying the locations \"three tokens to the left\", \"two tokens to the left\", ... , \"three tokens to the right\", and define Fcolloc = {l:t ll E \u00a3 and t E T}. A feature l:t E Fcolloc will be considered present and have a non-zero value if token t appears at position l relative to the word being disambiguated. For example, if we were disambiguating the word training that appears in the first sentence of this section, there would be non-zero values for the features L3 : tokenizing, L 2 : all, L1: the, L1: instances, L2 :using, and L3: a.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Features", |
|
"sec_num": "2.1" |
|
}, |
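To make the two feature families concrete, here is a minimal Python sketch (illustrative only: the paper does not publish its implementation, and the helper names and UNKNOWN handling shown here are assumptions):

```python
# Illustrative sketch of the F_wide and F_colloc feature families.
# Helper names are hypothetical; the paper does not publish its code.

UNKNOWN = "UNKNOWN"  # stand-in for tokens never seen in training
OFFSETS = {"L3": -3, "L2": -2, "L1": -1, "R1": 1, "R2": 2, "R3": 3}

def extract_features(tokens, target_index, vocabulary):
    """Return the set of active features for one instance.

    tokens: the tokenized wide context (for SENSEVAL-2, the whole instance).
    target_index: position of the word being disambiguated.
    vocabulary: the token set T observed in training; unseen test tokens
        are replaced by UNKNOWN.
    """
    norm = [t if t in vocabulary else UNKNOWN for t in tokens]

    # F_wide: one feature per token occurring anywhere in the wide context.
    features = set(norm)

    # F_colloc: l:t features for tokens at fixed offsets around the target.
    for label, offset in OFFSETS.items():
        pos = target_index + offset
        if 0 <= pos < len(norm):
            features.add(f"{label}:{norm[pos]}")
    return features
```

On the tokenized sentence "We began by tokenizing all the training instances using a simple language-specific tokenizer.", with training as the target, this yields the collocational features L3:tokenizing, L2:all, L1:the, R1:instances, R2:using, and R3:a, alongside the wide-context features.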
|
{ |
|
"text": "The value associated with each feature is a weight indicating how useful the feature is likely to be in disambiguation, analogous to the term weights used in representing documents as feature vectors for information retrieval.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Weights", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In detail, let us designate the full feature set as F = Fwide U Fcolloc' and let N:F = j.Fj.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Weights", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Clearly some features are more useful than others. For example, the feature into (word into appearing anywhere in the context) is unlikely to help distinguish among senses, although the feature R1: into (word into appearing one word to the right) might be useful for disambiguating among the senses of some verbs. In order to assign weights to features based on their likely utility, we follow a strategy similar to what is done in information retrieval, defining inverse category frequency (ICF), by analogy with inverse document frequency (IDF), as a function of how many distinct categories a feature appears with in training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Weights", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Specifically, if we are disambiguating a word w with senses S = { s1, s2, ... , SNw}, then we de-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Weights", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "fine ICF wU) = -log( N ~/ N w)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Weights", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where N ~ is the number of distinct elements of S that ever cooccur with feature f in the training data for word w. For example, if a word has five senses, and the feature L 1 :the appears in some training instance for each of the five senses, then ICFw(LI :the) = -log(5/5) = 0, correctly indicating that this feature is not at all useful for disambiguating among the five senses of this word. The lower N ~ is, the greater the value of the ICF wU) value and hence the greater weight accorded this feature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Weights", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Training and test instances are represented as N :F-ary feature vectors: given a training or test instance for a word w, the vector representation is defined by vw[f] = ICF wU) if f E F is present, and zero otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Weights", |
|
"sec_num": "2.2" |
|
}, |
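Under the same assumptions, the ICF weighting and the vector construction might be sketched as follows (a sparse dict stands in for the N_F-ary vector; this is not the authors' code):

```python
import math
from collections import defaultdict

def icf_weights(training_instances, n_senses):
    """Compute ICF_w(f) = -log(N_w^f / N_w) for one target word w.

    training_instances: iterable of (feature_set, sense_label) pairs.
    n_senses: N_w, the number of senses of w.
    """
    cooccurring = defaultdict(set)  # feature -> distinct senses seen with it
    for features, sense in training_instances:
        for f in features:
            cooccurring[f].add(sense)
    return {f: -math.log(len(senses) / n_senses)
            for f, senses in cooccurring.items()}

def to_vector(features, icf):
    """Sparse v_w: the ICF weight where a feature is present, zero elsewhere."""
    return {f: icf[f] for f in features if f in icf}
```

Note that a feature cooccurring with every sense gets weight -log(N_w/N_w) = 0, matching the L1:the example above, while features restricted to fewer senses receive larger weights.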
|
{ |
|
"text": "Once training and test instances are represented as feature vectors, it becomes possible to exploit any number of existing supervised learning algorithms. In general, such algorithms take a set {(vbci),(v2,c2), ... ,(vN,cN)} of training instances, and produce a classifier that takes a feature vector v as input and return a distribution or confidence function over the possible categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learning Framework", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "For SENSEVAL-2, we selected support vector machines (SVMs) as the supervised learning framework. We were motivated by the fact that SVMs have been shown to achieve high performance and work efficiently in environments where there are very large numbers of features, and also by the existence of a good off-theshelf implementation, SVM-Light, available for research purposes (Joachims, 1999; Joachims, 1998 In the testing phase, we convert test instances for word w into feature vectors, and we then we run these vectors through the SVM classifiers for { St, s2, ... , SNw}\u2022 For each instance, we select the sense for which the SVM classifier's response is most strongly \"yes\" (or, equivalently, most weakly \"no\"). Table 1 shows the performance of UMD's supervised sense tagger (UMD-SST) for the lexical sample tasks in four languages. The figures for English, Spanish, and Swedish are official SENSEVAL-2 results; the figures for Basque are unofficial results kindly computed by the Basque task organizers after SENSEVAL-2 because our Basque responses were not submitted in time for official evaluation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 390, |
|
"text": "(Joachims, 1999;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 405, |
|
"text": "Joachims, 1998", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 714, |
|
"end": 721, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}
|
], |
|
"eq_spans": [], |
|
"section": "Learning Framework", |
|
"sec_num": "2.3" |
|
}, |
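The system itself used SVM-Light; purely to illustrate the one-classifier-per-sense scheme, here is a sketch using scikit-learn's LinearSVC as an assumed stand-in (not the authors' setup):

```python
import numpy as np
from sklearn.svm import LinearSVC  # assumed stand-in for SVM-Light

def train_per_sense(X, sense_labels):
    """Train one binary SVM per sense (one-vs-rest).

    X: array of ICF-weighted feature vectors, shape (n_instances, N_F).
    sense_labels: the annotated sense of each training instance.
    """
    classifiers = {}
    for s in set(sense_labels):
        # Instances of sense s are positive; all others are negative.
        y = np.array([1 if label == s else -1 for label in sense_labels])
        classifiers[s] = LinearSVC().fit(X, y)
    return classifiers

def predict_sense(classifiers, x):
    """Pick the sense whose classifier answers most strongly "yes"
    (equivalently, least strongly "no"), using the SVM margin."""
    scores = {s: clf.decision_function(x.reshape(1, -1))[0]
              for s, clf in classifiers.items()}
    return max(scores, key=scores.get)
```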
|
{ |
|
"text": "In general, we were quite pleased with the results, particularly since this was our first time participating in SENSEVAL. UMD-SST turned in a solid performance in comparison with the baselines and other systems, with essentially no language-specific alterations necessary other than those required for tokenization. This enabled us to participate in system evaluation for more languages than any site except JHU. We consider this a good starting point for our further investigations, which we now briefly describe.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SENSEVAL-2 Results", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Using the current system as a starting point, we are engaged in three lines of further investigation: linguistically richer contextual features, corpus-dependent expansion of feature vectors, and lexical selection via supervised learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In our preliminary tests using training and development data, we experimented first with using F wide as the feature set, and obtained significant improvements when we added Fcolloc in order to capture collocations and other local contextual features. In our follow-up efforts we plan to use broad-coverage parsing to create a set of features augmented further by grammatical relations, thus capturing collocations mediated by syntactic structure. For example, although our current feature vectors could not represent the presence of the word tagger as a nearby collocate of the word describe in the abstract of this paper, syntactically richer representations of this context for the verb describe would include the feature object='tagger'. Use of syntactic collocates will require broadcoverage parsing in all the languages of interest in order to identify grammatical relations; for this we will take advantage of our other work at Maryland on bootstrapping stochastic parsers for new languages using parallel corpora (Cabezas et al., 2001 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 1021, |
|
"end": 1042, |
|
"text": "(Cabezas et al., 2001", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In our preliminary efforts we were not surprised to find that sparseness of data was a problem. Although we expect that some improvements may be obtained by collapsing across word variants -e.g. via morphological equivalence classes or stemming -we also plan to focus our efforts on semantic expansion, using document expansion techniques we have developed in our research on cross-language information retrieval . We have implemented a variant of the architecture in which training contexts are used as queries to a comparable corpus in order to retrieve related documents. The features from these documents are then added to the context representations, providing semantically enhanced feature vectors. Evaluation of this approach using SEN-SEVAL data is in progress.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "4" |
|
}, |
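A hedged sketch of this variant, with the retrieval engine and comparable corpus abstracted behind a hypothetical search function, since the paper does not specify them:

```python
def expand_instance(features, context_tokens, search, top_k=5):
    """Semantically expand one instance's feature set via document expansion.

    search: hypothetical retrieval function mapping a query (a token list)
        to a ranked list of documents from a comparable corpus, each
        returned as a list of tokens.
    top_k: assumed cutoff on how many related documents to use.
    """
    expanded = set(features)
    for doc_tokens in search(context_tokens)[:top_k]:
        expanded.update(doc_tokens)  # retrieved tokens join the wide context
    return expanded
```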
|
{ |
|
"text": "Our third avenue of investigation focuses on the use of our supervised WSD infrastructure to address problems of lexical selection in machine translation. Empirically, there is a close relationship between sense distinctions and patterns of lexicalization across languages (Resnik and Yarowsky, 1999) . And operationally, there is no real difference between labeling a word with a sense tag from a monolingual dictionary and labeling that word with a translation from a bilingual dictionary. Using WSD techniques for lexical selection primarily requires solving two problems. The first problem is acquisition of annotated training data, and in this case large corpora of translation-labeled words in context can be created by obtaining parallel corpora, performing word-level alignment, and labeling each word with its correspondent in the other language; this problem is already solved as part of our infrastructure for research on statistical machine translation (Cabezas et al., 2001) . The second problem is one of scalability: the approach we have described requires a separate classifer for every sense (or, now, every possible word-level translation) of every source language word. This remains an open issue, but we are optimistic about rapid developments in this area since scaling up to large vocabularies is a problem shared by everybody who wishes to use supervised WSD techniques in a broad-coverage setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 300, |
|
"text": "(Resnik and Yarowsky, 1999)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 965, |
|
"end": 987, |
|
"text": "(Cabezas et al., 2001)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Future Work", |
|
"sec_num": "4" |
|
}, |
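The data-acquisition step is a straightforward transformation once alignments exist; the sketch below is illustrative, and the alignment format and names are assumptions rather than the authors' tooling:

```python
def translation_labeled_instances(bitext, alignments):
    """Turn word-aligned sentence pairs into sense-style training instances.

    bitext: list of (source_tokens, target_tokens) sentence pairs.
    alignments: per pair, a list of (i, j) links meaning source token i
        aligns to target token j (e.g., from an automatic word aligner).
    Yields (source_word, source_context, target_word) triples, where the
    target-language word plays the role of the sense label.
    """
    for (src_tokens, tgt_tokens), links in zip(bitext, alignments):
        for i, j in links:
            yield src_tokens[i], src_tokens, tgt_tokens[j]
```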
|
{ |
|
"text": "University of Maryland's sense tagger represents a classic instance of the supervised learning approach. At the same time, we have made architectural choices that promote language independence, modularity, extensibility, and scalability, and in a relatively short time period we succeeded in putting together an implementation that performs quite credibly among an impressive collection of competitors. We are encouraged by the results and we look forward to participating in further SENSEVAL exercises.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For SENSEVAL-2, we defined the surrounding context for wide contexts as being anywhere within the test instance, because instances comprised only a sentence or two. In a more general setting the context could be defined as a window of \u00b150 words, \u00b1100 words, the entire document, etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported in part by Department of Defense contract MDA90496C1250 and DARPA/ITO Cooperative Agreement N660010028910. We're very grateful to all the SENSEVAL-2 organizers and task organizers for their hard work, to Thorsten Joachims for making SVM-Light available, and to David Martinez for computing our results for Basque.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Spanish language processing at University of Maryland: Building infrastructure for multilingual applications", |
|
"authors": [ |
|
{ |
|
"first": "Clara", |
|
"middle": [], |
|
"last": "Cabezas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonnie", |
|
"middle": [], |
|
"last": "Dorr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Second International Workshop on Spanish Language Processing and Language Technologies (SLPLT-2)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Clara Cabezas, Bonnie Dorr, and Philip Resnik. 2001. Spanish language processing at Univer- sity of Maryland: Building infrastructure for multilingual applications. In Proceedings of the Second International Workshop on Span- ish Language Processing and Language Tech- nologies (SLPLT-2), Jaen, Spain, September.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Trends and controversies: Support vector machines", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Marti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Hearst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "IEEE Intelligent Systems", |
|
"volume": "13", |
|
"issue": "4", |
|
"pages": "18--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marti A. Hearst. 1998. Trends and controver- sies: Support vector machines. IEEE Intelli- gent Systems, 13( 4 ):18-28.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Text categorization with support vector machines: Learning with many relevant features", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the European Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of the European Conference on Machine Learning. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Making large-scale SVM learning practical", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Advances in Kernel Methods -Support Vector Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Joachims. 1999. Making large-scale SVM learning practical. In B. Scholkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods -Support Vector Learn- ing. MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Rapidly retargetable interactive translingual retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Gina-Anne", |
|
"middle": [], |
|
"last": "Levow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douglas", |
|
"middle": [], |
|
"last": "Oard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Human Language Technology Conference (HLT-2001}", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gina-Anne Levow, Douglas Oard, and Philip Resnik. 2001. Rapidly retargetable interac- tive translingual retrieval. In Human Lan- guage Technology Conference (HLT-2001}, San Diego, CA, March.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Natural Language Engineering", |
|
"volume": "5", |
|
"issue": "2", |
|
"pages": "113--133", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Resnik and David Yarowsky. 1999. Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation. Natural Language En- gineering, 5(2):113-133.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Improved cross-language retrieval using backoff translation", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Douglas", |
|
"middle": [], |
|
"last": "Oard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gina", |
|
"middle": [], |
|
"last": "Levow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Human Language Technology Conference (HLT-2001)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Resnik, Douglas Oard, and Gina Levow. 2001. Improved cross-language retrieval us- ing backoff translation. In Human Lan- guage Technology Conference (HLT-2001), San Diego, March.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Selectional preference and sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "ANLP Workshop on Tagging Text with Lexical Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Resnik. 1997. Selectional preference and sense disambiguation. In ANLP Work- shop on Tagging Text with Lexical Semantics, Washington, D.C., April.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Journal of Artificial Intelligence Research ( JAIR)", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "95--130", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Resnik. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial In- telligence Research ( JAIR), 11:95-130.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "One sense per collocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "ARPA Workshop on Human Language Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Yarowsky. 1993. One sense per colloca- tion. ARPA Workshop on Human Language Technology, March. Princeton.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": {} |
|
} |
|
} |