|
{ |
|
"paper_id": "Q19-1045", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:09:21.170876Z" |
|
}, |
|
"title": "Paraphrase-Sense-Tagged Sentences", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Cocos", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Pennsylvania", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Many natural language processing tasks require discriminating the particular meaning of a word in context, but building corpora for developing sense-aware models can be a challenge. We present a large resource of example usages for words having a particular meaning, called Paraphrase-Sense-Tagged Sentences (PSTS). Built on the premise that a word's paraphrases instantiate its fine-grained meanings (i.e., bug has different meanings corresponding to its paraphrases fly and microbe) the resource contains up to 10,000 sentences for each of 3 million target-paraphrase pairs where the target word takes on the meaning of the paraphrase. We describe an automatic method based on bilingual pivoting used to enumerate sentences for PSTS, and present two models for ranking PSTS sentences based on their quality. Finally, we demonstrate the utility of PSTS by using it to build a dataset for the task of hypernym prediction in context. Training a model on this automatically generated dataset produces accuracy that is competitive with a model trained on smaller datasets crafted with some manual effort.", |
|
"pdf_parse": { |
|
"paper_id": "Q19-1045", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Many natural language processing tasks require discriminating the particular meaning of a word in context, but building corpora for developing sense-aware models can be a challenge. We present a large resource of example usages for words having a particular meaning, called Paraphrase-Sense-Tagged Sentences (PSTS). Built on the premise that a word's paraphrases instantiate its fine-grained meanings (i.e., bug has different meanings corresponding to its paraphrases fly and microbe) the resource contains up to 10,000 sentences for each of 3 million target-paraphrase pairs where the target word takes on the meaning of the paraphrase. We describe an automatic method based on bilingual pivoting used to enumerate sentences for PSTS, and present two models for ranking PSTS sentences based on their quality. Finally, we demonstrate the utility of PSTS by using it to build a dataset for the task of hypernym prediction in context. Training a model on this automatically generated dataset produces accuracy that is competitive with a model trained on smaller datasets crafted with some manual effort.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Word meaning is context-dependent. Whereas lexical semantic tasks like relation prediction have been studied extensively in a non-contextual setting, applying such models to a downstream task like textual inference or question answering requires taking the full context into account. For example, it may be true that rotavirus is a type of bug, but rotavirus is not within the realm of possible answers to the question ''Which bug caused the server outage? '' Many tasks in natural language processing require discerning the meaning of polysemous words within a particular context. It can be a challenge to develop corpora for training or evaluating sense-aware models, because particular attention must be paid to making sure the distribution of instances for a given word reflects its various meanings. This paper introduces Paraphrase-Sense-Tagged Sentences (PSTS), 1 a large resource of example usages of English words having a particular meaning. Rather than assume a rigid inventory of possible senses for each word, PSTS is grounded in the idea that the many fine-grained meanings of a word are instantiated by its paraphrases. For example, the word bug has different meanings corresponding to its paraphrases fly, error, and microbe, and PSTS includes sentences where bug takes on each of these meanings (Figure 1) . Overall, the resource contains up to 10,000 sentences for each of roughly 3 million English lexical and phrasal paraphrases from the Paraphrase Database (PPDB) (Bannard and Callison-Burch, 2005; Ganitkevitch et al., 2013; Pavlick et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 457, |
|
"end": 459, |
|
"text": "''", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1485, |
|
"end": 1519, |
|
"text": "(Bannard and Callison-Burch, 2005;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1520, |
|
"end": 1546, |
|
"text": "Ganitkevitch et al., 2013;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 1547, |
|
"end": 1568, |
|
"text": "Pavlick et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1312, |
|
"end": 1322, |
|
"text": "(Figure 1)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "PSTS was compiled by automatically extracting sentences from the English side of bilingual parallel corpora using a technique inspired by bilingual pivoting (Bannard and Callison-Burch, 2005) . For instance, to find a sentence containing bug where it means fly, we select English sentences where bug is translated to the French mouche, Spanish mosca, or one of the other foreign words that bug shares as a translation with fly. Qualitative analysis of the sentences in PSTS indicates that this is a noisy process, so we implement and compare two methods for ranking sentences by the degree to which they are ''characteristic'' of their associated paraphrase meaning. When used to rank PSTS sentences, a supervised regression model trained to correlate with human judgments of sentence quality, and an unsupervised lexical substitution model (Melamud et al., 2016) lead to, Figure 1 : We assume that the fine-grained meanings of the noun bug are instantiated by its paraphrases. Example usages of bug pertaining to each paraphrase are extracted automatically via a method inspired by bilingual pivoting (Bannard and Callison-Burch, 2005 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 191, |
|
"text": "(Bannard and Callison-Burch, 2005)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 841, |
|
"end": 863, |
|
"text": "(Melamud et al., 2016)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1102, |
|
"end": 1135, |
|
"text": "(Bannard and Callison-Burch, 2005", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 873, |
|
"end": 881, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "respectively, 89% and 96% precision within the top-10 sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In Section 5 we demonstrate a use of PSTS by automatically constructing a training set for the task of hypernym prediction in context (Shwartz and Dagan, 2016; Vyas and Carpuat, 2017) . In this task, a system is presented with a pair of words and sentence-level contexts for each, and must predict whether a hypernym relation holds for that word pair in the given contexts. We automatically generate training data for this task from PSTS, creating a training set with 5 and 30 times more training instances than the two existing datasets for this task-both of which rely on manually generated resources. We train a contextual hypernym prediction model on the PSTS-derived dataset, and show that it leads to prediction accuracy that is competitive with or better than than the same model trained on the smaller training sets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 159, |
|
"text": "(Shwartz and Dagan, 2016;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 183, |
|
"text": "Vyas and Carpuat, 2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In general, there are three basic categories of techniques for generating sense-tagged corpora: manual annotation, application of supervised models for word sense disambiguation, and unsupervised methods. Manual annotation asks humans to hand-label word instances with a sense tag, assuming that the word's senses are enumerated in an underlying sense inventory (typically WordNet [Miller, 1995] ) (Edmonds and Cotton, 2001; Mihalcea et al., 2004; Petrolito and Bond, 2014) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 395, |
|
"text": "[Miller, 1995]", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 398, |
|
"end": 424, |
|
"text": "(Edmonds and Cotton, 2001;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 425, |
|
"end": 447, |
|
"text": "Mihalcea et al., 2004;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 448, |
|
"end": 473, |
|
"text": "Petrolito and Bond, 2014)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Manually sense-tagged corpora, such as SemCor (Miller et al., 1994) or OntoNotes (Weischedel et al., 2013) , can then be used to train supervised word sense disambiguation (WSD) classifiers to predict sense labels on untagged text (Ando, 2006; Zhong and Ng, 2010; Rothe and Sch\u00fctze, 2015) . Top-performing supervised WSD systems achieve roughly 74% accuracy in assigning WordNet sense labels to word instances (Ando, 2006; Rothe and Sch\u00fctze, 2015) . In shared task settings, supervised classifiers typically out-perform unsupervised WSD systems (Mihalcea et al., 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 67, |
|
"text": "(Miller et al., 1994)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 81, |
|
"end": 106, |
|
"text": "(Weischedel et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 231, |
|
"end": 243, |
|
"text": "(Ando, 2006;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 263, |
|
"text": "Zhong and Ng, 2010;", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 288, |
|
"text": "Rothe and Sch\u00fctze, 2015)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 422, |
|
"text": "(Ando, 2006;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 447, |
|
"text": "Rothe and Sch\u00fctze, 2015)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 568, |
|
"text": "(Mihalcea et al., 2004)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Within the set of unsupervised methods, one long-standing idea is to use foreign translations as proxies for sense labels of polysemous words (Brown et al., 1991; Dagan, 1991) . This is based on the assumption that a polysemous English word e will often have different translations into a target language, depending on the sense of e that is used. To borrow an example from Gale et al. (1992) , if the English word sentence is translated to the French peine (judicial sentence) in one context and the French phrase (syntactic sentence) in another, then the two instances in English can be tagged with appropriate sense labels based on a mapping from the French translations to the English sense inventory. This technique has been frequently applied to automatically generate sense-tagged corpora, in order to overcome the costliness of manual sense annotation (Gale et al., 1992; Dagan and Itai, 1994; Diab and Resnik, 2002; Ng et al., 2003; Chan and Ng, 2005; Apidianaki, 2009; Lefever et al., 2011) . Our approach to unsupervised sense tagging in this paper is related, but different. Like the translation proxy approach, our method relies on having bilingual parallel corpora. But in our case, the sense labels are grounded in English paraphrases, rather than in foreign translations. This means that our method does not require any manual mapping from foreign translations to an English sense inventory. It also enables us to generate sense-tagged examples using bitext over multiple pivot languages, without having to resolve sense mapping between languages.", |
|
"cite_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 162, |
|
"text": "(Brown et al., 1991;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 175, |
|
"text": "Dagan, 1991)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 392, |
|
"text": "Gale et al. (1992)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 860, |
|
"end": 879, |
|
"text": "(Gale et al., 1992;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 880, |
|
"end": 901, |
|
"text": "Dagan and Itai, 1994;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 902, |
|
"end": 924, |
|
"text": "Diab and Resnik, 2002;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 925, |
|
"end": 941, |
|
"text": "Ng et al., 2003;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 942, |
|
"end": 960, |
|
"text": "Chan and Ng, 2005;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 961, |
|
"end": 978, |
|
"text": "Apidianaki, 2009;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 979, |
|
"end": 1000, |
|
"text": "Lefever et al., 2011)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There is a close relationship between sense tagging and paraphrasing. Some research efforts assume that words have a discrete sense inventory, and they represent each word sense as a set or cluster of paraphrases (Miller, 1995; Cocos and Callison-Burch, 2016) . Other work (Melamud et al., 2015a) , including in lexical substitution Navigli, 2007, 2009) , represents the contextualized meaning of a word instance by the set of paraphrases that could be substituted for it. This paper takes the view that assuming a discrete underlying sense inventory can be too rigid for many applications; humans have notoriously low agreement in manual sense-tagging tasks (Cinkov\u00e1 et al., 2012) , and the appropriate sense granularity varies by setting. Instead, we assume a ''one paraphrase per fine-grained meaning'' model in this paper as a generalizable approach to word sense modeling. In PSTS, a word type has as many meanings as it has paraphrases, but its paraphrase-sense-tagged instances can be grouped based on a coarser sense inventory if so desired.", |
|
"cite_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 227, |
|
"text": "(Miller, 1995;", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 259, |
|
"text": "Cocos and Callison-Burch, 2016)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 273, |
|
"end": 296, |
|
"text": "(Melamud et al., 2015a)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 353, |
|
"text": "Navigli, 2007, 2009)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 681, |
|
"text": "(Cinkov\u00e1 et al., 2012)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For a paraphrase pair like coach\u2194trainer, PSTS includes a set of sentences S coach,trainer containing coach in its trainer sense (e.g., My coach cancelled the workout), and a set of sentences S coach,trainer containing trainer in its coach sense (e.g., It's just a sprain, according to her trainer). This section describes the method for enumerating sentences corresponding to a particular paraphrase pair for inclusion in PSTS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Constructing PSTS", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our method for extracting sentences for PSTS is inspired by bilingual pivoting (Bannard and Callison-Burch, 2005) , which discovers samelanguage paraphrases by ''pivoting'' over bilingual parallel corpora. Specifically, if the English phrases coach and trainer are each translated to the same Slovenian phrase trener in some contexts, this is taken as evidence that coach and trainer have approximately similar meaning. We apply this idea in reverse: to find English sentences where coach means trainer (as opposed to bus or railcar), we extract sentences from English-Slovenian parallel corpora where coach has been aligned to their shared translation trener.", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 113, |
|
"text": "(Bannard and Callison-Burch, 2005)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The starting point for extracting PSTS is the PPDB (Ganitkevitch et al., 2013; Pavlick et al., 2015) , a collection of over 80M lexical (one-word) and phrasal English paraphrase pairs. 2 Because Figure 2 : Extracting sentences containing the noun x = bug in its y = virus sense for PSTS set S xy . In Step 1, the set F xy of translations shared by bug and virus is enumerated. In Step 2, the translations f \u2208 F xy are ranked by P MI(y, f ), in order to prioritize bug's translations most 'characteristic' of its meaning in the virus sense. In Step 3, sentences where bug has been aligned to the French translation f = virus are extracted from bitext corpora and added to the set S xy . PPDB was built using the pivot method, it follows that each paraphrase pair x\u2194y in PPDB has at least one shared foreign translation. The paraphrases for a target word x are used as proxy labels for x's fine-grained senses.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 78, |
|
"text": "(Ganitkevitch et al., 2013;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 79, |
|
"end": 100, |
|
"text": "Pavlick et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 203, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The process for extracting PSTS sentences S x,y for x\u2194y consists of three steps: (1) finding a set F xy of shared translations for x and y, (2) prioritizing translations that are most ''characteristic'' of x's shared meaning with y, and (3) extracting sentences from bilingual parallel corpora. The process is illustrated in Figure 2 , and described in further detail below.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 325, |
|
"end": 333, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 1: Finding Shared Translations. In order to find sentences containing the English term x where it takes on its meaning as a paraphrase of y, we begin by finding the sets of foreign the same meaning, the noisy bilingual pivoting process can produce paraphrase pairs that are more loosely semantically related (i.e., meronyms, holonyms, or even antonyms). Here we take a broader definition of paraphrase to mean any pair derived from bilingual pivoting. translations for x and y, F x and F y respectively. These translations are enumerated by processing the phrase-based alignments induced between English sentences and their translations within a large, amalgamated set of English-to-foreign bitext corpora. Once the translation sets F x and F y are extracted for the individual terms, we take their intersection as the set of shared translations, F xy .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
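For concreteness, a minimal Python sketch of Step 1, assuming the word alignments have already been read into simple (English word, foreign word, language) tuples; this data layout and the helper name are illustrative, not the authors' implementation.

```python
from collections import defaultdict

def shared_translations(aligned_pairs, x, y):
    """Enumerate F_x and F_y from word-aligned bitext, then intersect to get F_xy.

    `aligned_pairs` is an assumed iterable of (english_word, foreign_word, language)
    tuples read off the phrase-based alignments.
    """
    translations = defaultdict(set)
    for eng, foreign, lang in aligned_pairs:
        translations[eng].add((foreign, lang))
    return translations[x] & translations[y]  # F_xy

# Toy example: bug and fly share the French translation 'mouche'.
pairs = [("bug", "mouche", "fr"), ("bug", "insecte", "fr"), ("fly", "mouche", "fr")]
print(shared_translations(pairs, "bug", "fly"))  # {('mouche', 'fr')}
```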
|
{ |
|
"text": "Step 2: Prioritizing Characteristic Translations. Our goal is to build S xy such that its sentences containing x are ''highly characteristic'' of x's shared meaning with y, and vice versa. However, not all pivot translations f \u2208 F xy produce equally characteristic sentences. For example, consider the paraphrase pair bug \u2194 worm. Their shared translation set, F bug,worm , includes the French terms ver (worm) and esp\u00e8ce (species), and the Chinese term (insect). In selecting sentences for S bug,worm , PSTS should prioritize English sentences where bug has been translated to the most characteristic translation for worm-ver-over the more general or esp\u00e8ce.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We propose using pointwise mutual information (PMI) as a measure to quantify the degree to which a foreign translation is ''characteristic'' of an English term. To avoid unwanted biases that might arise from the uneven distribution of languages present in our bitext corpora, we treat PMI as language-specific and use shorthand notation f l to indicate that f comes from language l. The PMI of English term e with foreign word f l can be computed based on the statistics of their alignment in bitext corpora:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "PMI(e, f l ) = p(e, f l ) p(e) \u2022 p(f l ) = p(f l |e) p(f l )", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
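A minimal sketch of how Equation (1) can be estimated from alignment counts. The count tables and their layout are assumed for illustration; as in Equation (1), the ratio is used directly without a log.

```python
from collections import Counter

def pmi(e, f, lang, align_counts, foreign_counts):
    """PMI(e, f_l) = p(f_l | e) / p(f_l), following Equation (1).

    align_counts[(e, f, lang)]: times English e is aligned to foreign f in the
    English-lang bitext; foreign_counts[(f, lang)]: occurrences of f in language
    lang. Both are maximum likelihood counts; the data layout is illustrative.
    """
    e_alignments = sum(c for (ee, _, ll), c in align_counts.items()
                       if ee == e and ll == lang)
    p_f_given_e = align_counts.get((e, f, lang), 0) / max(e_alignments, 1)

    lang_total = sum(c for (_, ll), c in foreign_counts.items() if ll == lang)
    p_f = foreign_counts.get((f, lang), 0) / max(lang_total, 1)

    return p_f_given_e / p_f if p_f > 0 else 0.0

# Toy counts: 'virus' is a characteristic translation of 'bug'; the stop word
# 'de' is frequent overall, so dividing by p(f_l) pushes its PMI down.
align_counts = Counter({("bug", "virus", "fr"): 30, ("bug", "de", "fr"): 5})
foreign_counts = Counter({("virus", "fr"): 200, ("de", "fr"): 100000})
print(pmi("bug", "virus", "fr", align_counts, foreign_counts))  # large
print(pmi("bug", "de", "fr", align_counts, foreign_counts))     # < 1
```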
|
{ |
|
"text": "The term in the numerator of the rightmost expression is the translation probability p(f l |e), which indicates the likelihood that English word e is aligned to foreign term f l in an English-l parallel corpus. Maximizing this term promotes the most frequent foreign translations for e. The term in the denominator is the likelihood of the foreign word, p(f l ). Dividing by this term down-weights the emphasis on frequent foreign words. This is especially helpful for mitigating errors due to misalignments of English words with foreign stop words or punctuation. Both p(f l |e) and p(f l ) are estimated using maximum likelihood estimates from an automatically aligned English-l parallel corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Step 3: Extracting Sentences. To extract S xy , we first order the shared translations for paraphrase pair x\u2194y, f \u2208 F xy , by decreasing P MI(y, f ). Then, for each translation f in order, we extract up to 2500 sentences from the bitext corpora where x is translated to f . This process continues until S xy reaches a maximum size of 10k sentences. Table 1 gives examples of sentences extracted for various paraphrases of the adjective hot, ordered by decreasing PMI.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 349, |
|
"end": 356, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
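A sketch of the Step 3 extraction loop under the caps described above (2500 sentences per translation, 10,000 per paraphrase pair). The `pmi_score` and `sentences_for_alignment` callables are assumed helpers standing in for the alignment index.

```python
def extract_psts_sentences(x, y, shared_trans, pmi_score, sentences_for_alignment,
                           per_translation_cap=2500, max_total=10_000):
    """Build S_xy: sentences where x is aligned to a translation it shares with y."""
    s_xy = []
    # Step 2: visit the shared translations most characteristic of y first.
    for f in sorted(shared_trans, key=lambda t: pmi_score(y, t), reverse=True):
        taken = 0
        for sentence in sentences_for_alignment(x, f):
            s_xy.append(sentence)
            taken += 1
            if taken >= per_translation_cap or len(s_xy) >= max_total:
                break
        if len(s_xy) >= max_total:
            break
    return s_xy
```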
|
{ |
|
"text": "PSTS is extracted from the same English-toforeign bitext corpora used to generate English PPDB (Ganitkevitch et al., 2013) , consisting of over 106 million sentence pairs, and spanning 22 pivot languages. Sentences are extracted for all paraphrases with a minimum PPDBSCORE 3 threshold of at least 2.0. The threshold value serves to produce a resource corresponding to the highest-quality paraphrases in PPDB, and eliminates considerable noise. In total, sentences were extracted for over 3.3M paraphrase pairs covering nouns, verbs, adverbs, and adjectives (21 part-ofspeech tags total). Table 2 gives the total number of paraphrase pairs covered and average number of sentences per pair in each direction. Results are given by macro-level part-of-speech, where, for example, N* covers part-of-speech tags NN, NNS, NNP, and NNPS, and phrasal constituent tag NP.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 122, |
|
"text": "(Ganitkevitch et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 589, |
|
"end": 596, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Sentence Extraction", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Bilingual pivoting is a noisy process (Bannard and Callison-Burch, 2005; Chan et al., 2011; Pavlick et al., 2015) . Although shared translations for each paraphrase pair were carefully selected using PMI in an attempt to mitigate noise in PSTS, the analysis of PSTS sentences that follows in this section indicates that their quality varies. Therefore, we follow the qualitative analysis by proposing and evaluating two metrics for ranking target word instances to promote those most characteristic of the associated paraphrase meaning.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 72, |
|
"text": "(Bannard and Callison-Burch, 2005;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 73, |
|
"end": 91, |
|
"text": "Chan et al., 2011;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 92, |
|
"end": 113, |
|
"text": "Pavlick et al., 2015)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PSTS Validation and Ranking", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Our primary question is whether automatically extracted PSTS sentences for a paraphrase pair truly reflect the paraphrase meaning. Specifically, for sentences like s bug where s bug \u2208 S bug,virus , does the meaning of the word bug in s bug actually reflect its shared meaning with virus?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Evaluation of PSTS", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We used human judgments to investigate this question. For a pair like bug\u2194insect, annotators were presented with a sentence containing bug from S bug,insect , and asked whether bug means roughly the same thing as insect in the sentence. The annotators chose from responses yes (the meanings are roughly similar), no (the meanings are different), unclear (there is not enough con-textual information to tell), or never (these phrases never have similar meaning). We instructed annotators to ignore grammaticality in their responses, and concentrate specifically on the semantics of the paraphrase pair.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Evaluation of PSTS", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Human annotation was run in two rounds, with the first round of annotation completed by NLP researchers, and the second (much larger) round completed by crowd workers via Amazon Mechanical Turk (MTurk). In the first round (done by NLP researchers), a batch of 240 sentenceparaphrase instances (covering lexical and phrasal noun, verb, adjective, and adverb paraphrases) corresponding to 40 hand-selected polysemous target words was presented to a group of 10 annotators, split into five teams of two. To encourage consistency, each pair of annotators worked together to annotate each instance. For redundancy, we also ensured that each instance was annotated separately by two pairs of researchers. In this first round, the annotators had inter-pair agreement of 0.41 Fleiss' kappa (after mapping all never and unclear answers to no), indicating weak agreement (Fleiss, 1971) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 861, |
|
"end": 875, |
|
"text": "(Fleiss, 1971)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Evaluation of PSTS", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "In the second round we generated 1000 sentenceparaphrase instances, and each instance was evaluated individually by seven workers on MTurk. In each MTurk assignment, we also included an instance from the first round that was annotated as unanimously yes or unanimously no by the NLP researchers in order to gauge agreement between rounds. The crowd annotators had inter-annotator agreement of 0.34 Fleiss' kappa (after mapping all never and unclear answers to no)-slightly lower than that of the NLP researchers in round 1. The crowd workers had 75% absolute agreement with the ''control'' instances inserted from the previous round.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Evaluation of PSTS", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "There was weak inter-annotator agreement in both annotation rounds. To determine why, we manually examined 100 randomly selected instances that received an even or nearly even split of yes and no responses. Most of the time (71%), annotators disagreed on the boundary between ''roughly similar'' and ''different'' meanings. For example, in ''An American cannot rent a car in Canada, drive it to the USA and then return it to Canada.'', annotators were closely split on whether the target word drive had roughly similar meaning to its paraphrase guide. Another common reason for disagreement was ambiguity of the target word within the given context (13%), as in the instance ''I think some bug may have gotten in the clean room.'' (paraphrase virus). Further disagreements occurred when the target word and paraphrase were morphologically different forms of the same lemma (6%) (''...a matter which is very close to our hearts...'' with paraphrase closely). The remaining 10% of closely split instances are generally cases where annotators did not consider all possible senses of the target word and paraphrase. For example, in ''It does not look good for the intelligence agency chief'', only four of seven crowd workers said that service was an appropriate paraphrase for its synonym agency.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Evaluation of PSTS", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To quantify the overall quality of sentences in PSTS, we calculate the average human rating for each annotated instance, where no (32.1% of all annotations), never (3.9%), and unclear (2.8%) answers are mapped to the value 0, and yes answers are mapped to the value 1. The combined results of this calculation from both rounds are given in Figure 3 . Overall, the average rating is 0.61, indicating that more sentence-paraphrase instances from PSTS are judged by humans to have similar meaning than dissimilar meaning. In general, adjectives produce higher-quality PSTS sentences than the other parts of speech. For nouns and adjectives, phrasal paraphrase pairs are judged to have higher quality than lexical paraphrase pairs. For verbs and adverbs, the results are reversed.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 340, |
|
"end": 348, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Annotation Results", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "To understand why some sentences are of poor quality, we manually examine 100 randomly selected instances with average human rating below 0.3. On close inspection, we disagreed with the low rating for 25% of the sentences (which mirrors the finding of 75% absolute agreement between expert-and crowd-annotated control instances in the second round of annotation). In those cases, either the meaning of the target in context is a rare sense of the target or paraphrase (e.g., ''the appropriation is intended to cover expenses'' with paraphrase capture), or the target word is ambiguous in its context but could be construed to match the paraphrase meaning (e.g., ''We're going to treat you as a victim in the field.'' with paraphrase discuss).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Annotation Results", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "For the truly poor-quality sentences, in roughly one third of cases the suggested PPDB paraphrase for the target word is of poor quality due to misspellings (e.g., manage\u2194mange) or other Figure 3 : Human evaluation of the degree to which a PSTS sentence from S xy containing term x reflects x's shared meaning with its paraphrase y (range 0 to 1; higher scores are better).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 195, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Annotation Results", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "noise in the bilingual pivoting process. One common source of noise was mis-tagging of the target word in context, leading to a suggested paraphrase pertaining to the wrong part of speech. For example, in the sentence ''Increase in volume was accompanied by a change to an ovaloid or elongate shape'', the target elongate, which appears as an adjective, was mis-tagged as a verb, yielding the suggested but erroneous paraphrase lie.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Annotation Results", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "The remaining poor-quality sentences (roughly 50 of the 100 examined) were cases where the target word simply did not take on its shared meaning with the suggested paraphrase. Most of these occurred due to polysemous foreign translations. For example, PSTS wrongly suggests the sentence ''...to become a part of Zimbabwe's bright and positive history'' as an example of bright taking on the meaning of high-gloss. This error happens because the shared Spanish translation, brillante, can be used with both the literal and figurative senses of bright, but highgloss only matches the literal sense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Annotation Results", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "Given the amount of variation in PSTS sentence quality, it would be useful to have a numeric quality estimate. In the formation of PSTS (Section 3) we used P MI(y, f ) of the English paraphrase y with the shared foreign translation f to estimate how characteristic a sentence containing English target word x is of its shared sense with y. But the Spearman correlation between PMI and the average human ratings for the annotated sentence-paraphrase instances is 0.23 (p < 0.01), indicating only weak positive correlation. Therefore, in order to enable selection within PSTS of the most characteristic sentences for each paraphrase pair for downstream tasks, we propose and evaluate two models to re-rank PSTS sentences in a way that better corresponds to human quality judgments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentence Quality Ranking", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The first ranking model is a supervised regression, trained to correlate with human quality judgments. Concretely, given a target word x, its paraphrase y, and a sentence s x \u2208 S x,y , the model predicts a score whose magnitude indicates how characteristic s x is of x's shared meaning with y. This task is formulated as ordinary least squares linear regression, where the dependent variable is the average human quality rating for a sentenceparaphrase instance, and the features are computed based on the input sentence and paraphrase pair. There are four groups, or types, of features used in the model that are computed for each paraphrase-sentence instance, (x\u2194y, s x \u2208 S x,y ): PPDB Features. Seven features from PPDB 2.0 for paraphrase pair x\u2194y are used as input to the model. These include the pair's PPDBSCORE, and translation and paraphrase probabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Regression Model", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Contextual Features. Three contextual features are designed to measure the distributional similarity between the target x and paraphrase y, as well as the substitutability of paraphrase y for the target x in the given sentence. They include the mean cosine similarity between word embeddings 4 for paraphrase y and tokens within a twoword context window of x in sentence s x ; the cosine similarity between context-masked embed-", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Regression Model", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "Mean contextual similarity f (y, s x ) = w\u2208W cos(v y ,v w ) |W | AddCos (Melamud et al., 2015b) f (x, y, s x ) = |W |\u2022cos(v x ,v y )+ w\u2208W cos(v y ,v w ) 2\u2022|W |", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Regression Model", |
|
"sec_num": "4.2.1" |
|
}, |
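A small numpy sketch of the two Table 3 formulas above, assuming pre-trained word embeddings are available as vectors; the toy 3-dimensional vectors are only for illustration.

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_contextual_similarity(v_y, context_vecs):
    """f(y, s_x): average cosine between paraphrase y and the context words W."""
    return sum(cos(v_y, v_w) for v_w in context_vecs) / len(context_vecs)

def add_cos(v_x, v_y, context_vecs):
    """AddCos (Melamud et al., 2015b): substitute-target fit balanced with context fit."""
    W = len(context_vecs)
    return (W * cos(v_x, v_y) + sum(cos(v_y, v_w) for v_w in context_vecs)) / (2 * W)

# Toy vectors; the actual features use pre-trained embeddings.
emb = {"bug": np.array([1.0, 0.2, 0.0]), "virus": np.array([0.9, 0.3, 0.1]),
       "caught": np.array([0.8, 0.1, 0.2]), "nasty": np.array([0.7, 0.4, 0.0])}
context = [emb["caught"], emb["nasty"]]  # two-word window around the target 'bug'
print(mean_contextual_similarity(emb["virus"], context))
print(add_cos(emb["bug"], emb["virus"], context))
```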
|
{ |
|
"text": "Context-masked embedding similarity (Vyas and Carpuat, 2017) (Vyas and Carpuat, 2017) , and the AddCos lexical substitution metric where y is the substitute, x is the target, and the context is extracted from s x (Melamud et al., 2015b) (Table 3) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 36, |
|
"end": 60, |
|
"text": "(Vyas and Carpuat, 2017)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 61, |
|
"end": 85, |
|
"text": "(Vyas and Carpuat, 2017)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 213, |
|
"end": 236, |
|
"text": "(Melamud et al., 2015b)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 237, |
|
"end": 246, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Supervised Regression Model", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "f (x, y, s x ) = cos(v x,mask , v y,mask ) v x,mask = [v x v W min ; v x v W max ; v x v W mean ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Regression Model", |
|
"sec_num": "4.2.1" |
|
}, |
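A sketch of the context-masked similarity feature above. We assume the masking operation combining the word vector with the min-, max-, and mean-pooled context vectors is an element-wise product; that reading of Vyas and Carpuat (2017) is an assumption here.

```python
import numpy as np

def context_masked(vec, context_vecs):
    """Concatenate vec masked by min-, max-, and mean-pooled context vectors.

    The element-wise product is an assumed reading of the v_x (.) v_W_min /
    v_W_max / v_W_mean terms in the formula above.
    """
    ctx = np.stack(context_vecs)
    return np.concatenate([vec * ctx.min(axis=0),
                           vec * ctx.max(axis=0),
                           vec * ctx.mean(axis=0)])

def context_masked_similarity(v_x, x_context, v_y, y_context):
    a, b = context_masked(v_x, x_context), context_masked(v_y, y_context)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```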
|
{ |
|
"text": "Syntactic Features. Five binary features indicate the coarse part-of-speech label assigned to paraphrase x \u2194 y (NN, VB, RB, or JJ), and whether x \u2194 y is a lexical or phrasal paraphrase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Regression Model", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "The features used as input to the model training process are the 16 listed above, as well as their interactions as modeled by degree-2 polynomial combinations (153 features total). During training and validation, we apply feature selection using recursive feature elimination in cross-validation (RFECV) (Guyon et al., 2002) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 304, |
|
"end": 324, |
|
"text": "(Guyon et al., 2002)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PMI. The final feature is simply P MI(y, f ).", |
|
"sec_num": null |
|
}, |
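A scikit-learn sketch of the REG model setup described above: the 16 base features expanded to degree-2 polynomial combinations (153 columns including the bias term) and pruned with RFECV around an ordinary least squares regressor. The random matrix stands in for the real feature table, which is not distributed with the paper.

```python
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = rng.rand(200, 16)   # one row per (paraphrase pair, sentence) instance
y = rng.rand(200)       # mean human quality rating in [0, 1]

reg = Pipeline([
    ("poly", PolynomialFeatures(degree=2)),      # 153 features incl. interactions
    ("rfecv", RFECV(LinearRegression(), cv=5)),  # recursive feature elimination with CV
])
reg.fit(X, y)
print("features kept:", reg.named_steps["rfecv"].n_features_)
print("REG score for one instance:", reg.predict(X[:1])[0])
```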
|
{ |
|
"text": "We train the model on the 1227 sentenceparaphrase instances that were annotated in one or both rounds of human evaluation, after ignoring instances marked as ''unclear'' by two or more workers. The quality rating for each instance is taken as the average annotator score, where no, never, and unclear answers are mapped to the value 0, and yes answers are mapped to the value 1. We refer to the predicted quality scores produced by this model as the REG(ression) score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "PMI. The final feature is simply P MI(y, f ).", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Lexical substitution (hereafter LexSub) is the task of identifying meaning-preserving substitutes for target words in context Navigli, 2007, 2009) . For example, finding valid substitutes for bug in There are plenty of places to plant a bug in her office might include microphone or listening device but not glitch. The tasks of sense tagging and LexSub are closely related, since valid substitutes for a polysemous word must adhere to the correct meaning in each instance. Indeed, early LexSub systems explicitly included sense disambiguation as part of their pipeline (McCarthy and Navigli, 2007) , and later studies have shown that performing sense disambiguation can improve the results of LexSub models and vice versa (Cocos et al., 2017; Alagi\u0107 et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 146, |
|
"text": "Navigli, 2007, 2009)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 570, |
|
"end": 598, |
|
"text": "(McCarthy and Navigli, 2007)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 723, |
|
"end": 743, |
|
"text": "(Cocos et al., 2017;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 744, |
|
"end": 764, |
|
"text": "Alagi\u0107 et al., 2018)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised LexSub Model", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "We adopt an off-the-shelf LexSub model called CONTEXT2VEC (Melamud et al., 2016) as an unsupervised sentence ranking model. CONTEXT2VEC learns word and context embeddings using a bidirectional long short-term memory model such that words and their appropriate contexts have high cosine similarity. In order to apply CONTEXT2VEC to ranking sentence-paraphrase instances, we calculate the cosine similarity between the paraphrase's CONTEXT2VEC word embedding and the context of the target word in the sentence, using a pre-trained model. 5 The resulting score is hereafter referred to as the C2V score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 80, |
|
"text": "(Melamud et al., 2016)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 537, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unsupervised LexSub Model", |
|
"sec_num": "4.2.2" |
|
}, |
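The C2V score reduces to a cosine similarity in context2vec's joint word/context space. The two embedding callables below are assumed wrappers around a pre-trained context2vec model, not part of its published API.

```python
import numpy as np

def c2v_score(tokens, target_index, paraphrase, context_embedding, word_embedding):
    """Cosine between the paraphrase's word vector and the target slot's context vector.

    `context_embedding(tokens, i)` is assumed to embed the sentence with position i
    treated as the target slot; `word_embedding(w)` returns the model's word vector.
    """
    ctx = context_embedding(tokens, target_index)
    par = word_embedding(paraphrase)
    return float(np.dot(ctx, par) / (np.linalg.norm(ctx) * np.linalg.norm(par)))
```

Ranking the sentences in a PSTS set by this score, with the paraphrase as the substitute, yields the C2V ranking used below.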
|
{ |
|
"text": "We compare the PSTS REG and C2V scoring models under two evaluation settings. First, we measure the correlation between predicted sentence scores under each model, and the average human rating for annotated sentences. Second, we compare the precision of the top-10 ranked sentences under each model based on human judgments. In the latter experiment, we also compare with a baseline LexSub-based sentence selection and ranking model in order to validate bilingual pivoting as a worthwhile sentence selection approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking Model Comparison", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "To calculate correlation between C2V model rankings and human judgments, we simply generate a C2V score for each of the 1227 humanannotated sentence-paraphrase instances. For the REG model, because the same instances were used for training, we use 5-fold cross-validation to estimate model correlation. In each fold, we first run RFECV on the training portion, then train a regression model on the selected features and predict ratings for the test portion. Table 4 : Correlation (\u03c1) of REG and C2V scores with human ratings for 1227 PSTS sentenceparaphrase instances, and precision of top-1/5/10 ranked sentences as evaluated by humans.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 458, |
|
"end": 465, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ranking Model Comparison", |
|
"sec_num": "4.2.3" |
|
}, |
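A sketch of the cross-validated correlation estimate: out-of-fold REG predictions are pooled and compared against the mean annotator ratings with Spearman's rho. The `make_model` callable is assumed to build a fresh feature-selection plus regression pipeline like the one sketched earlier.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.model_selection import KFold

def cross_validated_spearman(X, y, make_model, n_splits=5, seed=0):
    """Pool held-out predictions from k-fold CV and correlate them with ratings y."""
    preds = np.zeros(len(y), dtype=float)
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = make_model()                  # fresh pipeline per fold (RFECV + regression)
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    rho, p_value = spearmanr(preds, y)
    return rho, p_value
```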
|
{ |
|
"text": "ratings on held-out portions from each fold are compared to the mean annotator ratings, and Spearman correlation is calculated on the combined set of all instances. We calculate precision under each model by soliciting human judgments, via the same crowdsourcing interface used to gather sentence annotations in Section 4.1. Specifically, for each of 40 hand-picked polysemous target words t (10 each nouns, verbs, adjectives, and adverbs), we select two paraphrases p and ask workers to judge whether t takes on the meaning of p in the top-10 PSTS sentences from S t,p as ranked by REG or C2V.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking Model Comparison", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "We also use top-10 precision to see how our bilingual pivoting approach for enumerating meaning-specific sentences compares to a system that enumerates sentences using a LexSub model alone, without bilingual pivoting. The baseline LexSub model selects sentences containing coach in its trainer sense by scoring trainer as a substitute for coach in a large set of candidate sentences using CONTEXT2VEC, and ranking them. We consider the union of all PSTS sentence sets containing coach, S coach, * , as candidates. The top-10 scoring sentences are evaluated by humans for precision, and compared to the ranked sets of top-10 PSTS sentences under the REG and C2V models. Results are given in Table 4 . The supervised REG model produces a higher correlation (0.40) between model scores and human ratings than does the unsupervised C2V model (0.34) or the PMI metric (0.23), indicating that REG may be preferable to use in cases where sentence quality estimation for a wide quality range is needed. Although a correlation of 0.40 is not very high, it is important to note that the correlation between each individual annotator and the mean of other annotators over all target sentence-paraphrase instances was only 0.36. Thus the model predicts the mean annotator rating with roughly the same reliability as individual annotators.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 690, |
|
"end": 697, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ranking Model Comparison", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "For applications where it is necessary to choose only the highest-quality examples of target words with a specific paraphrase-aligned meaning, the C2V ranking of PSTS sentences is best. We found that 96% of top-10 ranked sentences under this model were evaluated by humans to be good examples of target words with the specified meaning, versus 89% for the REG model and 92% for the LexSub baseline. This indicates that the different methods for enumerating example sentencesbilingual pivoting (PSTS) and LexSub score-are complementary, and that combining the two produces the best results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ranking Model Comparison", |
|
"sec_num": "4.2.3" |
|
}, |
|
{ |
|
"text": "Finally, we aim to demonstrate that PSTS can be used to automatically construct a training dataset for the task of predicting hypernymy in context, without relying on manually annotated resources or a pre-trained word sense disambiguation model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hypernym Prediction in Context", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Most work on hypernym prediction has been done out of context: The input to the task is a pair of terms like (table, furniture), and the model predicts whether the second term is a hypernym of the first (in this case, it is). However, both Shwartz and Dagan (2016) and Vyas and Carpuat (2017) point out that hypernymy between two terms depends on their context. For example, the table mentioned in ''He set the glass down on the table'' is indeed a type of furniture, but in ''Results are reported in table 3.1'' it is not. This is the motivation for studying the task of predicting hypernymy within a given context, where the input to the problem is a pair of sentences each containing a target word, and the task is to predict whether a hypernym relationship holds between the two targets. Example task instances are in Table 5 . Previous work on this task has relied on either human annotation, or the existence of a manually constructed lexical semantic resource (i.e., WordNet), to generate training data. In the case of Shwartz and Dagan (2016) , who examined finegrained semantic relations in context, a dataset of 3,750 sentence pairs was compiled by auto-matically extracting sentences from Wikipedia containing target words of interest, and asking crowd workers to manually label sentence pairs with the appropriate fine-grained semantic relation. 6 Subsequently, Vyas and Carpuat (2017) studied hypernym prediction in context. They generated a larger dataset of 22k sentence pairs which used example sentences from WordNet as contexts, and WordNet's ontological structure to find sentence pairs where the presence or absence of a hypernym relationship could be inferred. This section builds on both previous works, in that we generate an even larger dataset of over 84k sentence pairs for studying hypernymy in context, and use the existing test sets for evaluation. However, unlike the previous methods, our dataset is constructed without any manual annotation or reliance on WordNet for contextual examples. Instead, we leverage the sense-specific contexts in PSTS to generate training instances automatically.", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 264, |
|
"text": "Shwartz and Dagan (2016)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 292, |
|
"text": "Vyas and Carpuat (2017)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1026, |
|
"end": 1050, |
|
"text": "Shwartz and Dagan (2016)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1358, |
|
"end": 1359, |
|
"text": "6", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1374, |
|
"end": 1397, |
|
"text": "Vyas and Carpuat (2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 822, |
|
"end": 829, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hypernym Prediction in Context", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Because PSTS can be used to query sentences containing target words with a particular finegrained sense, our hypothesis is that, given a set of term pairs having known potential semantic relations, we can use PSTS to automatically produce a large training set of sentence pairs for contextual hypernym prediction. More specifically, our goal is to generate training instances of the form:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "(t, w, c t , c w , l)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "where t is a target term, w is a possibly related term, c t and c w are contexts, or sentences, containing t and w respectively, and l is a binary label indicating whether t and w are a hyponymhypernym pair in the senses as they are expressed in contexts c t and c w . The proposed method for generating such instances from PSTS relies on WordNet (or another lexical semantic resource) only insofar as we use it to enumerate term pairs (t, w) with known semantic relation; the contexts (c t , c w ) in which these relations hold or do not are generated automatically from PSTS. Table 5 : Example instances for contextual hypernym prediction, selected from the PSTS-derived dataset.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 578, |
|
"end": 585, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The training set is deliberately constructed to include instances of the following types:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "(a) Positive instances, where (t, w) hold a hypernym relationship in contexts c t and c w (l = 1) (Table 5 , example a).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 106, |
|
"text": "(Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "(b) Negative instances, where (t, w) hold some semantic relation other than hypernymy (such as meronymy or antonymy) in contexts c t and c w (l = 0). This will encourage the model to discriminate true hypernym pairs from other semantically related pairs (Table 5, This will encourage the model to take context into account when making a prediction (Table 5 , example c).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 263, |
|
"text": "(Table 5,", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 348, |
|
"end": 356, |
|
"text": "(Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Beginning with a target word t, the procedure for generating training instances of each type from PSTS is as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Find related terms The first step is to find related terms w such that the pair (t, w) are related in WordNet with relation type r (which could be one of synonym, antonym, hypernym, hyponym, meronym, or holonym), and t \u2194 w is a paraphrase pair present in PSTS. The related terms are not constrained to be hypernyms, in order to enable generation of instances of type (b) above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Generate contextually related instances (types (a) and (b) above). Given term pair (t, w) with known relation r, generate sentence pairs where this relation is assumed to hold as follows. First, order PSTS sentences in S tw (containing target t) and S tw (containing related term w in its sense as a paraphrase of t) by decreasing quality score. Next, choose the top-k sentences from each ordered list, and select sentence pairs (c t , c w ) \u2208 S tw \u00d7 S tw where both sentences are in their respective top-k lists. Add each sentence pair to the dataset as a positive instance (l = 1) if r = hypernym, or as a negative instance (l = 0) if r is something other than the hypernym relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Generate contextually unrelated instances (type (c) above). Given term pair (t, w) with known relation r, generate sentence pairs where this relation is assumed not to hold as follows. First, pick a confounding term t that is a paraphrase of t (i.e., t \u2194 t is in PPDB), but unrelated to w in PPDB. This confounding term is designed to represent an alternative sense of t. For example, a confounding term corresponding to the term pair (t, w) =(bug, microphone) could be glitch because it represents a sense of bug that is different from bug's shared meaning with microphone. Next, select the top-k/2 sentences containing related term w in its sense as w from S w,w in terms of quality score. Choose sentence pairs (c t , c w ) \u2208 S t,w \u00d7S w,w to form negative instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
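A sketch of the instance-generation step, assuming a helper `top_sentences(a, b, k)` that returns the k best-ranked PSTS sentences containing a in its sense as a paraphrase of b; the exact pairing of sentence sets for type (c) instances follows one reading of the description above.

```python
from itertools import product

def related_instances(t, w, relation, top_sentences, k=3):
    """Types (a)/(b): pair contexts where t and w take their shared senses."""
    label = 1 if relation == "hypernym" else 0
    return [(t, w, c_t, c_w, label)
            for c_t, c_w in product(top_sentences(t, w, k), top_sentences(w, t, k))]

def unrelated_instances(t, w, confounder, top_sentences, k=3):
    """Type (c): contexts where t takes its confounder sense, so the pair is negative."""
    half = max(k // 2, 1)
    return [(t, w, c_t, c_w, 0)
            for c_t, c_w in product(top_sentences(t, confounder, half),
                                    top_sentences(w, t, half))]
```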
|
{ |
|
"text": "To form the PSTS-derived contextual hypernym prediction dataset, this process is carried out for a set of 3,558 target nouns drawn from the Shwartz and Dagan (2016) and Vyas and Carpuat (2017) datasets. For each target noun, all PPDB paraphrases that are hypernyms, hyponyms, synonyms, antonyms, co-hyponyms, or meronyms from WordNet were selected as related terms. There were k = 3 sentences selected for each target/related term pair, where the PSTS sentences were ranked by the C2V model. This process resulted in a dataset of over 84k instances, of which 32% are positive contextual hypernym pairs (type (a)). The 68% of negative pairs are made up of 38% instances where t and w hold some relation other than hypernymy in context (type (b)), and 30% instances where t and w are unrelated in the given context (type (c)).", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 164, |
|
"text": "Shwartz and Dagan (2016)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 192, |
|
"text": "Vyas and Carpuat (2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Producing a Training Set", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In order to compare the quality of the PSTSderived contextual hypernym dataset to one produced using sentences sense-tagged by a supervised WSD model, we generate a baseline training set using word instances with senses tagged by the English all-words WSD model It Makes Sense (IMS) (Zhong and Ng, 2010) . IMS is a supervised sense tagger that uses a SVM classifier operating over syntactic and contextual features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 303, |
|
"text": "(Zhong and Ng, 2010)", |
|
"ref_id": "BIBREF37" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline IMS Training Set", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "We begin by extracting an inventory of sentences pertaining to WordNet senses using IMS. Specifically, a pre-trained, off-the-shelf version of IMS 7 is used to predict WordNet 3.0 sense labels for instances of the same target nouns present in the PSTS-derived training set. The instances are drawn from the English side of the same Englishforeign bitext used to extract PSTS, so the source corpora for the PSTS-derived and IMS contextual hypernym datasets are the same. We select the top sentences for each sense of each target noun, as ranked by IMS model confidence, as a sentence inventory for each sense.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline IMS Training Set", |
|
"sec_num": "5.2" |
|
}, |
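A small sketch of assembling the per-sense sentence inventory, assuming the IMS output has already been collected into (target, sentence, sense, confidence) records; the record format is an assumption for illustration only.

```python
from collections import defaultdict

def sense_inventory(tagged, top_n=3):
    """Group IMS-tagged sentences by (target, sense) and keep the top_n most
    confident sentences per sense. `tagged` is a hypothetical iterable of
    (target, sentence, wordnet_sense, confidence) records."""
    by_sense = defaultdict(list)
    for target, sentence, sense, conf in tagged:
        by_sense[(target, sense)].append((conf, sentence))
    return {key: [s for _, s in sorted(vals, reverse=True)[:top_n]]
            for key, vals in by_sense.items()}
```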
|
{ |
|
"text": "Next,we extract training instances (t, w, c t , c w , l) using the same procedure outlined in Section 5.1. Term pairs (t, w) are selected such that t and w have related senses in WordNet, and both t and w are within the set of target nouns. Related instances are generated from the top-3 IMS-ranked sentences for the related senses of t and w, and unrelated sentences are chosen by selecting an un-related WordNet sense of t to pair with the original sense of w, and vice versa. Finally, we truncate the resulting set of training instances to match the PSTS-derived dataset in size and instance type distribution: 84k instances total, with 32% positive (contextual hypernym) pairs, 38% contextually related non-hypernym pairs, and 30% contextually unrelated pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline IMS Training Set", |
|
"sec_num": "5.2" |
|
}, |
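The final truncation step amounts to distribution-matched subsampling, sketched below with hypothetical instance records tagged by type (a), (b), or (c).

```python
import random

def match_distribution(instances, total=84000, proportions=None, seed=0):
    """Subsample typed instances to a target size and type distribution.
    `instances` is a hypothetical list of (instance, type) pairs with type in
    {"a", "b", "c"}; proportions default to the PSTS-derived dataset's mix."""
    proportions = proportions or {"a": 0.32, "b": 0.38, "c": 0.30}
    random.seed(seed)
    sampled = []
    for itype, frac in proportions.items():
        pool = [inst for inst, ty in instances if ty == itype]
        n = min(int(total * frac), len(pool))
        sampled.extend(random.sample(pool, n))
    random.shuffle(sampled)
    return sampled
```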
|
{ |
|
"text": "Having automatically generated a dataset from PSTS for studying hypernymy in context, the next steps are to adopt a contextual hypernym prediction model to train on the dataset, and then 7 https://www.comp.nus.edu.sg/\u223cnlp. to evaluate its performance on existing hypernym prediction test sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contextual Hypernym Prediction Model", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "The model adopted for predicting hypernymy in context is a fine-tuned version of the BERT pre-trained transformer model (Devlin et al., 2019) (Figure 4 ). Specifically, we use BERT in its configuration for sentence pair classification tasks, where the input consists of two tokenized sentences (c t and c w ), preceded by a [CLS] token and separated by a [SEP] token. In order to highlight the target t and related term w in each respective sentence, we surround them with left and right bracket tokens ''<'' and ''>''. The model predicts whether the sentence pair contains contextualized hypernyms or not by processing the input through a transformer encoder, and feeding the output representation of the [CLS] token through fully connected and softmax layers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 120, |
|
"end": 141, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 324, |
|
"end": 329, |
|
"text": "[CLS]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 142, |
|
"end": 151, |
|
"text": "(Figure 4", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Contextual Hypernym Prediction Model", |
|
"sec_num": "5.3" |
|
}, |
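A minimal sketch of this input construction and classification step using the Hugging Face transformers library (an assumption; the original implementation may differ), with illustrative sentences and a classification head that would still need fine-tuning on the PSTS-derived dataset before its predictions are meaningful.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def mark(sentence, word):
    """Surround the first occurrence of `word` with the < > marker tokens."""
    return sentence.replace(word, f"< {word} >", 1)

# illustrative sentences, not drawn from PSTS
c_t = mark("Rotavirus is a common bug in young children.", "bug")
c_w = mark("The microbe spreads through contaminated water.", "microbe")

# the tokenizer adds [CLS] and [SEP] and marks the two segments for BERT
inputs = tokenizer(c_t, c_w, return_tensors="pt")
logits = model(**inputs).logits                      # shape (1, 2)
prob_hypernym = torch.softmax(logits, dim=-1)[0, 1].item()
print(prob_hypernym)
```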
|
{ |
|
"text": "To test our hypothesis that PSTS can be used to generate a large, high-quality dataset for training a contextualized hypernym prediction model, we perform experiments that compare the performance of the BERT hypernym prediction model on existing test sets after training on our PSTS dataset, versus training on on datasets built using manual resources or a supervised WSD model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We use two existing test sets for contextual hypernym prediction in our experiments. The first, abbreviated S&D-binary, is a binarized version of the fine-grained semantic relation dataset from Shwartz and Dagan (2016) . The original dataset contained five relation types, but we convert all forward entailment and flipped reverse entailment instances to positive (hypernym) instances, and the rest to negative instances. The resulting dataset has 3750 instances (18% positive and 82% negative), split into train/dev/test portions of 2630/190/930 instances, respectively. The second dataset used in our experiments is WordNet Hypernyms in Context (WHiC) from Vyas and Carpuat (2017) . It contains 22,781 instances (23% positive and 77% negative), split into train/ dev/test portions of 15716/1704/5361 instances, respectively. There are two primary differences between the WHiC and S&D-binary datasets. First, S&D-binary contains negative instances where the word pair has a semantic relation other than hypernymy in the given contexts (i.e., type (b) from Table 5) whereas WHiC does not. Second, because its sentences are extracted from Wikipedia, S&D-binary contains some instances where the meaning of a word in context is ambiguous; WHiC sentences selected from WordNet are unambiguous. Our PSTS-derived contextual hypernym prediction dataset, which contains semantically related negative instances and has some ambiguous contexts (as noted in Section 4.1.1) is more similar in nature to S&D-binary.", |
|
"cite_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 218, |
|
"text": "Shwartz and Dagan (2016)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 659, |
|
"end": 682, |
|
"text": "Vyas and Carpuat (2017)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1057, |
|
"end": 1065, |
|
"text": "Table 5)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.4" |
|
}, |
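The binarization of the S&D relations can be sketched as follows; the exact relation label strings are assumptions based on the relation names mentioned here and in the footnote on the relation inventory.

```python
def binarize_sd(t, w, relation):
    """Map a fine-grained S&D relation instance onto the binary hypernym task.
    Forward entailment pairs stay positive as-is, reverse entailment pairs are
    flipped so they read (hyponym, hypernym), and all other relations are negative."""
    if relation == "forward entailment":
        return (t, w, 1)
    if relation == "reverse entailment":
        return (w, t, 1)   # flip the word order, then mark positive
    return (t, w, 0)
```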
|
{ |
|
"text": "For both the S&D-binary and WHiC datasets, we compare results of the BERT sentence pair classification model on the test portions after finetuning on the PSTS dataset, the supervised IMS baseline dataset, the original training set, or a combination of the PSTS dataset with the original training set. In order to gauge how different the datasets are from one another, we also experiment with training on S&D-binary and testing on WHiC, and vice versa. In each case we use the dataset's original dev portion for tuning the BERT model parameters (batch size, number of epochs, and learning rate). Results are reported in terms of weighted average F-Score over the positive and negative classes, and given in Table 6 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 706, |
|
"end": 713, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.4" |
|
}, |
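For reference, the reported metric corresponds to a weighted-average F1 over the two classes, sketched here with scikit-learn on toy labels (illustrative values only).

```python
from sklearn.metrics import f1_score

# toy labels only; the reported numbers come from the actual test sets
y_true = [1, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 0, 0, 1, 0, 1]
print(f1_score(y_true, y_pred, average="weighted"))
```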
|
{ |
|
"text": "In the case of S&D-binary, we find that training on the 85k-instance PSTS dataset leads to a modest improvement in test set performance of 0.6% over training on the original 2.6k-instance manually Table 6 : Performance of the BERT fine-tuned contextual hypernym prediction model on two existing test sets, segmented by training set. All results are reported in terms of weighted average F1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 197, |
|
"end": 204, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "annotated training set. Combining the PSTS and original training sets leads to a 4.2% relative performance improvement over training on the original dataset alone, and outperforms the IMS baseline built using a supervised WSD system. However, on the WHiC dataset, it turns out that training on the PSTS dataset as opposed to the original 15.7k-instance WHiC training set leads to a relative 6.7% drop in performance. But training the model on the PSTS training data leads to better performance on WHiC than training on instances produced using the output of the supervised IMS WSD system, or from training on S&D-binary. It is not surprising that the PSTS-derived training set performs better on the S&D-binary test set than it does on the WHiC test set, given the more similar composition between PSTS and S&D-binary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "We present PSTS, a resource of up to 10k English sentence-level contexts for each of over 3M paraphrase pairs. The sentences were enumerated using a variation of bilingual pivoting (Bannard and Callison-Burch, 2005) , which assumes that an English word like bug takes on the meaning of its paraphrase fly in sentences where it is translated to a shared foreign translation like mouche (fr).", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 215, |
|
"text": "(Bannard and Callison-Burch, 2005)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Human assessment of the resource shows that sentences produced by this automated process have varying quality, so we propose two methods to rank sentences by how well they reflect the meaning of the associated paraphrase pair. A supervised regression model has higher overall correlation (0.4) with human sentence quality judgments, whereas an unsupervised ranking method based on lexical substitution produces highest precision (96%) for the top-10 ranked sentences. We leveraged PSTS to automatically produce a contextualized hypernym prediction training set, without the need for a supervised sense tagging model or existing hand-crafted lexical semantic resources. To evaluate this training set, we adopted a hypernym prediction model based on the BERT transformer (Devlin et al., 2019) . We showed that this model, when trained on the large PSTS training set, achieves a slight gain of 0.6% accuracy relative to training on a smaller, manually annotated training set, without the need for manual annotations. This suggests that it is worth exploring the use of PSTS to generate sense-specific datasets for other contextualized tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 769, |
|
"end": 790, |
|
"text": "(Devlin et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://psts.io.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that although the term paraphrase is generally used to denote different words or phrases with approximately", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The PPDBSCORE is a supervised metric trained to correlate with human judgments of paraphrase quality(Pavlick et al., 2015).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For computing all contextual features, we used 300dimensional skip-gram embeddings(Mikolov et al., 2013) trained on the Annotated Gigaword corpus(Napoles et al., 2012).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://u.cs.biu.ac.il/\u223cnlp/resources/ downloads/context2vec/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this study, which included the relations equivalence, forward and reverse entailment, negation/alternation, otherrelated, and independence, hyponym-hypernym pairs were labeled as forward entailment and hypernym-hyponym pairs labeled as reverse entailment instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We are grateful for support from the Allen Institute for Artificial Intelligence (AI2) Key Scientific Challenges program and the Google Ph.D. Fellowship program. This work was also supported by DARPA under the LORELEI program (HR0011-15-C-0115). The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government.We especially thank our anonymous reviewers for their thoughtful, substantive, and constructive comments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Leveraging lexical substitutes for unsupervised word sense induction", |
|
"authors": [ |
|
{ |
|
"first": "Domagoj", |
|
"middle": [], |
|
"last": "Alagi\u0107", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan\u0161najder", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5004--5011", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Domagoj Alagi\u0107, Jan\u0160najder, and Sebastian Pad\u00f3. 2018. Leveraging lexical substitutes for unsupervised word sense induction. In Pro- ceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 5004-5011, New Orleans, LA.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Applying alternating structure optimization to word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Ando", |
|
"middle": [], |
|
"last": "Rie Kubota", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rie Kubota Ando. 2006. Applying alternating structure optimization to word sense disam- biguation. In Proceedings of the Tenth Con- ference on Computational Natural Language Learning (CoNLL), pages 77-84, New York, NY.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Data-driven semantic analysis for multilingual WSD and lexical selection in translation", |
|
"authors": [ |
|
{ |
|
"first": "Marianna", |
|
"middle": [], |
|
"last": "Apidianaki", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "77--85", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marianna Apidianaki. 2009. Data-driven semantic analysis for multilingual WSD and lexical selection in translation. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 77-85, Athens.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Paraphrasing with bilingual parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Bannard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "597--604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (ACL), pages 597-604, Ann Arbor, MI.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Word-sense disambiguation using statistical methods", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [ |
|
"A Della" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "264--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1991. Word-sense disambiguation using statistical methods. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL), pages 264-270, Berkeley, CA.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Reranking bilingually extracted paraphrases using monolingual distributional similarity", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Tsz Ping Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--42", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsz Ping Chan, Chris Callison-Burch, and Benjamin Van Durme. 2011, July. Reranking bilingually extracted paraphrases using mono- lingual distributional similarity. In Proceedings of the GEMS 2011 Workshop on GEometri- cal Models of Natural Language Semantics, pages 33-42, Edinburgh.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Scaling up word sense disambiguation via parallel texts", |
|
"authors": [ |
|
{ |
|
"first": "Yee", |
|
"middle": [], |
|
"last": "Seng Chan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1037--1042", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yee Seng Chan and Hwee Tou Ng. 2005. Scal- ing up word sense disambiguation via parallel texts. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI), pages 1037-1042, Pittsburgh, PA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Managing uncertainty in semantic tagging", |
|
"authors": [ |
|
{ |
|
"first": "Silvie", |
|
"middle": [], |
|
"last": "Cinkov\u00e1", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Holub", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Kr\u00ed\u017e", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "840--850", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silvie Cinkov\u00e1, Martin Holub, and Vincent Kr\u00ed\u017e. 2012. Managing uncertainty in semantic tagging. In Proceedings of the 13th Conference of the European Chapter of the Associa- tion for Computational Linguistics (EACL), pages 840-850, Avignon.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Word sense filtering improves embedding-based lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Cocos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marianna", |
|
"middle": [], |
|
"last": "Apidianaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "110--119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anne Cocos, Marianna Apidianaki, and Chris Callison-Burch. 2017. Word sense filtering im- proves embedding-based lexical substitution. In Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications, pages 110-119, Valencia.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Clustering paraphrases by word sense", |
|
"authors": [ |
|
{ |
|
"first": "Anne", |
|
"middle": [], |
|
"last": "Cocos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1463--1472", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anne Cocos and Chris Callison-Burch. 2016. Clustering paraphrases by word sense. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL- HLT), pages 1463-1472, San Diego, CA.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Lexical disambiguation: sources of information and their statistical realization", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ido Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "341--342", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ido Dagan. 1991. Lexical disambiguation: sources of information and their statistical realization. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL), pages 341-342, Berkeley, CA.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Word sense disambiguation using a second language monolingual corpus", |
|
"authors": [ |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alon", |
|
"middle": [], |
|
"last": "Itai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Computational Linguistics", |
|
"volume": "20", |
|
"issue": "4", |
|
"pages": "563--596", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language mo- nolingual corpus. Computational Linguistics, 20(4):563-596.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "BERT: Pretraining of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre- training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Minneapolis, MN.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "An unsupervised method for word sense tagging using parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "255--262", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mona Diab and Philip Resnik. 2002. An unsu- pervised method for word sense tagging us- ing parallel corpora. In Proceedings of 40th Annual Meeting of the Association for Com- putational Linguistics (ACL), pages 255-262, Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "SENSEVAL-2: overview", |
|
"authors": [ |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Edmonds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Cotton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philip Edmonds and Scott Cotton. 2001. SENSEVAL-2: overview. In Proceedings of SENSEVAL-2 Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 1-5, Toulouse.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Measuring nominal scale agreement among many raters", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Fleiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1971, |
|
"venue": "Psychological Bulletin", |
|
"volume": "76", |
|
"issue": "5", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Using bilingual materials to develop word sense disambiguation methods", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Gale", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Yarowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "101--112", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William A. Gale, Kenneth W. Church, and David Yarowsky. 1992. Using bilingual materials to develop word sense disambiguation methods. In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, pages 101-112, Montr\u00e9al.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "PPDB: The Paraphrase Database", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "758--764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Para- phrase Database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies (NAACL- HLT), pages 758-764, Atlanta, GA.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Gene selection for cancer classification using support vector machines", |
|
"authors": [ |
|
{ |
|
"first": "Isabelle", |
|
"middle": [], |
|
"last": "Guyon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Barnhill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Vapnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Machine Learning", |
|
"volume": "46", |
|
"issue": "", |
|
"pages": "389--422", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Isabelle Guyon, Jason Weston, Stephen Barnhill, and Vladimir Vapnik. 2002. Gene selection for cancer classification using support vector ma- chines. Machine Learning, 46(1-3):389-422.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "ParaSense or how to use parallel corpora for word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Els", |
|
"middle": [], |
|
"last": "Lefever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V\u00e9ronique", |
|
"middle": [], |
|
"last": "Hoste", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martine", |
|
"middle": [ |
|
"De" |
|
], |
|
"last": "Cock", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL): Short Papers", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "317--322", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Els Lefever, V\u00e9ronique Hoste, and Martine De Cock. 2011. ParaSense or how to use par- allel corpora for word sense disambiguation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL): Short Papers-Volume 2, pages 317-322, Portland, OR.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "SemEval-2007 Task 10: English lexical substitution task", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "48--53", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana McCarthy and Roberto Navigli. 2007. SemEval-2007 Task 10: English lexical sub- stitution task. In Proceedings of the 4th Inter- national Workshop on Semantic Evaluations (SemEval-2007), pages 48-53, Prague.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The English lexical substitution task. Language Resources and Evaluation Special Issue on Computational Semantic Analysis of Language: SemEval-2007 and Beyond", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Navigli", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "43", |
|
"issue": "", |
|
"pages": "139--159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana McCarthy and Roberto Navigli. 2009. The English lexical substitution task. Language Resources and Evaluation Special Issue on Computational Semantic Analysis of Language: SemEval-2007 and Beyond, 43(2):139-159.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Modeling word meaning in context with substitute vectors", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Melamud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Goldberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "472--482", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oren Melamud, Ido Dagan, and Jacob Goldberger. 2015a. Modeling word meaning in context with substitute vectors. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 472-482.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "context2vec: Learning generic context embedding with bidirectional LSTM", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Melamud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Goldberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning (CONLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic con- text embedding with bidirectional LSTM. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning (CONLL), pages 51-61, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A simple word embedding model for lexical substitution", |
|
"authors": [ |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Melamud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--7", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Oren Melamud, Omer Levy, and Ido Dagan. 2015b. A simple word embedding model for lexical substitution. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 1-7, Denver, CO.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "The SENSEVAL-3 english lexical sample task", |
|
"authors": [ |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Chklovski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of SENSEVAL-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rada Mihalcea, Timothy Chklovski, and Adam Kilgarriff. 2004. The SENSEVAL-3 english lexical sample task. In Proceedings of SENSEVAL-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 25-28, Barcelona.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Informa- tion Processing Systems 26, Lake Tahoe, NV.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "WordNet: A lexical database for English", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "George", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Communications of the ACM", |
|
"volume": "38", |
|
"issue": "11", |
|
"pages": "39--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Using a semantic concordance for sense identification", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Chodorow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shari", |
|
"middle": [], |
|
"last": "Landes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claudia", |
|
"middle": [], |
|
"last": "Leacock", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Thomas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Human Language Technology: Proceedings of a Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "240--243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identification. In Human Language Technol- ogy: Proceedings of a Workshop, pages 240-243, Plainsboro, NJ.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Annotated Gigaword", |
|
"authors": [ |
|
{ |
|
"first": "Courtney", |
|
"middle": [], |
|
"last": "Napoles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Gormley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "95--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Giga- word. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC- WEKEX), pages 95-100, Montr\u00e9al.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Exploiting parallel texts for word sense disambiguation: An empirical study", |
|
"authors": [ |
|
{ |
|
"first": "Bin", |
|
"middle": [], |
|
"last": "Hwee Tou Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yee Seng", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "455--462", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hwee Tou Ng, Bin Wang, and Yee Seng Chan. 2003. Exploiting parallel texts for word sense disambiguation: An empirical study. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), pages 455-462, Sapporo.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification", |
|
"authors": [ |
|
{ |
|
"first": "Ellie", |
|
"middle": [], |
|
"last": "Pavlick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL)", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "425--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellie Pavlick,Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison- Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Pro- ceedings of the 53rd Annual Meeting of the As- sociation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL) (Volume 2: Short Papers), pages 425-430, Beijing.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "A survey of WordNet annotated corpora", |
|
"authors": [ |
|
{ |
|
"first": "Tommaso", |
|
"middle": [], |
|
"last": "Petrolito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Francis", |
|
"middle": [], |
|
"last": "Bond", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Seventh Global WordNet Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "236--245", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tommaso Petrolito and Francis Bond. 2014. A survey of WordNet annotated corpora. In Proceedings of the Seventh Global WordNet Conference, pages 236-245, Tartu.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Autoextend: Extending word embeddings to embeddings for synsets and lexemes", |
|
"authors": [ |
|
{ |
|
"first": "Sascha", |
|
"middle": [], |
|
"last": "Rothe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1793--1803", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sascha Rothe and Hinrich Sch\u00fctze. 2015. Auto- extend: Extending word embeddings to embed- dings for synsets and lexemes. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (ACL) -Volume 1: Long Papers, pages 1793-1803, Beijing.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Adding context to semantic data-driven paraphrasing", |
|
"authors": [ |
|
{ |
|
"first": "Vered", |
|
"middle": [], |
|
"last": "Shwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "108--113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vered Shwartz and Ido Dagan. 2016. Adding context to semantic data-driven paraphrasing. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 108-113, Berlin.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Detecting asymmetric semantic relations in context: A case study on hypernymy detection", |
|
"authors": [ |
|
{ |
|
"first": "Yogarshi", |
|
"middle": [], |
|
"last": "Vyas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marine", |
|
"middle": [], |
|
"last": "Carpuat", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yogarshi Vyas and Marine Carpuat. 2017. Detect- ing asymmetric semantic relations in con- text: A case study on hypernymy detection. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM), pages 33-43, Vancouver.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "OntoNotes release 5.0 LDC2013T19. Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "Ralph", |
|
"middle": [], |
|
"last": "Weischedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mitchell", |
|
"middle": [], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Pradhan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lance", |
|
"middle": [], |
|
"last": "Ramshaw", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ann", |
|
"middle": [], |
|
"last": "Taylor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Kaufman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes release 5.0 LDC2013T19. Linguistic Data Consortium, Philadelphia, PA.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "It Makes Sense: A wide-coverage word sense disambiguation system for free text", |
|
"authors": [ |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hwee Tou", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the ACL 2010 System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "78--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhi Zhong and Hwee Tou Ng. 2010. It Makes Sense: A wide-coverage word sense disam- biguation system for free text. In Proceed- ings of the ACL 2010 System Demonstrations, pages 78-83, Uppsala.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "c t : People believe my folderol because I wear a black tuxedo. c w : The back is crudely constructed and is probably an addition for fancy dress. Yes (b) defendant plaintiff c t : The plaintiff had sued the defendant for defamation. c w : The court found that the plaintiff had made sufficiently full disclosure. No (c) bug microphone c t : An address error usually indicates a software bug. c w : You have to bring the microphone to my apartment. No", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "example b shows an antonym pair in context). (c) Negative instances, where (t, w) hold a known semantic relation, including possibly hypernymy, in some sense, but the contexts c t and c w are not indicative of this relation (l = 0).", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "The contextual hypernym prediction model is based on BERT(Devlin et al., 2019). Input sentences c t and c w are tokenized, prepended with a [CLS] token, and separated by a [SEP] token. The target word t in the first sentence, c t , and the related word w in the second sentence, c w , are surrounded by < and > tokens. The class label (hypernym or not) is predicted by feeding the output representation of the [CLS] token through fully-connected and softmax layers.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "Example PSTS sentence segments for the adjective x = hot as a paraphrase of y \u2208 {warm, spicy, popular}. For each example, the pivot translation f is given along with its translation probability p(f |y), foreign word probability p(f ), and PMI(y, f ).", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"text": "POS Paraphrase pairs Mean |S xy | Median |S xy |", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>N*</td><td>1.8M</td><td>856</td><td>75</td></tr><tr><td>V*</td><td>1.1M</td><td>972</td><td>54</td></tr><tr><td>R*</td><td>0.1M</td><td>1385</td><td>115</td></tr><tr><td>J*</td><td>0.3M</td><td>972</td><td>72</td></tr><tr><td>Total</td><td>3.3M</td><td>918</td><td>68</td></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"text": "Number of paraphrase pairs and sentences in PSTS by macro-level part of speech (POS). The number of sentences per pair is capped at 10k in each direction.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"text": "Contextual features used for sentence quality prediction, given paraphrase pair x\u2194y and sentence s x \u2208 S x,y . W contains words within a two-token context window of x in s x . v x is the word embedding for x. v W are vectors composed of the column-wise min/max/mean of embeddings for w \u2208 W . The symbol denotes element-wise multiplication.", |
|
"type_str": "table", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |