|
{ |
|
"paper_id": "S12-1025", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:24:38.956678Z" |
|
}, |
|
"title": "Modelling selectional preferences in a lexical hierarchy", |
|
"authors": [ |
|
{ |
|
"first": "Diarmuid\u00f3", |
|
"middle": [], |
|
"last": "S\u00e9aghdha", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Computer Laboratory University of Cambridge Cambridge", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Cambridge Cambridge", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes Bayesian selectional preference models that incorporate knowledge from a lexical hierarchy such as WordNet. Inspired by previous work on modelling with WordNet, these approaches are based either on \"cutting\" the hierarchy at an appropriate level of generalisation or on a \"walking\" model that selects a path from the root to a leaf. In an evaluation comparing against human plausibility judgements, we show that the models presented here outperform previously proposed comparable WordNet-based models, are competitive with state-of-the-art selectional preference models and are particularly wellsuited to estimating plausibility for items that were not seen in training.", |
|
"pdf_parse": { |
|
"paper_id": "S12-1025", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes Bayesian selectional preference models that incorporate knowledge from a lexical hierarchy such as WordNet. Inspired by previous work on modelling with WordNet, these approaches are based either on \"cutting\" the hierarchy at an appropriate level of generalisation or on a \"walking\" model that selects a path from the root to a leaf. In an evaluation comparing against human plausibility judgements, we show that the models presented here outperform previously proposed comparable WordNet-based models, are competitive with state-of-the-art selectional preference models and are particularly wellsuited to estimating plausibility for items that were not seen in training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The concept of selectional preference captures the intuitive fact that predicates in language have a better semantic \"fit\" for certain arguments than others. For example, the direct object argument slot of the verb eat is more plausibly filled by a type of food (I ate a pizza) than by a type of vehicle (I ate a car), while the subject slot of the verb laugh is more plausibly filled by a person than by a vegetable. Human language users' knowledge about selectional preferences has been implicated in analyses of metaphor processing (Wilks, 1978) and in psycholinguistic studies of comprehension (Rayner et al., 2004) . In Natural Language Processing, automatically acquired preference models have been shown to aid a number of tasks, including semantic role labelling (Zapirain et al., 2009) , parsing (Zhou et al., 2011) and lexical disambiguation (Thater et al., 2010; \u00d3 S\u00e9aghdha and Korhonen, 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 535, |
|
"end": 548, |
|
"text": "(Wilks, 1978)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 598, |
|
"end": 619, |
|
"text": "(Rayner et al., 2004)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 771, |
|
"end": 794, |
|
"text": "(Zapirain et al., 2009)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 805, |
|
"end": 824, |
|
"text": "(Zhou et al., 2011)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 852, |
|
"end": 873, |
|
"text": "(Thater et al., 2010;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 874, |
|
"end": 904, |
|
"text": "\u00d3 S\u00e9aghdha and Korhonen, 2011)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "It is tempting to assume that with a large enough corpus, preference learning reduces to a simple language modelling task that can be solved by counting predicate-argument co-occurrences. Indeed, Keller and Lapata (2003) show that relatively good performance at plausibility estimation can be attained by submitting queries to a Web search engine. However, there are many scenarios where this approach is insufficient: for languages and language domains where Web-scale data is unavailable, for predicate types (e.g., inference rules or semantic roles) that cannot be retrieved by keyword search and for applications where accurate models of rarer words are required.\u00d3 S\u00e9aghdha (2010) shows that the Webbased approach is reliably outperformed by more complex models trained on smaller corpora for less frequent predicate-argument combinations. Models that induce a level of semantic representation, such as probabilistic latent variable models, have a further advantage in that they can provide rich structured information for downstream tasks such as lexical disambiguation (\u00d3 S\u00e9aghdha and Korhonen, 2011) and semantic relation mining (Yao et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 220, |
|
"text": "Keller and Lapata (2003)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 669, |
|
"end": 684, |
|
"text": "S\u00e9aghdha (2010)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1078, |
|
"end": 1106, |
|
"text": "S\u00e9aghdha and Korhonen, 2011)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1136, |
|
"end": 1154, |
|
"text": "(Yao et al., 2011)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Recent research has investigated the potential of Bayesian probabilistic models such as Latent Dirichlet Allocation (LDA) for modelling selectional preferences (\u00d3 S\u00e9aghdha, 2010; Ritter et al., 2010; Reisinger and Mooney, 2011) . These models are flexible and robust, yielding superior performance compared to previous approaches. In this paper we present a preliminary study of analogous models that make use of a lexical hierarchy (in our case the WordNet hierarchy). We describe two broad classes of probabilistic models over WordNet and how they can be implemented in a Bayesian framework. The two main potential advantages of incorporating WordNet information are: (a) improved predictions about rare and out-of-vocabulary arguments; (b) the ability to perform syntactic word sense disambiguation with a principled probabilistic model and without the need for an additional step that heuristically maps latent variables onto Word-Net senses. Focussing here on (a), we demonstrate that our models attain better performance than previously-proposed WordNet-based methods on a plausibility estimation task and are particularly wellsuited to estimating plausibility for arguments that were not seen in training and for which LDA cannot make useful predictions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 160, |
|
"end": 178, |
|
"text": "(\u00d3 S\u00e9aghdha, 2010;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 199, |
|
"text": "Ritter et al., 2010;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 227, |
|
"text": "Reisinger and Mooney, 2011)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The WordNet lexical hierarchy (Fellbaum, 1998) is one of the most-used resources in NLP, finding many applications in both the definition of tasks (e.g. the SENSEVAL/SemEval word sense disambiguation tasks) and in the construction of systems. The idea of using WordNet to define selectional preferences was first implemented by Resnik (1993) , who proposed a measure of associational strength between a semantic class s and a predicate p corresponding to a relation type r:", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 46, |
|
"text": "(Fellbaum, 1998)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 341, |
|
"text": "Resnik (1993)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "A(s, p, r) = 1 \u03b7 P (s|p, r) log 2 P (s|p, r) P (s|r)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
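
{

"text": "A minimal sketch (Python, invented counts) of how the association measure in (1) can be computed for one relation type; the maximum-likelihood estimation of P(s|p, r) and P(s|r) and the normalisation by \u03b7 are assumptions of this illustration, not a description of Resnik's implementation:\n\nfrom collections import defaultdict\nfrom math import log2\n\n# Hypothetical (predicate, class) counts for the direct-object relation.\ncounts = {('eat', 'food'): 80, ('eat', 'substance'): 15, ('eat', 'artifact'): 5,\n          ('drive', 'artifact'): 50, ('drive', 'substance'): 5}\n\ntotal = sum(counts.values())\nclass_totals = defaultdict(float)\npred_totals = defaultdict(float)\nfor (p, s), f in counts.items():\n    class_totals[s] += f\n    pred_totals[p] += f\n\ndef association(s, p):\n    # P(s|p, r) and P(s|r) estimated by maximum likelihood.\n    p_s_given_p = counts.get((p, s), 0) / pred_totals[p]\n    p_s = class_totals[s] / total\n    if p_s_given_p == 0.0:\n        return 0.0\n    return p_s_given_p * log2(p_s_given_p / p_s)\n\n# eta normalises the scores over classes; the unnormalised sum is the\n# predicate's selectional preference strength.\neta = sum(association(s, 'eat') for s in class_totals)\nprint({s: association(s, 'eat') / eta for s in class_totals})",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background and Related Work",

"sec_num": null

},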
|
{ |
|
"text": "where \u03b7 is a normalisation term. This measure captures the degree to which the probability of seeing s given the predicate p differs from the prior probability of s. Given that we are often interested in the preference of p for a word w rather than a class and words generally map onto multiple classes, Resnik suggests calculating A(s, p, r) for all classes that could potentially be expressed by w and predicting the maximal value. Cut-based models assume that modelling the selectional preference of a predicate involves finding the right \"level of generalisation\" in the WordNet hierarchy. For example, the direct object slot of the verb eat can be associated with the subhierarchy rooted at the synset food#n#1, as all hyponyms of that synset are assumed to be edible and the immediate hypernym of the synset, substance#n#1, is too general given that many substances are rarely eaten. 1 This leads to the notion of \"cutting\" the hierarchy at one or more positions (Li and Abe, 1998) . The modelling task then becomes that of finding the cuts that are maximally general without overgeneralising. Li and Abe (1998) propose a model in which the appropriate cut c is selected according to the Minimum Description Length principle; this principle explicitly accounts for the trade-off between generalisation and accuracy by minimising a sum of model description length and data description length. The probability of a predicate p taking as its argument an synset s is modelled as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 969, |
|
"end": 987, |
|
"text": "(Li and Abe, 1998)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1100, |
|
"end": 1117, |
|
"text": "Li and Abe (1998)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P la (s|p, r) = P (s|c s,p,r )P (c|p)", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "where c s,p,r is the portion of the cut learned for p that dominates s. The distribution P (s|c s,p,r ) is held to be uniform over all synsets dominated by c s,p,r , while P (c|p) is given by a maximum likelihood estimate. Clark and Weir (2002) present a model that, while not explicitly described as cut-based, likewise seeks to find the right level of generalisation for an observation. In this case, the hypernym at which to \"cut\" is chosen by a chi-squared test: if the aggregate preference of p for classes in the subhierarchy rooted at c differs significantly from the individual preferences of p for the immediate children of c, the hierarchy is cut below c. The probability of p taking a synset s as its argument is given by:", |
|
"cite_spans": [ |
|
{ |
|
"start": 223, |
|
"end": 244, |
|
"text": "Clark and Weir (2002)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P cw (s|p, r) = P (p|c s,p,r , r) P (s|r) P (p|r) s \u2208S P (p|c s ,p,r , r) P (s |r) P (p|r)", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
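
{

"text": "A sketch of the chi-squared decision at the core of Clark and Weir's method, using scipy.stats; the contingency counts are invented, and the real procedure applies this test repeatedly while moving up the hierarchy from the observed sense:\n\nfrom scipy.stats import chi2_contingency\n\n# Invented counts over the immediate children of a candidate node c:\n# row 1 = occurrences with the predicate p, row 2 = all other predicates.\nwith_p = [40, 35, 2]\nwith_others = [400, 380, 300]\n\nchi2, p_value, dof, expected = chi2_contingency([with_p, with_others])\n\nalpha = 0.9  # significance threshold; 0.9 was consistently best in our reimplementation\nif p_value < alpha:\n    print('preferences differ across the children: cut below c')\nelse:\n    print('children look homogeneous: keep generalising upwards')",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Background and Related Work",

"sec_num": null

},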
|
{ |
|
"text": "where c s,p,r is the root node of the subhierarchy containing s that was selected for p. An alternative approach to modelling with Word-Net uses its hierarchical structure to define a Markov model with transitions from senses to senses and from senses to words. The intuition here is that each observation is generated by a \"walk\" from the root of the hierarchy to a leaf node and emitting the word corresponding to the leaf. Abney and Light (1999) proposed such a model for selectional preferences, trained via EM, but failed to achieve competitive performance on a pseudodisambiguation task. Much recent work on preference learning has focused on purely distributional methods that do not use a predefined hierarchy but learn to make generalisations about predicates and arguments from corpus observations alone. These methods can be vectorbased (Erk et al., 2010; Thater et al., 2010) , discriminative (Bergsma et al., 2008) or probabilistic (\u00d3 S\u00e9aghdha, 2010; Ritter et al., 2010; Reisinger and Mooney, 2011) . In the probabilistic category, Bayesian models based on the \"topic modelling\" framework (Blei et al., 2003b) have been shown to achieve state-of-the-art performance in a number of evaluation settings; the models considered in this paper are also related to this framework.", |
|
"cite_spans": [ |
|
{ |
|
"start": 426, |
|
"end": 448, |
|
"text": "Abney and Light (1999)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 866, |
|
"text": "(Erk et al., 2010;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 867, |
|
"end": 887, |
|
"text": "Thater et al., 2010)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 905, |
|
"end": 927, |
|
"text": "(Bergsma et al., 2008)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 945, |
|
"end": 963, |
|
"text": "(\u00d3 S\u00e9aghdha, 2010;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 964, |
|
"end": 984, |
|
"text": "Ritter et al., 2010;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 985, |
|
"end": 1012, |
|
"text": "Reisinger and Mooney, 2011)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1103, |
|
"end": 1123, |
|
"text": "(Blei et al., 2003b)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In machine learning, researchers have proposed a variety of topic modelling methods where the latent variables are arranged in a hierarchical structure (Blei et al., 2003a; Mimno et al., 2007) . In contrast to the present work, these models use a relatively shallow hierarchy (e.g., 3 levels) and any hierarchy node can in principle emit any vocabulary item; they thus provide a poor match for our goal of modelling over WordNet. Boyd-Graber et al. (2007) describe a topic model that is directly influenced by Abney and Light's Markov model approach; this model (LDAWN) is described further in Section 3.3 below. Reisinger and Pa\u015fca (2009) investigate Bayesian methods for attaching attributes harvested from the Web at an appropriate level in the WordNet hierarchy; this task is related in spirit to the preference learning task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 172, |
|
"text": "(Blei et al., 2003a;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 173, |
|
"end": 192, |
|
"text": "Mimno et al., 2007)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 430, |
|
"end": 455, |
|
"text": "Boyd-Graber et al. (2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 613, |
|
"end": 639, |
|
"text": "Reisinger and Pa\u015fca (2009)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Probabilistic modelling over WordNet", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We assume that we have a lexical hierarchy in the form of a directed acyclic graph G = (S, E) where each node (or synset) s \u2208 S is associated with a set of words W n belonging to a large vocabulary V . Each edge e \u2208 E leads from a node n to its children (or hyponyms) Ch n . As G is a DAG, a node may have more than one parent but there are no cycles. The ultimate goal is to learn a distribution over the argument vocabulary V for each predicate p in a set P , through observing predicate-argument pairs. A predicate is understood to correspond to a pairing of a lexical item v and a relation type r, for example p = (eat, direct object). The list of observations for a predicate p is denoted by Observations(p).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation", |
|
"sec_num": "3.1" |
|
}, |
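
{

"text": "A hypothetical sketch (Python) of the data structures this notation suggests, with a three-node toy hierarchy standing in for WordNet; this is illustrative only, not the authors' code:\n\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass Node:\n    name: str                                     # synset identifier, e.g. 'food#n#1'\n    words: set = field(default_factory=set)       # W_s: words the synset can express\n    children: list = field(default_factory=list)  # Ch_s: hyponym nodes\n\nroot = Node('entity#n#1')\nfood = Node('food#n#1', words={'pizza', 'bread'})\nbeverage = Node('beverage#n#1', words={'tea', 'wine'})\nroot.children = [food, beverage]\n\n# A predicate pairs a lexical item with a relation type; its observations are argument tokens.\nobservations = {('eat', 'direct object'): ['pizza', 'bread', 'pizza']}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Notation",

"sec_num": null

},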
|
{ |
|
"text": "Model 1 Generative story for WN-CUT", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "for cut c \u2208 {1 . . . |C|} do \u03a6 c \u223c M ultinomial(\u03b2 c ) end for for predicate p \u2208 {1 . . . |P |} do \u03b8 p \u223c Dirichlet(\u03b1) for argument instance i \u2208 Observations(p) do c i \u223c M ultinomial(\u03b8 p ) w i \u223c M ultinomial(\u03a6 c i ) end for end for", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
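
{

"text": "A runnable sketch of this generative story as forward sampling with numpy (inference is a separate matter, treated below); the cut inventory and per-cut vocabularies are toy stand-ins for the WordNet-derived ones:\n\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ncut_words = {0: ['pizza', 'bread', 'soup'], 1: ['car', 'bus']}  # W_c for each cut\nn_cuts, alpha, beta = len(cut_words), 0.5, 0.1\n\n# Phi_c ~ Dirichlet(beta): emission distribution over the words under cut c.\nphi = {c: rng.dirichlet([beta] * len(ws)) for c, ws in cut_words.items()}\n\ndef generate(n_args):\n    theta = rng.dirichlet([alpha] * n_cuts)  # theta_p ~ Dirichlet(alpha)\n    args = []\n    for _ in range(n_args):\n        c = rng.choice(n_cuts, p=theta)          # c_i ~ Multinomial(theta_p)\n        w = rng.choice(cut_words[c], p=phi[c])   # w_i ~ Multinomial(Phi_{c_i})\n        args.append(w)\n    return args\n\nprint(generate(5))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cut-based models",

"sec_num": null

},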
|
{ |
|
"text": "The first model we consider, WN-CUT, is directly inspired by Li and Abe's model (2). Each predicate p is associated with a distribution over \"cuts\", i.e., complete subgraphs of G rooted at a single node and containing all nodes dominated by the root. It follows that the number of possible cuts is the same as the number of synsets. Each cut c is associated with a non-uniform distribution over the set of words W c that can be generated by the synsets contained in c. As well as the use of a non-uniform emission distribution and the placing of Dirichlet priors on the multinomial over cuts, a significant difference from Li and Abe's model is that overlapping cuts are permitted (indeed, every cut has non-zero probability given a predicate). For example, the model may learn that the direct object slot of eat gives high probability to the cut rooted at food#n#1 but also that the cut rooted at substance#n#1 has non-negligible probability, higher than that assigned to phenomenon#n#1. It follows that the estimated probability of p taking argument w takes into account all possible cuts, weighted by their probability given p.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The generative story for WN-CUT is given in Algorithm 1; this describes the assumptions made by the model about how a corpus of observations is generated. The probability of predicate p taking argument w is defined as (4); an empirical posterior estimate of this quantity can be computed from a Gibbs sampling state via (5):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "P (w|p) = c P (c|p)P (w|c) (4) \u221d c f cp + \u03b1 f \u2022p + |C|\u03b1 f wc + \u03b2 f \u2022c + |W c |\u03b2 (5)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
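
{

"text": "A sketch of the posterior predictive estimate (5) computed from hypothetical Gibbs count tables; Counter lookups return 0 for unseen pairs, which is exactly what the smoothing terms expect:\n\nfrom collections import Counter\n\n# Hypothetical sampler state.\nf_cp = Counter({(0, 'eat'): 90, (1, 'eat'): 10})     # cut/predicate counts\nf_wc = Counter({('pizza', 0): 30, ('pizza', 1): 1})  # word/cut counts\nf_dot_p = Counter({'eat': 100})                      # marginal per predicate\nf_dot_c = Counter({0: 500, 1: 200})                  # marginal per cut\ncut_vocab_size = {0: 300, 1: 150}                    # |W_c| for each cut\nalpha, beta = 0.5, 0.1\nn_cuts = len(cut_vocab_size)\n\ndef p_w_given_p(w, p):\n    total = 0.0\n    for c in cut_vocab_size:\n        p_c = (f_cp[c, p] + alpha) / (f_dot_p[p] + n_cuts * alpha)\n        p_w = (f_wc[w, c] + beta) / (f_dot_c[c] + cut_vocab_size[c] * beta)\n        total += p_c * p_w\n    return total\n\nprint(p_w_given_p('pizza', 'eat'))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cut-based models",

"sec_num": null

},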
|
{ |
|
"text": "where f cw is the number of observations containing argument w that have been assigned cut c, f cp is the number of observations containing predicate p that have been assigned cut c and f c\u2022 , f \u2022p are the marginal counts for cut c and predicate p, respectively. The two terms that are multiplied in (4) play complementary roles analogous to those of the two description lengths in Li and Abe's MDL formulation; P (c|p) will prefer to reuse more general cuts, while P (w|c) will prefer more specific cuts with a smaller associated argument vocabulary. As the number of words |W c | that can be emitted by a cut |c| varies according to the size of the subhierarchy under the cut, the proportion of probability mass accorded to the likelihood and the prior in (5) is not constant. An alternative formulation is to keep the distribution of mass between likelihood and prior constant but vary the value of the individual \u03b2 c parameters according to cut size. Experiments suggest that this alternative does not differ in performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The second cut-based model, WN-CUT-TOPICS, extends WN-CUT by adding two extra layers of latent variables. Firstly, the choice of cut is conditional on a \"topic\" variable z rather than directly conditioned on the predicate; when the topic vocabulary Z is much smaller than the cut vocabulary C, this has the effect of clustering the cuts. Secondly, Model 2 Generative story for WN-CUT-TOPICS", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "for topic z \u2208 {1 . . . |Z|} do \u03a8 z \u223c Dirichlet(\u03b1) end for for cut c \u2208 {1 . . . |C|} do \u03a6 c \u223c Dirichlet(\u03b3 c ) end for for synset s \u2208 {1 . . . |S|} do \u039e s \u223c Dirichlet(\u03b2 s ) end for for predicate p \u2208 {1 . . . |P |} do \u03b8 p \u223c Dirichlet(\u03ba) for argument instance i \u2208 Observations(p) do z i \u223c M ultinomial(\u03b8 p ) c i \u223c M ultinomial(\u03a8 z ) s i \u223c M ultinomial(\u03a6 c ) w i \u223c M ultinomial(\u039e s ) end for end for", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "instead of immediately drawing a word once a cut has been chosen, the model first draws a synset s and then draws a word from the vocabulary W s associated with that synset. This has two advantages; it directly disambiguates each observation to a specific synset rather than to a region of the hierarchy and it should also improve plausibility predictions for rare synonyms of common arguments. The generative story for WN-CUT-TOPICS is given in Algorithm 2; the distribution over arguments for p is given in (6) and the corresponding posterior estimate in (7):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (w|p) = z P (z|p) c P (c|z) s P (s|c)P (w|s) (6) \u221d z f zp + \u03ba z f \u2022p + z \u03ba z c f cz + \u03b1 f \u2022z + |C|\u03b1 \u00d7 s f sc + \u03b3 f \u2022c + |S c |\u03b3 f ws + \u03b2 f \u2022s + |W s |\u03b2", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As before, f zp , f cz , f sc and f ws are the respective co-occurrence counts of topics/predicates, cuts/topics, synsets/cuts and words/synsets in the sampling state and f \u2022p , f \u2022z , f \u2022c and f \u2022s are the corresponding marginal counts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Since WN-CUT and WN-CUT-TOPICS are constructed from multinomials with Dirichlet priors, it is relatively straightforward to train them by collapsed Gibbs sampling (Griffiths and Steyvers, 2004) , an iterative method whereby each latent variable in the model is stochastically updated according to the distribution given by conditioning on the current latent variable assignments of all other tokens. In the case of WN-CUT, this amounts to updating the cut assignment c i for each token in turn. For WN-CUT-TOPICS there are three variables to update; c i and s i must be updated simultaneously, but z i can be updated independently for the benefit of efficiency. Although WordNet contains 82,115 noun synsets, updates for c i and s i can be computed very efficiently, as there are typically few possible synsets for a given word type and few possible cuts for a given synset (the maximum synset depth is 19).", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 193, |
|
"text": "(Griffiths and Steyvers, 2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
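
{

"text": "A sketch of a single collapsed Gibbs update for a WN-CUT cut assignment, reusing the hypothetical count tables from the sketch above; candidate_cuts stands in for the WordNet lookup of cuts whose subhierarchy can emit w, and the (f_\u2022p + |C|\u03b1) denominator is omitted because it is constant across candidates:\n\nimport random\n\ndef resample_cut(w, p, current_cut, candidate_cuts,\n                 f_cp, f_wc, f_dot_p, f_dot_c, cut_vocab_size, alpha, beta):\n    # Remove this token's current assignment from the counts.\n    f_cp[current_cut, p] -= 1\n    f_wc[w, current_cut] -= 1\n    f_dot_p[p] -= 1\n    f_dot_c[current_cut] -= 1\n    # Conditional weight of each candidate cut given all other assignments.\n    weights = [(f_cp[c, p] + alpha) * (f_wc[w, c] + beta) /\n               (f_dot_c[c] + cut_vocab_size[c] * beta) for c in candidate_cuts]\n    new_cut = random.choices(candidate_cuts, weights=weights)[0]\n    # Record the new assignment.\n    f_cp[new_cut, p] += 1\n    f_wc[w, new_cut] += 1\n    f_dot_p[p] += 1\n    f_dot_c[new_cut] += 1\n    return new_cut",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Cut-based models",

"sec_num": null

},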
|
{ |
|
"text": "The hyperparameters for the various Dirichlet priors are also reestimated in the course of learning; the values of these hyperparameters control the degree of sparsity preferred by the model. The \"top-level\" hyperparameters \u03b1 in WN-CUT and \u03ba in WN-CUT-TOPICS are estimated using a fixed-point iteration proposed by Wallach (2008) ; the other hyperparameters are learned by slice sampling (Neal, 2003) . Abney and Light (1999) proposed an approach to selectional preference learning in which arguments are generated for predicates by following a path \u03bb = (l 1 , . . . , l |\u03bb| ) from the root of the hierarchy to a leaf node and emitting the corresponding word. The path is chosen according to a Markov model with transition probabilities specific to each predicate. In this model, each leaf node is associated with a single word; the synsets associated with that word are the immediate parent nodes of the leaf. Abney and Light found that their model did not match the performance of Resnik's (1993) simpler method. We have had a similar lack of success with a Bayesian version of this model, which we do not describe further here.", |
|
"cite_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 329, |
|
"text": "Wallach (2008)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 388, |
|
"end": 400, |
|
"text": "(Neal, 2003)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 403, |
|
"end": 425, |
|
"text": "Abney and Light (1999)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Cut-based models", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Boyd corpus is associated with a distribution over topics and each topic is associated with a distribution over paths. The clustering effect of the topic layer allows the documents to \"share\" information and hence alleviate problems due to sparsity. By analogy to Abney and Light, it is a short and intuitive step to apply LDAWN to selectional preference learning. The generative story for LDAWN is given in Algorithm 3; the probability model for P (w|p) is defined by (8) and the posterior estimate is (9):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Walk-based models", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (w|p) = z P (z|p) \u03bb 1[\u03bb \u2192 w]P (\u03bb|z) (8) \u221d z f zp + \u03ba z f \u2022p + z \u03ba z \u03bb 1[\u03bb \u2192 w]\u00d7 |\u03bb|\u22121 i=1 f z,l i \u2192l i+1 + \u03c3\u03b1 l i \u2192l i+1 f z,l i \u2192\u2022 + \u03c3", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Walk-based models", |
|
"sec_num": "3.3" |
|
}, |
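
{

"text": "A sketch of the path term inside (9): scoring one root-to-leaf path \u03bb under a topic z from smoothed transition counts; the counts, prior and path are invented, and the full estimate sums this quantity over every path with 1[\u03bb \u2192 w] = 1:\n\n# Hypothetical transition counts f_{z, l_i -> l_i+1} for one topic z.\ntrans = {('entity', 'food'): 12.0, ('food', 'pizza'): 7.0}\nout_total = {'entity': 40.0, 'food': 20.0}                 # f_{z, l_i -> .}\nprior = {('entity', 'food'): 0.3, ('food', 'pizza'): 0.5}  # alpha_{l_i -> l_i+1}\nsigma = 10.0  # strength parameter scaling the corpus-derived prior\n\ndef path_prob(path):\n    prob = 1.0\n    for a, b in zip(path, path[1:]):\n        prob *= (trans.get((a, b), 0.0) + sigma * prior.get((a, b), 0.0)) / (out_total.get(a, 0.0) + sigma)\n    return prob\n\nprint(path_prob(['entity', 'food', 'pizza']))  # P(lambda|z), up to the topic mixture term",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Walk-based models",

"sec_num": null

},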
|
{ |
|
"text": "where 1[\u03bb \u2192 w] = 1 when the path \u03bb leads to leaf node w and has value 0 otherwise. Following Boyd-Graber et al. the Dirichlet priors on the transition probabilities are parameterised by the product of a strength parameter \u03c3 and a distribution \u03b1 s , the latter being fixed according to relative corpus frequencies to \"guide\" the model towards more fruitful paths. Gibbs sampling updates for LDAWN are given in Boyd-Graber et al. (2007) ", |
|
"cite_spans": [ |
|
{ |
|
"start": 409, |
|
"end": 434, |
|
"text": "Boyd-Graber et al. (2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Walk-based models", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We evaluate our methods by comparing their predictions to human judgements of predicate-argument plausibility. This is a standard approach to selectional preference evaluation (Keller and Lapata, 2003; Brockmann and Lapata, 2003; \u00d3 S\u00e9aghdha, 2010) and arguably yields a better appraisal of a model's intrinsic semantic quality than other evaluations such as pseudo-disambiguation or held-out likelihood prediction. 2 We use a set of plausibility judgements collected by Keller and Lapata (2003) . This dataset comprises 180 predicateargument combinations for each of three syntactic relations: verb-object, noun-noun modification and adjective-noun modification. The data for each relation is divided into a \"seen\" portion containing 90 combinations that were observed in the British National Corpus and an \"unseen\" portion containing 90 combinations that do not appear (though the predicates and arguments do appear separately). Plausibility judgements were elicited from a large group of human subjects, then normalised and logtransformed. Table 1 gives a representative illustration of the data. Following the evaluation in\u00d3 S\u00e9aghdha (2010) , with which we wish to compare, we use Pearson r and Spearman \u03c1 correlation coefficients as performance measures. All models were trained on the 90-million word written component of the British National Corpus, 3 lemmatised, POS-tagged and parsed with the RASP toolkit (Briscoe et al., 2006) . We removed predicates occurring with just one argument type and all tokens containing non-alphabetic characters. The resulting datasets consist of 3,587,172 verbobject observations (7,954 predicate types, 80,107 argument types), 3,732,470 noun-noun observations (68,303 predicate types, 105,425 argument types) and 3,843,346 adjective-noun observations (29,975 predicate types, 62,595 argument types).", |
|
"cite_spans": [ |
|
{ |
|
"start": 176, |
|
"end": 201, |
|
"text": "(Keller and Lapata, 2003;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 229, |
|
"text": "Brockmann and Lapata, 2003;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 247, |
|
"text": "\u00d3 S\u00e9aghdha, 2010)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 415, |
|
"end": 416, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 470, |
|
"end": 494, |
|
"text": "Keller and Lapata (2003)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 1128, |
|
"end": 1143, |
|
"text": "S\u00e9aghdha (2010)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1414, |
|
"end": 1436, |
|
"text": "(Briscoe et al., 2006)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1042, |
|
"end": 1049, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental procedure", |
|
"sec_num": "4.1" |
|
}, |
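
{

"text": "The headline numbers then come from correlating model scores with the human judgements; a minimal sketch with scipy.stats and invented values:\n\nfrom scipy.stats import pearsonr, spearmanr\n\n# Invented example: gold log-plausibility judgements and model log-probabilities.\ngold = [2.1, 1.4, 0.3, -0.5, 1.9]\nmodel = [-6.2, -7.5, -9.1, -10.4, -6.8]\n\nr, _ = pearsonr(gold, model)\nrho, _ = spearmanr(gold, model)\nprint('Pearson r = %.3f, Spearman rho = %.3f' % (r, rho))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental procedure",

"sec_num": null

},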
|
{ |
|
"text": "All the Bayesian models were trained by Gibbs sampling, as outlined above. For each model we run three sampling chains for 1,000 iterations and average the plausibility predictions for each to produce a final prediction P (w|p) for each predicate-argument item. As the evaluation demands an estimate of the joint probability P (w, p) we multiply the predicted P (w|p) by a predicate probability P (p|r) estimated from relative corpus frequencies. In training we use a burn-in period of 200 iterations, after which hyperparameters are reestimated and P (p|r) predictions are sampled every 50 iterations. All probability estimates are log-transformed to match the gold standard judgements.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental procedure", |
|
"sec_num": "4.1" |
|
}, |
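
{

"text": "A sketch of how a final score for one test item is assembled as just described: averaging the per-chain estimates of P(w|p), multiplying by a corpus-estimated P(p|r) and taking logs; all numbers are placeholders:\n\nimport numpy as np\n\nchain_preds = np.array([1.2e-4, 0.9e-4, 1.1e-4])  # P(w|p) from three chains\np_w_given_p = chain_preds.mean()\n\nf_p, f_r = 3521, 3587172  # placeholder predicate and relation counts\np_p_given_r = f_p / f_r   # MLE estimate of P(p|r)\n\nlog_joint = np.log(p_w_given_p * p_p_given_r)  # compared against the gold judgements\nprint(log_joint)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental procedure",

"sec_num": null

},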
|
{ |
|
"text": "In order to compare against previously proposed selectional preference approaches based on Word-Net we also reimplemented the methods that performed best in the evaluation of Brockmann and Lapata (2003) : Resnik (1993) and Clark and Weir (2002) . For Resnik's model we used WordNet 2.1 rather than WordNet 3.0 as the former has multiple roots, a property that turns out to be necessary for good performance. Clark and Weir's method requires that the user specify a significance threshold \u03b1 to be used in deciding where to cut; to give it the best possible chance we tested with a range of values (0.05, 0.3, 0.6, 0.9) and report results for the best-performing setting, which consistently was \u03b1 = 0.9. One can also use different statistical hypothesis tests; again we choose the test giving the best results, which was Pearson's chi-squared test. As this method produces a probability estimate conditioned on the predicate p we multiply by a MLE estimate of P (p|r) and log-transform the result. eat food#n#1, aliment#n#1, entity#n#1, solid#n#1, food#n#2 drink fluid#n#1, liquid#n#1, entity#n#1, alcohol#n#1, beverage#n#1 appoint individual#n#1, entity#n#1, chief#n#1, being#n#2, expert#n#1 publish abstract entity#n#1, piece of writing#n#1, communication#n#2, publication#n#1 . 620 .614 .196 .222 .544 .604 .114 .125 .543 .622 .135 .102 LDA .504 .541 .558 .603 .615 .641 .636 .666 .594 .558 .468 .459 Table 3 : Results (Pearson r and Spearman \u03c1 correlations) on Keller and Lapata's (2003) plausibility data; underlining denotes the best-performing WordNet-based model, boldface denotes the overall best performance 4.2 Results Table 2 demonstrates the top cuts learned by the WN-CUT model from the verb-object training data for a selection of verbs. Table 3 gives quantitative results for the WordNet-based models under consideration, as well as results reported by\u00d3 S\u00e9aghdha (2010) for a purely distributional LDA model with 100 topics and a Maximum Likelihood Estimate model learned from the BNC. In general, the Bayesian WordNet-based models outperform the models of Resnik and Clark and Weir, and are competitive with the state-of-the-art LDA results. To test the statistical significance of performance differences we use the test proposed by Meng et al. (1992) for comparing correlated correlations, i.e., correlation scores with a shared gold standard. The differences between Bayesian WordNet models are not significant (p > 0.05, two-tailed) for any dataset or evaluation measure. However, all Bayesian models improve significantly over Resnik's and Clark and Weir's models for multiple conditions. Perhaps surprisingly, the relatively simple WN-CUT model scores the greatest number of significant improvements over both Resnik (7 out of 12 conditions) and Clark and Weir (8 out of 12) , though the other Bayesian models do follow close behind. This may suggest that the incorporation of WordNet structure into the model in itself provides much of the clustering benefit provided by an additional layer of \"topic\" latent variables. 4 In order to test the ability of the WordNet-based models to make predictions about arguments that are absent from the training vocabulary, we created an artificial out-of-vocabulary dataset by removing each of the Keller and Lapata argument words from the input corpus and retraining. 
An LDA selectional preference model will completely fail here, but we hope that the WordNet models can still make relatively accurate predictions by leveraging the additional lexical knowledge provided by the hierarchy. For example, if one knows that a tomatillo is classed as a vegetable in WordNet, one can predict a relatively high probability that it can be eaten, even though the word tomatillo does not appear in the BNC.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 202, |
|
"text": "Brockmann and Lapata (2003)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 205, |
|
"end": 218, |
|
"text": "Resnik (1993)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 223, |
|
"end": 244, |
|
"text": "Clark and Weir (2002)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 1279, |
|
"end": 1396, |
|
"text": "620 .614 .196 .222 .544 .604 .114 .125 .543 .622 .135 .102 LDA .504 .541 .558 .603 .615 .641 .636 .666 .594 .558 .468", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1463, |
|
"end": 1489, |
|
"text": "Keller and Lapata's (2003)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 2249, |
|
"end": 2267, |
|
"text": "Meng et al. (1992)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 2767, |
|
"end": 2795, |
|
"text": "Clark and Weir (8 out of 12)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 3042, |
|
"end": 3043, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1402, |
|
"end": 1409, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1628, |
|
"end": 1635, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1751, |
|
"end": 1758, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental procedure", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "As a baseline we use a BNC-trained model that Keller and Lapata's (2003) plausibility data predicts P (w, p) proportional to the MLE predicate probability P (p); a distributional LDA model will make essentially the same prediction. Clark and Weir's method does not have full coverage; if no sense s of an argument appears in the data then P (s|p) is zero for all senses and the resulting prediction is zero, which cannot be log-transformed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 72, |
|
"text": "Keller and Lapata's (2003)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental procedure", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "To sidestep this issue, unseen senses are assigned a pseudofrequency of 0.1. Results for this \"forced-OOV\" task are presented in Table 4 . WN-CUT proves the most adept at generalising to unseen arguments, attaining the best performance on 7 of 12 dataset/evaluation conditions and a statistically significant improvement over the baseline on 6. We observe that estimating the plausibility of unseen arguments for noun-noun modifiers is particularly difficult. One obvious explanation is that the training data for this relation has fewer tokens per predicate, making it more difficult to learn their preferences. A second, more hypothetical, explanation is that the ontological structure of WordNet is a relatively poor fit for the preferences of nominal modifiers; it is well-known that almost any pair of nouns can combine to produce a minimally plausible nounnoun compound (Downing, 1977) and it may be that this behaviour is ill-suited by the assumption that preferences are sparse distributions over regions of WordNet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 876, |
|
"end": 891, |
|
"text": "(Downing, 1977)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 129, |
|
"end": 136, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental procedure", |
|
"sec_num": "4.1" |
|
}, |
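
{

"text": "A sketch of the pseudo-frequency fix described above, with invented sense counts: every unseen sense receives a count of 0.1 before normalisation, so the resulting probability is non-zero and can be log-transformed:\n\nsense_counts = {'tomatillo#n#1': 0.0, 'tomatillo#n#2': 0.0}  # unseen in training\nPSEUDO = 0.1\n\nsmoothed = {s: (f if f > 0 else PSEUDO) for s, f in sense_counts.items()}\ntotal = sum(smoothed.values())\nprobs = {s: f / total for s, f in smoothed.items()}\nprint(probs)  # every sense now has non-zero probability, so log() is defined",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experimental procedure",

"sec_num": null

},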
|
{ |
|
"text": "In this paper we have presented a range of Bayesian selectional preference models that incorporate knowledge about the structure of a lexical hi-erarchy. One motivation for this work was to test the hypothesis that such knowledge can be helpful in constructing robust models that can handle rare and unseen arguments. To this end we have reported a plausibility-based evaluation in which our models outperform previously proposed WordNetbased preference models and make sensible predictions for out-of-vocabulary items. A second motivation, which we intend to explore in future work, is to apply our models in the context of a word sense disambiguation task. Previous studies have demonstrated the effectiveness of distributional Bayesian selectional preference models for predicting lexical substitutes (\u00d3 S\u00e9aghdha and Korhonen, 2011) but these models lack a principled way to map a word onto its most likely WordNet sense. The methods presented in this paper offer a promising solution to this issue. Another potential research direction is integration of semantic relation extraction algorithms with WordNet or other lexical resources, along the lines of Pennacchiotti and Pantel (2006) and Van Durme et al. (2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 1158, |
|
"end": 1175, |
|
"text": "Pennacchiotti and", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1176, |
|
"end": 1193, |
|
"text": "Pantel (2006) and", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1194, |
|
"end": 1217, |
|
"text": "Van Durme et al. (2009)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper we use WordNet version 3.0, except where stated otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For a related argument in the context of topic model evaluation, seeChang et al. (2009).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.natcorp.ox.ac.uk/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An alternative hypothesis is that samplers for the more complex models take longer to \"mix\". We have run some experiments with 5,000 iterations but did not observe an improvement in performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work in this paper was funded by the EP-SRC (UK) grant EP/G051070/1, EU grant 7FP-ITC-248064 and the Royal Society, (UK).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Hiding a semantic hierarchy in a Markov model", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marc", |
|
"middle": [], |
|
"last": "Light", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the ACL-99 Workshop on Unsupervised Learning in NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Abney and Marc Light. 1999. Hiding a semantic hierarchy in a Markov model. In Proceedings of the ACL-99 Workshop on Unsupervised Learning in NLP, College Park, MD.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Discriminative learning of selectional preferences from unlabeled text", |
|
"authors": [ |
|
{ |
|
"first": "Shane", |
|
"middle": [], |
|
"last": "Bergsma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Randy", |
|
"middle": [], |
|
"last": "Goebel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of EMNLP-08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shane Bergsma, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preferences from unlabeled text. In Proceedings of EMNLP-08, Honolulu, HI.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Hierarchical topic models and the nested Chinese Restaurant Process", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Tenenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of NIPS-03", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Thomas L. Griffiths, Michael I. Jordan, and Joshua B. Tenenbaum. 2003a. Hierarchical topic models and the nested Chinese Restaurant Process. In Proceedings of NIPS-03, Vancouver, BC.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Latent Dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003b. Latent Dirichlet allocation. Journal of Ma- chine Learning Research, 3:993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A topic model for word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaojin", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of EMNLP-CoNLL-07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jordan Boyd-Graber, David Blei, and Xiaojin Zhu. 2007. A topic model for word sense disambiguation. In Pro- ceedings of EMNLP-CoNLL-07, Prague, Czech Re- public.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The second release of the RASP system", |
|
"authors": [ |
|
{ |
|
"first": "Ted", |
|
"middle": [], |
|
"last": "Briscoe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rebecca", |
|
"middle": [], |
|
"last": "Watson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the ACL-06 Interactive Presentation Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proceed- ings of the ACL-06 Interactive Presentation Sessions, Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Evaluating and combining approaches to selectional preference acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Carsten", |
|
"middle": [], |
|
"last": "Brockmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of EACL-03", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carsten Brockmann and Mirella Lapata. 2003. Evalu- ating and combining approaches to selectional pref- erence acquisition. In Proceedings of EACL-03, Bu- dapest, Hungary.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Reading tea leaves: How humans interpret topic models", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Gerrish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of NIPS-09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Chang, Jordan Boyd-Graber, Sean Gerrish, Chong Wang, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Pro- ceedings of NIPS-09, Vancouver, BC.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Class-based probability estimation using a semantic hierarchy", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Weir", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Computational Linguistics", |
|
"volume": "28", |
|
"issue": "2", |
|
"pages": "187--206", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Clark and David Weir. 2002. Class-based prob- ability estimation using a semantic hierarchy. Compu- tational Linguistics, 28(2), 187-206.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "On the creation and use of English compound nouns", |
|
"authors": [ |
|
{ |
|
"first": "Pamela", |
|
"middle": [], |
|
"last": "Downing", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1977, |
|
"venue": "Language", |
|
"volume": "53", |
|
"issue": "4", |
|
"pages": "810--842", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pamela Downing. 1977. On the creation and use of En- glish compound nouns. Language, 53(4):810-842.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "A flexible, corpus-driven model of regular and inverse selectional preferences", |
|
"authors": [ |
|
{ |
|
"first": "Katrin", |
|
"middle": [], |
|
"last": "Erk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ulrike", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Computational Linguistics", |
|
"volume": "36", |
|
"issue": "4", |
|
"pages": "723--763", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Katrin Erk, Sebastian Pad\u00f3, and Ulrike Pad\u00f3. 2010. A flexible, corpus-driven model of regular and inverse selectional preferences. Computational Linguistics, 36(4):723-763.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "WordNet: An Electronic Lexical Database", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Finding scientific topics", |
|
"authors": [ |
|
{

"first": "Thomas",

"middle": [

"L"

],

"last": "Griffiths",

"suffix": ""

},

{

"first": "Mark",

"middle": [],

"last": "Steyvers",

"suffix": ""

}
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the National Academy of Sciences", |
|
"volume": "101", |
|
"issue": "", |
|
"pages": "5228--5235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National Academy of Sciences, 101(suppl. 1):5228-5235.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Using the Web to obtain frequencies for unseen bigrams", |
|
"authors": [ |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "3", |
|
"pages": "459--484", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frank Keller and Mirella Lapata. 2003. Using the Web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3):459-484.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Generalizing case frames using a thesaurus and the MDL principle", |
|
"authors": [ |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoki", |
|
"middle": [], |
|
"last": "Abe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Computational Linguistics", |
|
"volume": "24", |
|
"issue": "2", |
|
"pages": "217--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hang Li and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the MDL principle. Computa- tional Linguistics, 24(2):217-244.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Disambiguating nouns, verbs and adjectives using automatically acquired selectional preferences", |
|
"authors": [ |
|
{ |
|
"first": "Diana", |
|
"middle": [], |
|
"last": "Mccarthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Carroll", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "4", |
|
"pages": "639--654", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diana McCarthy and John Carroll. 2003. Disambiguat- ing nouns, verbs and adjectives using automatically acquired selectional preferences. Computational Lin- guistics, 29(4):639-654.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Comparing correlated correlation coefficients", |
|
"authors": [ |
|
{ |
|
"first": "Xiao-Li", |
|
"middle": [], |
|
"last": "Meng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Rubin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Psychological Bulletin", |
|
"volume": "111", |
|
"issue": "1", |
|
"pages": "172--175", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xiao-Li Meng, Robert Rosenthal, and Donald B. Rubin. 1992. Comparing correlated correlation coefficients. Psychological Bulletin, 111(1):172-175.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Mixtures of hierarchical topics with Pachinko allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mimno", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of ICML-07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Mimno, Wei Li, and Andrew McCallum. 2007. Mixtures of hierarchical topics with Pachinko alloca- tion. In Proceedings of ICML-07, Corvallis, OR.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Slice sampling", |
|
"authors": [ |
|
{

"first": "Radford",

"middle": [

"M"

],

"last": "Neal",

"suffix": ""

}
|
], |
|
"year": 2003, |
|
"venue": "Annals of Statistics", |
|
"volume": "31", |
|
"issue": "3", |
|
"pages": "705--767", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Radford M. Neal. 2003. Slice sampling. Annals of Statistics, 31(3):705-767.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Probabilistic models of similarity in syntactic context", |
|
"authors": [ |
|
{ |
|
"first": "Diarmuid\u00f3", |
|
"middle": [], |
|
"last": "S\u00e9aghdha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of EMNLP-11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diarmuid\u00d3 S\u00e9aghdha and Anna Korhonen. 2011. Prob- abilistic models of similarity in syntactic context. In Proceedings of EMNLP-11, Edinburgh, UK.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Latent variable models of selectional preference", |
|
"authors": [ |
|
{ |
|
"first": "Diarmuid\u00f3", |
|
"middle": [], |
|
"last": "S\u00e9aghdha", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of ACL-10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Diarmuid\u00d3 S\u00e9aghdha. 2010. Latent variable models of selectional preference. In Proceedings of ACL-10, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Ontologizing semantic relations", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Pennacchiotti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of COLING-ACL-06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Pennacchiotti and Patrick Pantel. 2006. Ontolo- gizing semantic relations. In Proceedings of COLING- ACL-06, Sydney, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "The effect of plausibility on eye movements in reading", |
|
"authors": [ |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Rayner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tessa", |
|
"middle": [], |
|
"last": "Warren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Juhasz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Liversedge", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Journal of Experimental Psychology: Learning Memory and Cognition", |
|
"volume": "30", |
|
"issue": "6", |
|
"pages": "1290--1301", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Keith Rayner, Tessa Warren, Barbara J. Juhasz, and Si- mon P. Liversedge. 2004. The effect of plausibil- ity on eye movements in reading. Journal of Experi- mental Psychology: Learning Memory and Cognition, 30(6):1290-1301.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Crosscutting models of lexical semantics", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Reisinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of EMNLP-11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Reisinger and Raymond Mooney. 2011. Cross- cutting models of lexical semantics. In Proceedings of EMNLP-11, Edinburgh, UK.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Selection and Information: A Class-Based Approach to Lexical Relationships", |
|
"authors": [ |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Reisinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marius", |
|
"middle": [], |
|
"last": "Pa\u015fca", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Proceedings of ACL-IJCNLP-09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joseph Reisinger and Marius Pa\u015fca. 2009. Latent vari- able models of concept-attribute attachment. In Pro- ceedings of ACL-IJCNLP-09, Suntec, Singapore. Philip Resnik. 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. Ph.D. thesis, University of Pennsylvania.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "A latent Dirichlet allocation method for selectional preferences", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |

"first": "", |

"middle": [], |

"last": "Mausam", |

"suffix": "" |

}, |

{ |

"first": "Oren", |

"middle": [], |

"last": "Etzioni", |

"suffix": "" |

} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings ACL-10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Ritter, Mausam, and Oren Etzioni. 2010. A la- tent Dirichlet allocation method for selectional prefer- ences. In Proceedings ACL-10, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Combining EM training and the MDL principle for an automatic verb classification incorporating selectional preferences", |
|
"authors": [ |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Schulte Im Walde", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Hying", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Scheible", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Helmut", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08:HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sabine Schulte im Walde, Christian Hying, Christian Scheible, and Helmut Schmid. 2008. Combining EM training and the MDL principle for an automatic verb classification incorporating selectional preferences. In Proceedings of ACL-08:HLT, Columbus, OH.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Automatic metaphor interpretation as a paraphrasing task", |
|
"authors": [ |
|
{ |
|
"first": "Ekaterina", |
|
"middle": [], |
|
"last": "Shutova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of NAACL-HLT-10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ekaterina Shutova. 2010. Automatic metaphor inter- pretation as a paraphrasing task. In Proceedings of NAACL-HLT-10, Los Angeles, CA.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Contextualizing semantic representations using syntactically enriched vector models", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Thater", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hagen", |
|
"middle": [], |
|
"last": "F\u00fcrstenau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Pinkal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of ACL-10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Thater, Hagen F\u00fcrstenau, and Manfred Pinkal. 2010. Contextualizing semantic representations using syntactically enriched vector models. In Proceedings of ACL-10, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Deriving generalized knowledge from corpora using WordNet abstraction", |
|
"authors": [ |
|
{ |

"first": "Benjamin", |

"middle": [], |

"last": "Van Durme", |

"suffix": "" |

}, |

{ |

"first": "Philip", |

"middle": [], |

"last": "Michalak", |

"suffix": "" |

}, |

{ |

"first": "Lenhart", |

"middle": [ |

"K" |

], |

"last": "Schubert", |

"suffix": "" |

} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of EACL-09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Benjamin Van Durme, Philip Michalak, and Lenhart K. Schubert. 2009. Deriving generalized knowledge from corpora using WordNet abstraction. In Proceed- ings of EACL-09, Athens, Greece.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Structured Topic Models for Language", |
|
"authors": [ |
|
{ |
|
"first": "Hanna", |
|
"middle": [], |
|
"last": "Wallach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hanna Wallach. 2008. Structured Topic Models for Lan- guage. Ph.D. thesis, University of Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Making preferences more active", |
|
"authors": [ |
|
{ |
|
"first": "Yorick", |
|
"middle": [], |
|
"last": "Wilks", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1978, |
|
"venue": "Artificial Intelligence", |
|
"volume": "11", |
|
"issue": "", |
|
"pages": "197--225", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yorick Wilks. 1978. Making preferences more active. Artificial Intelligence, 11:197-225.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Structured relation discovery using generative models", |
|
"authors": [ |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aria", |
|
"middle": [], |
|
"last": "Haghighi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of EMNLP-11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of EMNLP-11, Ed- inburgh, UK.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Generalizing over lexical features: Selectional preferences for semantic role classification", |
|
"authors": [ |
|
{ |

"first": "Be\u00f1at", |

"middle": [], |

"last": "Zapirain", |

"suffix": "" |

}, |

{ |

"first": "Eneko", |

"middle": [], |

"last": "Agirre", |

"suffix": "" |

}, |

{ |

"first": "Llu\u00eds", |

"middle": [], |

"last": "M\u00e0rquez", |

"suffix": "" |

} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of ACL-IJCNLP-09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Be\u00f1at Zapirain, Eneko Agirre, and Llu\u00eds M\u00e0rquez. 2009. Generalizing over lexical features: Selectional prefer- ences for semantic role classification. In Proceedings of ACL-IJCNLP-09, Singapore.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Exploiting web-derived selectional preference to improve statistical dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Guangyou", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of ACL-11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Guangyou Zhou, Jun Zhao, Kang Liu, and Li Cai. 2011. Exploiting web-derived selectional preference to im- prove statistical dependency parsing. In Proceedings of ACL-11, Portland, OR.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"content": "<table><tr><td>Model 3 \u223c</td></tr><tr><td>Dirichlet(\u03c3\u03b1 s )</td></tr><tr><td>end for</td></tr><tr><td>end for</td></tr><tr><td>for predicate p \u2208 {1 . . . |P |} do</td></tr><tr><td>\u03b8 p \u223c Dirichlet(\u03ba)</td></tr><tr><td>for argument instance i \u2208 Observations(p)</td></tr><tr><td>do</td></tr><tr><td>z i \u223c M ultinomial(\u03b8 p )</td></tr><tr><td>Create a path starting at the root synset \u03bb 0 :</td></tr><tr><td>while not at a leaf node do</td></tr><tr><td>\u03bb t+1 \u223c M ultinomial(\u03a8 z i ,\u03bbt )</td></tr><tr><td>end while</td></tr><tr><td>Emit the word at the leaf as w i</td></tr><tr><td>end for</td></tr><tr><td>end for</td></tr><tr><td>-Graber et al. (2007) describe a related topic</td></tr><tr><td>model, LDAWN, for word sense disambiguation</td></tr><tr><td>that adds an intermediate layer of latent variables</td></tr><tr><td>Z on which the Markov model parameters are con-</td></tr><tr><td>ditioned. In their application, each document in a</td></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Generative story for LDAWN for topic z \u2208 {1 . . . |Z|} do for synset s \u2208 {1 . . . |S|} do Draw transition probabilities \u03a8 z,s" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td colspan=\"3\">Verb-object</td><td/><td colspan=\"3\">Noun-noun</td><td/><td colspan=\"3\">Adjective-noun</td><td/></tr><tr><td>Seen</td><td/><td colspan=\"2\">Unseen</td><td>Seen</td><td/><td colspan=\"2\">Unseen</td><td>Seen</td><td/><td colspan=\"2\">Unseen</td></tr><tr><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td><td>r</td><td>\u03c1</td></tr><tr><td>WN-</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Most probable cuts learned by WN-CUT for the object argument of selected verbs" |
|
}, |
|
"TABREF5": { |
|
"content": "<table/>", |
|
"html": null, |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Forced-OOV results (Pearson r and Spearman \u03c1 correlations) on" |
|
} |
|
} |
|
} |
|
} |
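
The generative story recorded in TABREF1 can be simulated directly: each argument instance of a predicate draws a topic z_i from the predicate's mixture theta_p, then walks from the root synset to a leaf, choosing each step from the topic- and synset-specific transition distribution Psi_{z,s}, and emits the leaf word. Below is a minimal Python sketch of that sampling process, assuming a hypothetical toy hierarchy (the names CHILDREN, LEAVES, ROOT) and illustrative hyperparameter values kappa and sigma; it illustrates the model's generative semantics and is not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the WordNet hierarchy: internal synsets map to their
# children; names at the bottom (LEAVES) are emitted as words. All of
# these names and values are illustrative assumptions.
CHILDREN = {
    "entity": ["food", "artifact"],
    "food": ["pizza", "bread"],
    "artifact": ["car", "knife"],
}
LEAVES = {"pizza", "bread", "car", "knife"}
ROOT = "entity"

n_topics = 2             # |Z|
kappa, sigma = 0.1, 1.0  # illustrative Dirichlet hyperparameters

# Psi[z][s] ~ Dirichlet(sigma * alpha_s): per-topic transition distribution
# over the children of synset s (alpha_s taken uniform here for simplicity).
Psi = {
    z: {s: rng.dirichlet(sigma * np.ones(len(kids)))
        for s, kids in CHILDREN.items()}
    for z in range(n_topics)
}

def generate_argument(theta_p):
    """Sample one argument word for a predicate with topic mixture theta_p."""
    z = rng.choice(n_topics, p=theta_p)      # z_i ~ Multinomial(theta_p)
    node = ROOT                              # the path starts at the root synset
    while node not in LEAVES:                # walk down until a leaf is reached
        kids = CHILDREN[node]
        node = kids[rng.choice(len(kids), p=Psi[z][node])]
    return node                              # emit the word at the leaf as w_i

# theta_p ~ Dirichlet(kappa) for a hypothetical predicate, e.g. "eat".
theta_eat = rng.dirichlet(kappa * np.ones(n_topics))
print([generate_argument(theta_eat) for _ in range(5)])

Because the walk factors through shared synsets, probability mass generalises to unseen words that sit near seen ones in the hierarchy, which is what makes this family of models well suited to the out-of-vocabulary setting evaluated in the paper.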