|
{ |
|
"paper_id": "C14-1017", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:24:03.565654Z" |
|
}, |
|
"title": "Learning Task-specific Bilexical Embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Swaroop", |
|
"middle": [], |
|
"last": "Pranava", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya Campus Nord UPC", |
|
"location": { |
|
"settlement": "Barcelona" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Xavier", |
|
"middle": [], |
|
"last": "Madhyastha", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya Campus Nord UPC", |
|
"location": { |
|
"settlement": "Barcelona" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ariadna", |
|
"middle": [], |
|
"last": "Carreras", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya Campus Nord UPC", |
|
"location": { |
|
"settlement": "Barcelona" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a method that learns bilexical operators over distributional representations of words and leverages supervised data for a linguistic relation. The learning algorithm exploits lowrank bilinear forms and induces low-dimensional embeddings of the lexical space tailored for the target linguistic relation. An advantage of imposing low-rank constraints is that prediction is expressed as the inner-product between low-dimensional embeddings, which can have great computational benefits. In experiments with multiple linguistic bilexical relations we show that our method effectively learns using embeddings of a few dimensions.", |
|
"pdf_parse": { |
|
"paper_id": "C14-1017", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a method that learns bilexical operators over distributional representations of words and leverages supervised data for a linguistic relation. The learning algorithm exploits lowrank bilinear forms and induces low-dimensional embeddings of the lexical space tailored for the target linguistic relation. An advantage of imposing low-rank constraints is that prediction is expressed as the inner-product between low-dimensional embeddings, which can have great computational benefits. In experiments with multiple linguistic bilexical relations we show that our method effectively learns using embeddings of a few dimensions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "We address the task of learning functions that compute compatibility scores between pairs of lexical items under some linguistic relation. We refer to these functions as bilexical operators. As an instance of this problem, consider learning a model that predicts the probability that an adjective modifies a noun in a sentence. In this case, we would like the bilexical operator to capture the fact that some adjectives are more compatible with some nouns than others. For example, a bilexical operator should predict that the adjective electronic has high probability of modifying the noun device but little probability of modifying the noun case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Bilexical operators can be useful for multiple NLP applications. For example, they can be used to reduce ambiguity in a parsing task. Consider the following sentence extracted from a weblog: Vynil can be applied to electronic devices and cases, wooden doors and furniture and walls. If we want to predict the dependency structure of this sentence we need to make several decisions. In particular, the parser would need to decide (1) Does electronic modify devices? (2) Does electronic modify cases? (3) Does wooden modify doors? (4) Does wooden modify furniture? Now imagine that in the corpus used to train the parser none of these nouns have been observed, then it is unlikely that these attachments can be resolved correctly. However, if an accurate noun-adjective bilexical operator were available most of the uncertainty could be resolved. This is because a good bilinear operator would give high probability to the pairs electronic-device, wooden-door, wooden-furniture and low probability to the pair electronic-case.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The simplest way of inducing a bilexical operator is to learn it from a training corpus. That is, assuming that we are given some data annotated with a linguistic relation between a modifier and a head (e.g. adjective and noun) we can simply build a maximum likelihood estimator for Pr(m | h) by counting the occurrences of modifiers and heads under the target relation. For example, we could consider learning bilexical operators from sentences annotated with dependency structures. Clearly, this model can not generalize to head words not present in the training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To mitigate this we could consider bilexical operators that can exploit lexical embeddings, such as a distributional vector-space representation of words. In this case, we assume that for every word we can compute an n-dimensional vector space representation \u03c6(w) \u2192 R n . This representation typically captures distributional features of the context in which the lexical item can occur. The key point is that we do not need a supervised corpus to compute the representation. All we need is a large textual corpus to compute the relevant statistics. Once we have the representation we can exploit operations in the induced vector space to define lexical compatibility operators. For example we could define a bilexical operator as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Pr(m | h) = exp { \u03c6(m), \u03c6(h) } m exp { \u03c6(m ), \u03c6(h) } (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "where \u03c6(x), \u03c6(y) denotes the inner-product. Alternatively, given an initial high-dimensional distributional representation computed from a large textual corpus we could first induce a projection to a lower k dimensional space by performing truncated singular value decomposition. The idea is that the lower dimensional representation will be more efficient and it will better capture the relevant dimensions of the distributional representation. The bilexical operator would then take the form of:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Pr(m|h) = exp { U \u03c6(m), U \u03c6(h) } m exp { U \u03c6(m ), U \u03c6(h) }", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
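Illustrative sketch (not part of the paper): one way to instantiate the unsupervised rank-k operator of Eq. 2 in Python/NumPy, assuming a precomputed matrix Phi whose rows are the distributional vectors phi(x); taking the top-k right singular vectors as the projection U is our assumption.

```python
import numpy as np

def svd_bilexical_operator(Phi, k):
    """Unsupervised baseline of Eq. 2: project phi(x) with a rank-k SVD
    and score head-modifier pairs by inner products in the k-dim space."""
    # Phi is a |V| x n matrix whose rows are the distributional vectors phi(x).
    _, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    U = Vt[:k, :]                    # k x n projection (top right singular vectors)
    Z = Phi @ U.T                    # |V| x k low-dimensional embeddings

    def prob(mod_ids, head_id):
        # Softmax over candidate modifiers of <U phi(m), U phi(h)>.
        scores = Z[mod_ids] @ Z[head_id]
        scores -= scores.max()       # numerical stability
        p = np.exp(scores)
        return p / p.sum()

    return prob
```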
|
{ |
|
"text": "where U \u2208 R k\u00d7n is the projection matrix obtained via SVD. The advantage of this approach is that as long as we can estimate the distribution of contexts of words we can compute the value of the bilexical operator. However, this approach has a clear limitation: to design a bilinear operator for a target linguistic relation we must design the appropriate distributional representation. Moreover, there is no clear way of exploiting a supervised training corpus. In this paper we combine both the supervised and distributional approaches and present a learning algorithm for inducing bilexical operators from a combination of supervised and unsupervised training data. The main idea is to define bilexical operators using bilinear forms over distributional representations: \u03c6(x) W \u03c6(y), where W \u2208 R n\u00d7n is a matrix of parameters. We can then train our model on the supervised training corpus via conditional maximum-likelihood estimation. To induce a low-dimensional representation, we first observe that the implicit dimensionality of the bilinear form is given by the rank of W . In practice controlling the rank of W can result in important computational savings in cases where one evaluates a target word x against a large number of candidate words y: this is because we can project the representations \u03c6(x) and \u03c6(y) down to the low-dimensional space where evaluating the function is simply an inner-product. This setting is in fact usual, for example for lexical retrieval applications (e.g. given a noun, sort all adjectives in the vocabulary according to their compatibility), or for parsing (where one typically evaluates the compatibility between all pairs of words in a sentence).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Consequently with these ideas, we propose to regularize the maximum-likelihood estimation using a nuclear norm regularizer that serves as a convex relaxation to the rank function. To minimize the regularized objective we make use of an efficient iterative proximal method that involves computing the gradient of the function and performing singular value decompositions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We test the proposed algorithm on several linguistic relations and show that it can predict modifiers for unknown words more accurately than the unsupervised approach. Furthermore, we compare different types of regularizers for the bilexical operator W , and observe that indeed the low-rank regularizer results in the most efficient technique at prediction time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In summary, the main contributions of this paper are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We propose a supervised framework for learning bilexical operators over distributional representations, based on learning bilinear forms W .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 We show that we can obtain low-dimensional compressions of the distributional representation by imposing low-rank constraints to the bilinear form. Combined with supervision, this results in lexical embeddings tailored for a specific bilexical task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 In experiments, we show that our models generalize well to unseen word pairs, using only a few dimensions, and outperforming standard unsupervised distributional approaches. We also present an application to prepositional phrase attachment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "2 Bilinear Models for Bilexical Predictions", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Let V be a vocabulary, and let x \u2208 V denote a word. Let H \u2286 V be a set of head words, and M \u2286 V be a set of modifier words. In the noun-adjective relation example, H is the set of nouns and M is the set of adjectives. The task is as follows. We are given a training set of l tuples", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "D = {(m, h) 1 , . . . , (m, h) l },", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "where m \u2208 M and h \u2208 H and we want to learn a model of the conditional distribution Pr(m | h). We want this model to perform well on all head-modifier pairs. In particular we will test the performance of the model on heads that do not appear in D.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We assume that we are given access to a distributional representation function \u03c6 : V \u2192 R n , where \u03c6(x) is the n-dimensional representation of x. Typically, this function is computed from an unsupervised corpus. We use \u03c6(x) [i] to refer to the i-th coordinate of the vector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definitions", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Our model makes use of the bilinear form W : R n \u00d7 R n \u2192 R, where W \u2208 R n\u00d7n , and evaluates as \u03c6(m) W \u03c6(h). We define the bilexical operator as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Pr(m | h) = exp \u03c6(m) W \u03c6(h) m \u2208M exp {\u03c6(m ) W \u03c6(h)}", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Note that the above model is nothing more than a conditional log-linear model defined over", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "n 2 fea- tures f i,j (m, h) = \u03c6(m) [i] \u03c6(h) [j]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "(this can be seen clearly when we write the bilinear form as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "n i=1 n j=1 f i,j (m, h)W i,j .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The reason why it is useful to regard W as a matrix will become evident in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Before moving to the next section, let us note that the unsupervised SVD model in Eq. 2is also a bilinear model as defined here. This can be seen if we set W = U U , which is a bilinear form of rank k. The key difference is in the way W is learned using supervision.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
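Illustrative sketch (ours, not the authors' code): Eq. 3 evaluated directly with a full matrix W, and the equivalent factored evaluation when W = U diag(S) Vt has rank k; phi_mods and phi_heads are assumed to be row-wise matrices of distributional vectors.

```python
import numpy as np

def bilinear_prob(W, phi_mods, phi_heads, head_id):
    """Eq. 3: Pr(m | h) proportional to exp(phi(m)^T W phi(h))."""
    scores = phi_mods @ (W @ phi_heads[head_id])   # one score per candidate modifier
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()

def factored_prob(U, S, Vt, phi_mods, phi_heads, head_id):
    """Same distribution when W = U diag(S) Vt has rank k: project both sides
    once, then each candidate costs only a k-dimensional inner product."""
    mod_emb = (phi_mods @ U) * S          # |M| x k, can be precomputed
    head_emb = Vt @ phi_heads[head_id]    # k
    scores = mod_emb @ head_emb
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()
```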
|
{ |
|
"text": "3 Learning Low-rank Bilexical Operators", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bilinear Model", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Given a training set D and a feature function \u03c6(x) we can do standard conditional max-likelihood optimization and minimize the negative of the log-likelihood function, log Pr(D):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "(m,h)\u2208D \u03c6(m) W \u03c6(h) \u2212 log m \u2208M exp \u03c6(m ) W \u03c6(h)", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We would like to control the complexity of the learned model by including some regularization penalty. Moreover, like in the low-dimensional unsupervised approach we want our model to induce a lowdimensional representation of the lexical space. The first observation is that the bilinear form computes a weighted inner product in some space. Consider the singular value decomposition: W = U \u03a3V . We can write the bilinear form as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "[\u03c6(m) U ] \u03a3 [V \u03c6(h)]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", thus we can regardm = \u03c6(m) U as a projection of m andh = V \u03c6(h) as a projection of h. Then the bilinear form can be written as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "n i=1 \u03a3 [i,i]m[i]h[i]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ". The rank of W defines the dimensionality of the induced space. It is easy to see that if W has rank k it can be factorized as U \u03a3V where U \u2208 R n\u00d7k and V \u2208 R k\u00d7n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Since the rank of W determines the dimensionality of the induced space, it would be reasonable to add a rank minimization penalty in the objective in (4). Unfortunately this would lead to a non-convex regularized objective. Instead, we propose to use as a regularizer a convex relaxation of the rank function, the nuclear norm W * (the 1 norm of the singular values of W ). Putting it all together, our learning algorithm minimizes:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "(m,h)\u2208D \u2212 log Pr(m | h)) + \u03bb W *", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Here \u03bb is a constant that controls the trade-off between fitting the data and the complexity of the model. This objective is clearly convex since both the objective and the regularizer are convex. To minimize it we use the a proximal gradient algorithm which is described next.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Low-rank Optimization", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "We now describe the learning algorithm that we use to induce the bilexical operators from training data. We are interested in minimizing the objective (5), or in fact a more general version where we can replace the regularizer W * by standard 1 or 2 penalties. For any convex regularizer r(W ) (namely 1 , 2 or the nuclear norm) the objective in (5) is convex. Our learning algorithm is based on a simple optimization scheme known as forward-backward splitting (FOBOS) (Duchi and Singer, 2009) . This algorithm has convergence rates in the order of 1/ 2 , which we found sufficiently fast for our application. Many other optimization approaches are possible, for example one could express the regularizer as a convex constraint and utilize a projected gradient method which has a similar convergence rate. Proximal methods are slightly more simple to implement and we chose the proximal approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 469, |
|
"end": 493, |
|
"text": "(Duchi and Singer, 2009)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The FOBOS algorithm works as follows. In a series of iterations t = 1 . . . T compute parameter matrices W t as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. Compute the gradient of the negative log-likelihood, and update the parameters", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "W t+0.5 = W t \u2212 \u03b7 t g(W t ) where \u03b7 t = c \u221a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "t is a step size and g(W t ) is the gradient of the loss at W t .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2. Update W t+0.5 to take into account the regularization penalty r(W ), by solving", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "W t+1 = argmin W ||W t+0.5 \u2212 W || 2 2 + \u03b7 t \u03bbr(W )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For the regularizers we consider, this step is solved using the proximal operator associated with the regularizer. Specifically:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 For 1 it is a simple thresholding:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "W t+1 (i, j) = sign(W t+0.5 (i, j)) \u2022 max(W t+0.5 (i, j) \u2212 \u03b7 t \u03bb, 0)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 For 2 it is a simple scaling:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "W t+1 = 1 1 + \u03b7 t \u03bb W t+0.5", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 For nuclear-norm, perform SVD thresholding. Compute the SVD to write W t+0.5 = U SV with S a diagonal matrix and U, V orthogonal matrices. Denote by \u03c3 i the i-th element on the diagonal of S. Define a new matrixS with diagonal elements\u03c3 i = max(\u03c3 i \u2212 \u03b7 t \u03bb, 0). Then set", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "W t+1 = USV", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Optimizing a bilinear model using nuclear-norm regularization involves the extra cost of performing SVD of W at each iteration. In our experiments the dimension of W was 2, 000 \u00d7 2, 000 and computing SVD was fast, much faster than computing the gradient, which dominates the cost of the algorithm. The optimization parameters of the method are the regularization constant \u03bb, the step size constant c and the number of iterations T . In our experiments we ran a range of \u03bb and c values for 200 iterations, and used a validation set to pick the best configuration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Proximal Algorithm for Bilexical Operators", |
|
"sec_num": "3.2" |
|
}, |
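Illustrative sketch (ours, not the authors' implementation) of one FOBOS iteration with the three proximal updates described in this section; grad is assumed to be the gradient of the negative log-likelihood at W, and tau = eta_t * lambda.

```python
import numpy as np

def prox_l1(W_half, tau):
    # Entry-wise soft thresholding with threshold tau.
    return np.sign(W_half) * np.maximum(np.abs(W_half) - tau, 0.0)

def prox_l2(W_half, tau):
    # Simple scaling.
    return W_half / (1.0 + tau)

def prox_nuclear(W_half, tau):
    # Soft thresholding of the singular values of W_half.
    U, s, Vt = np.linalg.svd(W_half, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def fobos_step(W, grad, t, c, lam, prox):
    eta = c / np.sqrt(t)            # step size eta_t = c / sqrt(t)
    return prox(W - eta * grad, eta * lam)
```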
|
{ |
|
"text": "Research in learning representations for natural language processing can be broadly classified into two different paradigms based on the learning setting: unsupervised representation learning and semisupervised representation learning. Unsupervised representation learning does not require any supervised training data, while semi-supervised representation learning requires the presence of supervised training data with the potential advantage that it can adapt the representation to the task at hand.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Unsupervised approaches to learning representations mainly involve representations that are learned not for a specific task, rather a variety of tasks. These representations rely more on the property of abstractness and generalization. Further, unsupervised approaches can be roughly categorized into (a) clustering-based approaches that make use of clusters induced using a notion of distributed similarity, such as the method by Brown et al. (1992) ; (b) neural-network-based representations that focus on learning multilayer neural network in a way to extract features from the data (Morin and Bengio, 2005; Mnih and Hinton, 2007; Bengio and S\u00e9n\u00e9cal, 2008; Mnih and Hinton, 2009) ; (c) pure distributional approaches that principally follow the distributional assumption that the words which share a set of contexts are similar (Sahlgren, 2006; Turney and Pantel, 2010; Dumais et al., 1988; Landauer et al., 1998; Lund et al., 1995; V\u00e4yrynen et al., 2007) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 431, |
|
"end": 450, |
|
"text": "Brown et al. (1992)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 586, |
|
"end": 610, |
|
"text": "(Morin and Bengio, 2005;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 611, |
|
"end": 633, |
|
"text": "Mnih and Hinton, 2007;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 634, |
|
"end": 659, |
|
"text": "Bengio and S\u00e9n\u00e9cal, 2008;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 660, |
|
"end": 682, |
|
"text": "Mnih and Hinton, 2009)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 831, |
|
"end": 847, |
|
"text": "(Sahlgren, 2006;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 848, |
|
"end": 872, |
|
"text": "Turney and Pantel, 2010;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 873, |
|
"end": 893, |
|
"text": "Dumais et al., 1988;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 894, |
|
"end": 916, |
|
"text": "Landauer et al., 1998;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 917, |
|
"end": 935, |
|
"text": "Lund et al., 1995;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 936, |
|
"end": 958, |
|
"text": "V\u00e4yrynen et al., 2007)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We also induce lexical embeddings, but in our case we employ supervision. That is, we follow a semi-supervised paradigm for learning representations. Semi-supervised approaches initially learn representations typically in an unsupervised setting and then induce a representation that is jointly learned for the task with a labeled corpus. A high-dimensional representation is extracted from unlabeled data, while the supervised step compresses the representation to be low-dimensional in a way that favors the the task at hand. Collobert and Weston (2008) present a neural network language model, where given a sentence, it performs a set of language processing tasks (from part of speech tagging, chunking, extracting named entity, extracting semantic roles and decisions on the correctness of the sentence) by using the learned representations. The representation itself is extracted from unlabeled corpora, while all the other tasks are jointly trained on labeled corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 528, |
|
"end": 555, |
|
"text": "Collobert and Weston (2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Socher et al. (2011) present a model based on recursive neural networks that learns vector space representations for words, multi-word phrases and sentences. Given a sentence with its syntactic structure, their model assings vector representations to each of the lexical tokens of the sentence, and then traverses the syntactic tree bottom-up, such that at each node a vector representation of the corresponding phrase is obtained by composing the vectors associated with the children.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Bai et al. (2010) use a technique similar to ours, using bilinear forms with low-rank constraints. In their case, they explicitly look for a low-rank factorization of the matrix, making their optimization non-convex. As far as we know, ours is the first convex formulation, where we employ a relaxation of the rank (i.e. the nuclear norm) to make the objective convex. They apply the method to document ranking, and thus optimize a max-margin ranking loss. In our application to bilexical models, we perform conditional max-likelihood estimation. Hutchinson et al. (2013) propose an explicitly sparse and lowrank maximum-entropy language model. The sparse plus low rank setting is learned in such a way that the low rank component learns the regularities in the training data and the sparse component learns the exceptions like multiword expressions etc. Chechik et al. (2010) also learned bilinear operators using max-margin techniques, with pairwise similarity as supervision, but they did not consider low-rank constraints.", |
|
"cite_spans": [ |
|
{ |
|
"start": 547, |
|
"end": 571, |
|
"text": "Hutchinson et al. (2013)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 855, |
|
"end": 876, |
|
"text": "Chechik et al. (2010)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "One related area where bilinear operators are used to induce embeddings is distance metric learning. Weinberger and Saul (2009) used large-margin nearest neighbor methods to learn a non-sparse embedding, but these are computationally intensive and might not be suitable for large-scale tasks in NLP.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We conducted a set of experiments to test the ability of our algorithm to learn bilexical operators for several linguistic relations. As supervised training data we use the gold standard dependencies of the WSJ training section of the Penn Treebank (Marcus et al., 1993) . We consider the following relations: Figure 1: Pairwise accuracy with respect to the number of double operations required to compute the distribution over modifiers for a head word. Plots for noun-adjective and verb-object relations, in both directions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 270, |
|
"text": "(Marcus et al., 1993)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Noun-Adjective: we model the distribution of adjectives given a noun; and a separate distribution of nouns given an adjective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Verb-Object: we model the distribution of object nouns given a verb; and a separate distribution of verbs given an object.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Prepositions: in this case we consider bilexical operators associated with a preposition, which model the probability of a head noun or verb above the preposition given the noun below the preposition. We present results for prepositional relations given by \"with\", \"for\", \"in\" and \"on\".", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The distributional representation \u03c6(x) was computed using the BLLIP corpus (Charniak et al., 2000) . We compute a bag-of-words representation for the context of each lexical item, that is \u03c6(w) [i] corresponds to the frequency of word i appearning in the context of w. We use a context window of size 10 and restrict our bag-of-words vocabulary to contain only the 2,000 most frequent words present in the corpus. Vectors were normalized. Figure 2: Pairwise accuracy with respect to the number of double operations required to compute the distribution over modifiers for a head word. Plots for four prepositional relations: with, for, in, on. The distributions are of verbs and objects above the preposition given the noun below the preposition.", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 98, |
|
"text": "(Charniak et al., 2000)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 196, |
|
"text": "[i]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
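Rough sketch (our reconstruction, with assumed variable names and an assumed symmetric window) of how such bag-of-words context vectors could be computed from a tokenized corpus.

```python
from collections import Counter
import numpy as np

def build_phi(sentences, vocab_size=2000, window=10):
    """sentences: list of token lists.  Returns a dict word -> phi(word)."""
    freq = Counter(tok for sent in sentences for tok in sent)
    context_words = [w for w, _ in freq.most_common(vocab_size)]
    col = {w: i for i, w in enumerate(context_words)}

    phi = {}
    for sent in sentences:
        for i, w in enumerate(sent):
            vec = phi.setdefault(w, np.zeros(vocab_size))
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i and sent[j] in col:
                    vec[col[sent[j]]] += 1.0      # context counts

    for w, vec in phi.items():                    # normalize each vector
        norm = np.linalg.norm(vec)
        if norm > 0:
            phi[w] = vec / norm
    return phi, context_words
```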
|
{ |
|
"text": "To test the performance of our algorithm for each relation we partition the set of heads into a training and a test set, 60% of the heads are use for training, 10% of the heads are used for validation and 30% of the heads are used for testing. Then, we consider all observed modifiers in the data to form a vocabulary of modifier words. The goal of this task is to learn conditional distribution over all these modifers given a head word without context. In our experiments, the number of modifiers per relation ranges from 2,500 to 7,500 words. For each head word, we create a list of compatible modifiers from the annotated data, by taking all modifiers that occur at least once with the head. Hence, for each head the set of all modifiers is partitioned into compatible and non-compatible. For testing, we measure a pairwise accuracy, the percentage of compatible/non-compatible pairs of modifiers where the former obtains higher probability. Let us stress that none of the test head words has been observed in training, while the list of modifiers is the same for training, validation and testing.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We compare the performance of the bilexical model trained with nuclear norm regularization (NN) with other regularization penalties (L1 and L2). We also compare these supervised methods with an Noun Predicted Adjectives president executive, senior, chief, frank, former, international, marketing, assistant, annual, financial wife former, executive, new, financial, own, senior, old, other, deputy, major shares annual, due, net, convertible, average, new, high-yield, initial, tax-exempt, subordinated mortgages annualized, annual, three-month, one-year, average, six-month, conventional, short-term, higher, lower month last, next, fiscal, first, past, latest, early, previous, new, current problem new, good, major, tough, bad, big, first, financial, long, federal holiday new, major, special, fourth-quarter, joint, quarterly, third-quarter, small, strong, own unsupervised model: a low-dimensional SVD model as in Eq. 2, which corresponds to an inner product as in Eq.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "(1) when all dimensions are considered.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To report performance, we measure pairwise accuracy with respect to the capacity of the model in terms of number of active parameters. To measure the capacity of a model we consider the number of double operations that are needed to compute, given a head, the scores for all modifiers in the vocabulary (we exclude the exponentiations and normalization needed to compute the distribution of modifiers given a head, since this is a constant cost for all the models we compare, and is not needed if we only want to rank modifiers). Recall that the dimension of \u03c6(x) is n, and assume that there are m total modifiers in the vocabulary. In our experiments n = 2, 000 and m ranges from 2, 500 to 7, 500. The correspondances with operations are:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Assume that the L1 and L2 models have k non-zero weights in W . Then the number of operations to compute a distribution is km.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "\u2022 Assume that the NN and the unsupervised models have rank k. We assume that the modifier vectors are alredy projected down to k dimensions. For a new head, one needs to project it and perform m inner products, hence the number of operations is kn + km. Figure 1 shows the performance of models for noun-adjective and verb-object relations, while Figure 2 shows plots for prepositional relations. 1 The first observation is that supervised approaches outperform the unsupervised approach. In cases such as noun-adjetive relations the unsupervised approach performs close to the supervised approaches, suggesting that the pure distributional approach can sometimes work. But in most relations the improvement obtained by using supervision is very large. When comparing the type of regularizer, we see that if the capacity of the model is unrestricted (right part of the curves), all models tend to perform similarly. However, when restricting the size, the nuclear-norm model performs much better. Roughly, 20 hidden dimensions are enough to obtain the most accurate performances (which result in \u223c 140, 000 operations for initial representaions of 2, 000 dimensions and 5, 000 modifier candidates). As an example of the type of predictions, Table 1 shows the most likely adjectives for some test nouns.", |
|
"cite_spans": [ |
|
{ |
|
"start": 397, |
|
"end": 398, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 262, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1241, |
|
"end": 1248, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments on Syntactic Relations", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We now switch to a standard classification task, prepositional phrase attachment, that we frame as a bilexical prediction task. We start from the formulation of the task as a binary classification problem by Ratnaparkhi et al. (1994) p and noun n, decide if the prepositional phrase p-n attaches to v (y = V) or to o (y = O). For example, in meet,demand,for,products the correct attachment is O. Ratnaparkhi et al. (1994) define a linear maximum likelihood model of the form Pr(y | x) = exp{ w, f (x, y) } * Z(x) \u22121 , where f (x, y) is a vector of d features, w is a parameter vector in R d , and Z(x) is the normalizer summing over y = {V, O}. Here we define a bilexical model of the form that uses a distributional representation \u03c6:", |
|
"cite_spans": [ |
|
{ |
|
"start": 208, |
|
"end": 233, |
|
"text": "Ratnaparkhi et al. (1994)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 396, |
|
"end": 421, |
|
"text": "Ratnaparkhi et al. (1994)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments on PP Attachment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Pr(V| v, o, p, n ) = exp{\u03c6(v) W p V \u03c6(n)} Z(x) Pr(O| v, o, p, n ) = exp{\u03c6(o) W p O \u03c6(n)} Z(x)", |
|
"eq_num": "(6)" |
|
} |
|
], |
|
"section": "Experiments on PP Attachment", |
|
"sec_num": "6" |
|
}, |
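Illustrative sketch (ours) of the classifier in Eq. 6 for a single preposition; W_V and W_O are the two learned matrices and phi_v, phi_o, phi_n the distributional vectors of the verb, the object and the noun below the preposition.

```python
import numpy as np

def pp_attachment_prob(W_V, W_O, phi_v, phi_o, phi_n):
    """Eq. 6: Pr(V | v,o,p,n) and Pr(O | v,o,p,n) for one preposition."""
    s_v = phi_v @ W_V @ phi_n          # bilinear score for verb attachment
    s_o = phi_o @ W_O @ phi_n          # bilinear score for object attachment
    m = max(s_v, s_o)                  # stabilize the shared normalizer Z(x)
    e_v, e_o = np.exp(s_v - m), np.exp(s_o - m)
    return e_v / (e_v + e_o), e_o / (e_v + e_o)
```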
|
{ |
|
"text": "The bilinear model is parameterized by two matrices W V and W O per preposition, each of which captures the compatibility between nouns below a certain preposition and heads of V or O prepositional relations, respectively. Again Z(x) is the normalizer summing over y = {V, O}, but now using the bilinear form. It is straighforward to modify the learning algorithm in Section 3 such that the loss is a negative loglikelihood for binary classification, and the regularizer considers the sum of norms of the model matrices. We ran experiments using the data by Ratnaparkhi et al. (1994) . We trained separate models for different prepositions, focusing on the prepositions that are more ambiguous: for, from, with. We compare to a linear \"maxent\" model following Ratnaparkhi et al. (1994) that uses the same feature set. Figure 3 shows the test results for the linear model, and bilinear models trained with L1, L2, NN regularization penalties. The results of the bilinear models are significantly below the accuracy of the linear model, suggesting that some of the non-lexical features of the linear model (such as prior weighting of the two classes) might be difficult to capture by the bilinear model over lexical representations. To check if the bilinear model might complement the linear model or just be worse than it, we tested simple combinations based on linear interpolations. For a constant \u03bb \u2208 [0, 1] we define:", |
|
"cite_spans": [ |
|
{ |
|
"start": 558, |
|
"end": 583, |
|
"text": "Ratnaparkhi et al. (1994)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 760, |
|
"end": 785, |
|
"text": "Ratnaparkhi et al. (1994)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 818, |
|
"end": 826, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments on PP Attachment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "Pr(y | x) = \u03bb Pr L (y | x) + (1 \u2212 \u03bb) Pr B (y | x) .", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Experiments on PP Attachment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We search for the best \u03bb on the validation set, and report results of combining the linear model with each of the three bilinear models. Results are shown also in Figure 3 . Interpolation models improve over linear models, though only the improvement for for is significant (2.6%). Future work should exploit finer combinations between standard linear features and distributional bilinear forms.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 171, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments on PP Attachment", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have presented a model for learning bilexical operators that can leverage both supervised and unsupervised data. The model is based on exploiting bilinear forms over distributional representations. The learning algorithm induces a low-dimensional representation of the lexical space by imposing low-rank constraints on the parameters of the bilinear form. By means of supervision, our model induces two low-dimensional lexical embeddings, one on each side of the bilexical linguistic relation, and computations can be expressed as an inner-product between the two embeddings. This factorized form of the model can have great computational advantages: in many applications one needs to evaluate the function multiple times for a fixed set of lexical items, for example in dependency parsing. Hence, one can first project the lexical items to their embeddings, and then compute all pairwise scores as inner-products. In experiments, we have shown that the embeddings we obtain in a number of linguistic relations can be modeled with a few hidden dimensions. As future work, we would like to apply the low-rank approach to other model forms that can employ lexical embeddings, specially when supervision is available. For example, dependency parsing models, or models of predicate-argument structures representing semantic roles, exploit bilexical relations. In these applications, being able to generalize to word pairs that are not observed during training is essential.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We would also like to study how to combine low-rank bilexical operators, which in essence induce a task-specific representation of words, with other forms of features that capture class or contextual information. One desires that such combinations can preserve the computational advantages behind low-rank embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "To obtain curves for each model type with respect to a range of number of operations, we first obtained the best model on validation data and then forced it to have at most k non-zero features or rank k by projecting, for a range of k values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank the reviewers for their helpful comments. This work was supported by projects XLike (FP7-288342), ERA-Net CHISTERA VISEN and TACARDI (TIN2012-38523-C02-00). Xavier Carreras was supported by the Ram\u00f3n y Cajal program of the Spanish Government (RYC-2008-02223).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Learning to rank with (a lot of) word features", |
|
"authors": [ |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Grangier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kunihiko", |
|
"middle": [], |
|
"last": "Sadamasa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanjun", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Chapelle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kilian", |
|
"middle": [], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Information Retrieval", |
|
"volume": "13", |
|
"issue": "3", |
|
"pages": "291--314", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Kunihiko Sadamasa, Yanjun Qi, Olivier Chapelle, and Kilian Weinberger. 2010. Learning to rank with (a lot of) word features. Information Retrieval, 13(3):291-314, June.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Adaptive importance sampling to accelerate training of a neural probabilistic language model", |
|
"authors": [ |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-S\u00e9bastien", |
|
"middle": [], |
|
"last": "S\u00e9n\u00e9cal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "IEEE Transactions on Neural Networks", |
|
"volume": "19", |
|
"issue": "4", |
|
"pages": "713--722", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshua Bengio and Jean-S\u00e9bastien S\u00e9n\u00e9cal. 2008. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4):713-722.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Class-based n-gram models of natural language", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Peter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"V" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Desouza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenifer", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}
|
], |
|
"year": 1992, |
|
"venue": "Computational Linguistics", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "467--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18:467-479.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "BLLIP 1987-89 WSJ Corpus Release 1, LDC No. LDC2000T43. Linguistic Data Consortium", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Don", |
|
"middle": [], |
|
"last": "Blaheta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niyu", |
|
"middle": [], |
|
"last": "Ge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keith", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, and Mark Johnson. 2000. BLLIP 1987-89 WSJ Corpus Release 1, LDC No. LDC2000T43. Linguistic Data Consortium.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Large scale online learning of image similarity through ranking", |
|
"authors": [ |
|
{ |
|
"first": "Gal", |
|
"middle": [], |
|
"last": "Chechik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varun", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Uri", |
|
"middle": [], |
|
"last": "Shalit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samy", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1109--1135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gal Chechik, Varun Sharma, Uri Shalit, and Samy Bengio. 2010. Large scale online learning of image similarity through ranking. Journal of Machine Learning Research, pages 1109-1135.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", |
|
"authors": [ |
|
{ |
|
"first": "Ronan", |
|
"middle": [], |
|
"last": "Collobert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th International Conference on Machine Learning, ICML '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, pages 160-167, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Efficient online and batch learning using forward backward splitting", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Duchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "2899--2934", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Duchi and Yoram Singer. 2009. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10:2899-2934.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Using latent semantic analysis to improve access to textual information", |
|
"authors": [ |
|
{ |
|
"first": "Susan", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Dumais", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Furnas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Landauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Deerwester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Harshman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "SIGCHI Conference on Human Factors in Computing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "281--285", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Susan T. Dumais, George W. Furnas, Thomas K. Landauer, Scott Deerwester, and Richard Harshman. 1988. Using latent semantic analysis to improve access to textual information. In SIGCHI Conference on Human Factors in Computing Systems, pages 281-285. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Exceptions in language as learned by the multifactor sparse plus low-rank language model", |
|
"authors": [ |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Hutchinson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mari", |
|
"middle": [], |
|
"last": "Ostendorf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maryam", |
|
"middle": [], |
|
"last": "Fazel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ICASSP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8580--8584", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brian Hutchinson, Mari Ostendorf, and Maryam Fazel. 2013. Exceptions in language as learned by the multi- factor sparse plus low-rank language model. In ICASSP, pages 8580-8584.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "An introduction to latent semantic analysis", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Landauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Foltz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Darrell", |
|
"middle": [], |
|
"last": "Laham", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Discourse Processes", |
|
"volume": "25", |
|
"issue": "", |
|
"pages": "259--284", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas K. Landauer, Peter W. Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse Processes, 25:259-284.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Semantic and associative priming in high-dimensional semantic space", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Lund", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Curt", |
|
"middle": [], |
|
"last": "Burgess", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruth", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Atchley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Cognitive Science Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "660--665", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Lund, Curt Burgess, and Ruth A. Atchley. 1995. Semantic and associative priming in high-dimensional semantic space. In Cognitive Science Proceedings, LEA, pages 660-665.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Building a Large Annotated Corpus of English: The Penn Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Mitchell", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Marcus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beatrice", |
|
"middle": [], |
|
"last": "Santorini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Marcinkiewicz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "313--330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary A. Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Three new graphical models for statistical language modelling", |
|
"authors": [ |
|
{ |
|
"first": "Andriy", |
|
"middle": [], |
|
"last": "Mnih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 24th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "641--648", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andriy Mnih and Geoffrey E. Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine Learning, pages 641-648.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A scalable hierarchical distributed language model", |
|
"authors": [ |
|
{ |
|
"first": "Andriy", |
|
"middle": [], |
|
"last": "Mnih", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1081--1088", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andriy Mnih and Geoffrey E. Hinton. 2009. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pages 1081-1088.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Hierarchical probabilistic neural network language model", |
|
"authors": [ |
|
{ |
|
"first": "Frederic", |
|
"middle": [], |
|
"last": "Morin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "AIS-TATS05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "246--252", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In AIS- TATS05, pages 246-252.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A maximum entropy model for prepositional phrase attachment", |
|
"authors": [ |
|
{ |
|
"first": "Adwait", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Reynar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the workshop on Human Language Technology, HLT '94", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "250--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adwait Ratnaparkhi, Jeff Reynar, and Salim Roukos. 1994. A maximum entropy model for prepositional phrase attachment. In Proceedings of the workshop on Human Language Technology, HLT '94, pages 250-255, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces", |
|
"authors": [ |
|
{ |
|
"first": "Magnus", |
|
"middle": [], |
|
"last": "Sahlgren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Magnus Sahlgren. 2006. The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces. Ph.D. thesis, Stockholm University.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Semisupervised recursive autoencoders for predicting sentiment distributions", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [ |
|
"Y" |
|
], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--161", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. 2011. Semi- supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151-161. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "From frequency to meaning: Vector space models of semantics", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Turney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Artificial Intelligence Research", |
|
"volume": "37", |
|
"issue": "1", |
|
"pages": "141--188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141-188, January.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Towards explicit semantic features using independent component analysis", |
|
"authors": [ |
|
{ |
|
"first": "Jaakko", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "V\u00e4yrynen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timo", |
|
"middle": [], |
|
"last": "Honkela", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lasse", |
|
"middle": [], |
|
"last": "Lindqvist", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Workshop Semantic Content Acquisition and Representation (SCAR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jaakko J. V\u00e4yrynen, Timo Honkela, and Lasse Lindqvist. 2007. Towards explicit semantic features using indepen- dent component analysis. In Proceedings of the Workshop Semantic Content Acquisition and Representation (SCAR), Stockholm, Sweden. Swedish Institute of Computer Science. SICS Technical Report T2007-06.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Distance metric learning for large margin nearest neighbor classification", |
|
"authors": [ |
|
{ |
|
"first": "Kilian", |
|
"middle": [ |
|
"Q" |
|
], |
|
"last": "Weinberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lawrence", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Saul", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "207--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilian Q. Weinberger and Lawrence K. Saul. 2009. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207-244, June.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": ": given a tuple x = v, o, p, n consisting of a verb v, noun object o, preposition Attachment accuracies of linear, bilinear and interpolated models for three prepositions." |
|
}, |
|
"TABREF1": { |
|
"content": "<table/>", |
|
"text": "10 most likely adjectives for some test nouns.", |
|
"num": null, |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |