{
"paper_id": "N19-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:02:08.914360Z"
},
"title": "Evaluating Composition Models for Verb Phrase Elliptical Sentence Embeddings",
"authors": [
{
"first": "Gijs",
"middle": [],
"last": "Wijnholds",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary University of London",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Ellipsis is a natural language phenomenon where part of a sentence is missing and its information must be recovered from its surrounding context, as in \"Cats chase dogs and so do foxes.\". Formal semantics has different methods for resolving ellipsis and recovering the missing information, but the problem has not been considered for distributional semantics, where words have vector embeddings and combinations thereof provide embeddings for sentences. In elliptical sentences these combinations go beyond linear as copying of elided information is necessary. In this paper, we develop different models for embedding VP-elliptical sentences. We extend existing verb disambiguation and sentence similarity datasets to ones containing elliptical phrases and evaluate our models on these datasets for a variety of non-linear combinations and their linear counterparts. We compare results of these compositional models to state of the art holistic sentence encoders. Our results show that non-linear addition and a non-linear tensor-based composition outperform the naive non-compositional baselines and the linear models, and that sentence encoders perform well on sentence similarity, but not on verb disambiguation.",
"pdf_parse": {
"paper_id": "N19-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "Ellipsis is a natural language phenomenon where part of a sentence is missing and its information must be recovered from its surrounding context, as in \"Cats chase dogs and so do foxes.\". Formal semantics has different methods for resolving ellipsis and recovering the missing information, but the problem has not been considered for distributional semantics, where words have vector embeddings and combinations thereof provide embeddings for sentences. In elliptical sentences these combinations go beyond linear as copying of elided information is necessary. In this paper, we develop different models for embedding VP-elliptical sentences. We extend existing verb disambiguation and sentence similarity datasets to ones containing elliptical phrases and evaluate our models on these datasets for a variety of non-linear combinations and their linear counterparts. We compare results of these compositional models to state of the art holistic sentence encoders. Our results show that non-linear addition and a non-linear tensor-based composition outperform the naive non-compositional baselines and the linear models, and that sentence encoders perform well on sentence similarity, but not on verb disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Compositional distributional semantics has so far relied on a tight connection between syntactic and semantic resources. Based on the assembly principle of compositionality, these models assign a sentence vector by applying a linear map to the individual word embeddings therein. The meaning of \"cats chase dogs\" is as follows in (1) additive, (2) multiplicative, and (3) tensor-based models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) \u2212 \u2212 \u2192 cats + \u2212 \u2212\u2212 \u2192 chase + \u2212\u2212\u2192 dogs (2) \u2212 \u2212 \u2192 cats \u2212 \u2212\u2212 \u2192 chase \u2212\u2212\u2192 dogs (3) \u2212 \u2212 \u2192 cats \u00d7 (chase \u00d7 \u2212\u2212\u2192 dogs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
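{
"text": "A minimal numpy sketch of these three compositions, assuming toy random placeholder embeddings of dimension 4 rather than trained vectors; the tensor-based model contracts a cube embedding of the verb with the subject and object vectors:\nimport numpy as np\nd = 4  # toy dimensionality; the spaces used later have 300 or 2000 dimensions\ncats, dogs = np.random.rand(d), np.random.rand(d)\nchase_vec = np.random.rand(d)          # vector embedding of the verb\nchase_cube = np.random.rand(d, d, d)   # rank-3 tensor embedding of the verb\nadditive = cats + chase_vec + dogs                               # model (1)\nmultiplicative = cats * chase_vec * dogs                         # model (2)\ntensor_based = np.einsum('ijk,j,k->i', chase_cube, cats, dogs)   # model (3): (chase x_2 dogs) x_1 cats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},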
{
"text": "Some linguistic phenomena, however, rely on copying resources while computing meaning; canonical examples thereof are anaphora and ellipsis, exemplified below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(a) Cats clean themselves. (b) Cats chase dogs, children do too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More complex examples involve a structural ambiguity such as the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(c) Cats chase their tail, dogs too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These lend themselves to a strict (dogs chase the cat's tail) and a sloppy reading (dogs chase their own tail). In these examples, the meaning of at least one part of the sentence is used twice, e.g. the subject in a, the verb phrase \"chase dogs\" in b. Such cases can often be extended to a situation in which a meaning is used more than twice, e.g. in \"Cats chase their tail, dogs too, and so do foxes\". In order to develop distributional semantics for such sentences while respecting the principle of compositionality, one has a choice between a linear or a non-linear composition of resources. In the linear case, no information is copied, resulting in vector embeddings such as the following one (when only considering content words):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2212 \u2212 \u2192 cats + \u2212 \u2212\u2212 \u2192 chase + \u2212\u2212\u2192 dogs + \u2212\u2212\u2212\u2212\u2212\u2192 children",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the non-linear case, the necessary resources are copied to resolve the ellipsis, resulting in vectors embeddings such as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2212 \u2212 \u2192 cats + \u2212 \u2212\u2212 \u2192 chase + \u2212\u2212\u2192 dogs + \u2212\u2212\u2212\u2212\u2212\u2192 children + \u2212 \u2212\u2212 \u2192 chase + \u2212\u2212\u2192 dogs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
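{
"text": "As a small illustration with toy numpy vectors (random placeholders over content words only, not trained embeddings), the linear and non-linear additive embeddings of \"Cats chase dogs, children do too\" differ only in whether the elided verb phrase is copied:\nimport numpy as np\nd = 4\ncats, chase, dogs, children = (np.random.rand(d) for _ in range(4))\nlinear = cats + chase + dogs + children                      # ellipsis left unresolved\nnon_linear = cats + chase + dogs + children + chase + dogs   # elided VP 'chase dogs' copied",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},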
{
"text": "One has the same choice when dealing with multiplicative and tensor-based models. The question is which of these composition frameworks, i.e. linear versus non-linear, provides a better choice for embedding elliptical sentences. To our knowledge, this has remained an open question: although some theoretical work has been done to model verb phrase ellipsis in compositional distributional semantics (Wijnholds and Sadrzadeh, 2018) , none of the existing datasets or evaluation methods for distributional semantics focus on elliptical phenomena.",
"cite_spans": [
{
"start": 400,
"end": 431,
"text": "(Wijnholds and Sadrzadeh, 2018)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we provide some answers. Our starting point is the lambda logical forms of sentences, e.g. those produced by the approach of Dalrymple et al. (1991) , which uses a higher order unification algorithm to resolve ellipsis. We apply to these the lambdas-to-vectors mapping of Sadrzadeh (2016, 2017) to homomorphically map the lambda terms into concrete vector embeddings resulting from a multitude of composition operators, such as addition, multiplication, and tensor-based. We work with four vector spaces (count-based, Word2Vec, GloVe, Fast-Text) and three different verb embeddings, and contrast our compositional models with state of the art holistic sentence encoders.",
"cite_spans": [
{
"start": 140,
"end": 163,
"text": "Dalrymple et al. (1991)",
"ref_id": "BIBREF8"
},
{
"start": 287,
"end": 309,
"text": "Sadrzadeh (2016, 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate the sentence embeddings by using them in a verb disambiguation and in a sentence similarity task, created by extending previous SVO tasks from Grefenstette and Sadrzadeh (2011a) and Kartsaklis and Sadrzadeh (2013) to an elliptical setting, and obtaining new human judgements using the Amazon Mechanical Turk crowdsourcing tool. Our experiments show that in both tasks, the models that use a non-linear form of composition perform better than the models whose composition framework is linear, suggesting that resolving ellipsis contributes to the quality of the sentence embedding.",
"cite_spans": [
{
"start": 155,
"end": 189,
"text": "Grefenstette and Sadrzadeh (2011a)",
"ref_id": "BIBREF13"
},
{
"start": 194,
"end": 225,
"text": "Kartsaklis and Sadrzadeh (2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Single-Word Embeddings: Distributional semantics on the word level relies on the embedding of word meaning in a vectorial form: by taking context words as the basis of a vector space one computes the vector components of each word by considering its distribution among corpus data. Then a similarity measure is defined on the vector space via the cosine similarity. In a count-based model, the context is taken to be a linear window and the corpus is traversed to collect raw cooccurrence counts. Then, a weighting scheme is applied to smooth the raw frequencies in the meaning representation. More discussion on countbased vector space models can be found in (Turney and Pantel, 2010) , and a systematic study of the parameters of count-based word embeddings is given by (Kiela and Clark, 2014) .",
"cite_spans": [
{
"start": 660,
"end": 685,
"text": "(Turney and Pantel, 2010)",
"ref_id": "BIBREF45"
},
{
"start": 772,
"end": 795,
"text": "(Kiela and Clark, 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "With the rise of deep learning techniques, much attention has been given to neural word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) , which try to predict rather than observe, the context of a word by optimising an objective function based on the probability of observing a context.",
"cite_spans": [
{
"start": 99,
"end": 121,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 122,
"end": 146,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF39"
},
{
"start": 147,
"end": 171,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Compositional Models: The key idea of compositional models is that the meaning of elementary constituents can be combined in a structured way to obtain a representation for larger phrases. In a distributional setting, having a compositional operator is imperative: a data-driven model would not be adequate given the sparsity of full sentences in a corpus. Moreover, it is not clear that sentences follow the distributional hypothesis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Concrete composition operators can roughly be classified as simple and tensor-based. Simple models add or multiply the word vectors to obtain a sentence vector. The work of Mitchell and Lapata (2010) experiments with these models. Tensorbased models differ in that they represent complex words as vectors of a higher order: Baroni and Zamparelli (2010) represents adjectives as matrices which, applied to a word vector produce a vector representation of the compound adjective-noun combination. The account of (Coecke et al., 2010 (Coecke et al., , 2013 Clark, 2015) generalises this to higher-order tensors, e.g. cubes for transitive verbs and hypercubes for ditransitive verbs. The benefit of a type-driven approach over the simple models is that they respect the grammatical structure of sentences: the meaning of \"man bites dog\" is distinct from that of \"dog bites man\" whereas in an additive/multiplicative model they would be identical. The trade-off is that the tensors themselves have to be learnt; where Baroni and Zamparelli (2010) apply regression learning to learn the content of adjective matrices, for transitive verbs there have been several approaches using multistep regression learning , relational learning (Grefenstette and Sadrzadeh, 2011a) , or a combination of co-occurrence information with machine learning techniques (Polajnar et al., 2014a,b; Fried et al., 2015) . A comparative study between count-based and neural embeddings in a compositional setting was carried out by (Milajevs et al., 2014) .",
"cite_spans": [
{
"start": 173,
"end": 199,
"text": "Mitchell and Lapata (2010)",
"ref_id": "BIBREF35"
},
{
"start": 324,
"end": 352,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF0"
},
{
"start": 510,
"end": 530,
"text": "(Coecke et al., 2010",
"ref_id": "BIBREF6"
},
{
"start": 531,
"end": 553,
"text": "(Coecke et al., , 2013",
"ref_id": "BIBREF5"
},
{
"start": 554,
"end": 566,
"text": "Clark, 2015)",
"ref_id": "BIBREF4"
},
{
"start": 1013,
"end": 1041,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF0"
},
{
"start": 1226,
"end": 1261,
"text": "(Grefenstette and Sadrzadeh, 2011a)",
"ref_id": "BIBREF13"
},
{
"start": 1343,
"end": 1369,
"text": "(Polajnar et al., 2014a,b;",
"ref_id": null
},
{
"start": 1370,
"end": 1389,
"text": "Fried et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 1500,
"end": 1523,
"text": "(Milajevs et al., 2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Neural composition turns the problem of compositionality around by learning the composition operator instead of predicting the result. Examples are Skip-Thought Vectors (Kiros et al., 2015) , the Distributed Bag of Words model (Le and Mikolov, 2014) , InferSent (Conneau et al., 2017) , and Universal Sentence Encoder (Cer et al., 2018) .",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "(Kiros et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 227,
"end": 249,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF27"
},
{
"start": 262,
"end": 284,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 318,
"end": 336,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Ellipsis, Formally: There exists many formal approaches to ellipsis and anaphora in the literature. These have generally taken either a syntactic or a semantic form 1 . Examples of the syntactic approaches are in the work of Hendriks and Dekker (1995) ; Morrill and Valent\u00edn (2015) ; J\u00e4ger (2006) ; Kubota and Levine (2017) ; these use directional extensions of categorial grammars that allow for the syntactic types at the site of ellipsis be unified with copies of the types at the antecedent of the elliptical phrase. Another approach deletes the syntactic structure at the ellipsis site and reconstruct it by copying across the antecedent structure (Fiengo and May, 1994; Merchant, 2004) .",
"cite_spans": [
{
"start": 225,
"end": 251,
"text": "Hendriks and Dekker (1995)",
"ref_id": "BIBREF15"
},
{
"start": 254,
"end": 281,
"text": "Morrill and Valent\u00edn (2015)",
"ref_id": "BIBREF36"
},
{
"start": 284,
"end": 296,
"text": "J\u00e4ger (2006)",
"ref_id": "BIBREF17"
},
{
"start": 299,
"end": 323,
"text": "Kubota and Levine (2017)",
"ref_id": "BIBREF25"
},
{
"start": 653,
"end": 675,
"text": "(Fiengo and May, 1994;",
"ref_id": "BIBREF9"
},
{
"start": 676,
"end": 691,
"text": "Merchant, 2004)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Semantic approaches (Dalrymple et al., 1991; Szabolcsi, 1987; Pulman, 1997) assume that ellipsis involves underspecification of content and resolve this by producing a predicate via a suitable abstraction from the antecedent. For instance, the elliptical phrase (b) \"Cats chase dogs, children do too\", will take an initial logical form (b 1 ); a resolution step (b 2 ) provides it with the lambda term in (b 3 ), which constitutes its final semantic form:",
"cite_spans": [
{
"start": 20,
"end": 44,
"text": "(Dalrymple et al., 1991;",
"ref_id": "BIBREF8"
},
{
"start": 45,
"end": 61,
"text": "Szabolcsi, 1987;",
"ref_id": "BIBREF44"
},
{
"start": 62,
"end": 75,
"text": "Pulman, 1997)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "(b 1 ) chase(cats, dogs) \u2227 P (children) (b 2 ) P = \u03bbx.chase(x, dogs) (b 3 ) (b 1 ) ; \u03b2 chase(cats, dogs) \u2227 chase(children, dogs)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The ambiguous example (d) \"Cats chase their tails, dogs too\" is treated similarly, but can now obtain its respective strict and sloppy readings by producing predicates (d 1 ) and (d 2 ) below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "(d 2 ) P = \u03bbx.chase(x, tail(cats)) (d 3 ) P = \u03bbx.chase(x, tail(x))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Mixed syntactic/semantic approaches have also been proposed to cover wider ranges of phenomena; see Kempson et al. (2015) for an overview. The only existing work attempting to join ellipsis analysis with vector embeddings is the proposal of (Kartsaklis et al., 2016) , which is preliminary work and gives unwanted results 2 . Below, we develop a new such approach.",
"cite_spans": [
{
"start": 100,
"end": 121,
"text": "Kempson et al. (2015)",
"ref_id": "BIBREF22"
},
{
"start": 241,
"end": 266,
"text": "(Kartsaklis et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Vectors and their basic operations can be emulated using a lambda calculus with constants for the relevant operations, as shown in (Muskens and Sadrzadeh, 2016) . They assume a type I (a finite index set) and R (modelling the real numbers) and model any vector as a term of type V := IR; that is, as a function from indices to real numbers. Matrices can then be represented by types M := IIR and in general a tensor of rank n will have type T n := I 1 ...I n R. The standard operations like scalar multiplication, addition, element wise multiplication and tensor contraction can be modelled with lambda terms as follows:",
"cite_spans": [
{
"start": 131,
"end": 160,
"text": "(Muskens and Sadrzadeh, 2016)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
{
"text": "\u2022 := \u03bbrvi.r \u2022 v i : RV V + := \u03bbvwi.v i + w i : V V V := \u03bbvwi.v i \u2022 w i : V V V \u00d7 1 := \u03bbmvij. j m ij \u2022 v j : M V V \u00d7 2 := \u03bbcvijk. k c ijk \u2022 v k : T 3 V M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
{
"text": "The first three definitions above extend the arithmetic operations of addition and multiplication on real numbers in R to lists of numbers in IR and define corresponding definitions on vectors, and so defines the pointwise multiplication of two vectors. The operation \u00d7 1 defines matrix multiplication; \u00d7 2 defines the tensor contraction between a cube c (in The vector semantics of a lambda term m is computed by taking a homomorphic image over the set of its constants c. This image is computed compositionally from the vector or tensor embeddings of the constants c of m via their homomorphic images H(c), whose types are denoted by T (c).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
{
"text": "I 3 R) and a list of numbers v. c H(c) T (c) cn cn V adj \u03bbv.(adj \u00d71 v) V V adv \u03bbv.(adv \u00d71 v) V V itv \u03bbv.(itv \u00d71 v) V V tv \u03bbuv.(tv \u00d72 v) \u00d71 u V V V coord \u03bbP.\u03bbQ.P \u2207Q V V V quant \u03bbvZ.Z(quant \u00d71 v) V (V V )V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
{
"text": "Examples of these are given in Table 1 for a tensor-based composition model, where the boldface c denotes the vector/tensor embedding of c. Using this table, we obtain homomorphic images of any lambda term over the constants. For instance, the lambda term of our exemplary resolved ellipsis phrase (b 3 ) chase(cats, dogs) \u2227 chase(children, dogs) is given the following semantic, obtained by computing H(b 3 ):",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 44,
"text": "Table 1 for a",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
{
"text": "((chase \u00d7 2 dogs) \u00d7 1 cats)\u2207 ((chase \u00d7 2 dogs) \u00d7 1 children)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
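{
"text": "A small numpy sketch of this computation, assuming toy random placeholder embeddings and interpreting $\\nabla$ as addition; np.einsum performs the $\\times_2$ contraction and matrix-vector multiplication performs $\\times_1$:\nimport numpy as np\nd = 4\ncats, dogs, children = np.random.rand(d), np.random.rand(d), np.random.rand(d)\nchase = np.random.rand(d, d, d)             # cube embedding of the transitive verb\nvp = np.einsum('ijk,k->ij', chase, dogs)    # chase x_2 dogs, a matrix\nclause_1 = vp @ cats                        # (chase x_2 dogs) x_1 cats\nclause_2 = vp @ children                    # (chase x_2 dogs) x_1 children\nsentence = clause_1 + clause_2              # nabla interpreted as addition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},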
{
"text": "The constituents of the H(c) entries of Table 1 are only exemplary. Many other interpretations are possible. For instance, taking vector embeddings for all words and replacing all tensor contractions and \u2207 by + defines a purely additive model. The concrete models for transitive sentences that were evaluated by Milajevs et al. (2014) can all be derived by varying the H(c) entries. Below are the sentences obtained by using the Copy Object (CO), Frobenius Additive (FA), Frobenius Multiplicative (FM) and Frobenius Outer (FO) instantiations of the verb, respectively:",
"cite_spans": [
{
"start": 312,
"end": 334,
"text": "Milajevs et al. (2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
{
"text": "CO : \u03bbos.o (verb \u00d7 s) FA : \u03bbos.s (verb \u00d7 o) + o (verb \u00d7 s) FM : \u03bbos.s (verb \u00d7 o) o (verb \u00d7 s) FO : \u03bbos.s (verb \u00d7 o) \u2297 o (verb \u00d7 s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
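{
"text": "These four instantiations can be sketched in numpy, assuming a placeholder verb matrix V and random subject/object vectors; $\\odot$ becomes elementwise multiplication, $\\times$ matrix-vector multiplication, and the elliptical extension joins two such clause embeddings with addition or multiplication:\nimport numpy as np\nd = 4\ns, o, s_star = np.random.rand(d), np.random.rand(d), np.random.rand(d)\nV = np.random.rand(d, d)                                 # placeholder matrix embedding of the verb\ndef co(s, o): return o * (V @ s)                         # Copy Object\ndef fa(s, o): return s * (V @ o) + o * (V @ s)           # Frobenius Additive\ndef fm(s, o): return s * (V @ o) * (o * (V @ s))         # Frobenius Multiplicative\ndef fo(s, o): return np.outer(s * (V @ o), o * (V @ s))  # Frobenius Outer\nelliptical = fa(s, o) + fa(s_star, o)                    # conjunction of the two clauses as addition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},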
{
"text": "The vector semantics of the extensions of transitive sentences with VP elliptical phrases are obtained by taking each of the above as the semantics of each conjunct of the lambda logical form and interpreting the conjunction operation of \u2227 as either sum or multiplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embeddings for Elliptical Phrases",
"sec_num": "3"
},
{
"text": "For the evaluation of the model(s) in the previous section, we built two new datasets and experimented with count based and neural vector spaces, and sentence encoders. 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "In order to experiment with ellipsis, we extended the verb disambiguation dataset of Grefenstette and Sadrzadeh (2011a) and the transitive sentence similarity dataset of Kartsaklis and Sadrzadeh (2013) , henceforth GS2011 and KS2013.",
"cite_spans": [
{
"start": 85,
"end": 119,
"text": "Grefenstette and Sadrzadeh (2011a)",
"ref_id": "BIBREF13"
},
{
"start": 170,
"end": 201,
"text": "Kartsaklis and Sadrzadeh (2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building new datasets",
"sec_num": "4.1"
},
{
"text": "The GS2011 verb disambiguation dataset contains 10 verbs, each with two possible interpretations. For each verb v and its two interpretations v 1 and v 2 , the dataset contains human similarity judgments for 10 subject-object combinations. For instance, for the verb meet -ambiguous between visit and satisfy -the dataset contains the pairs system meet requirements, system satisfy requirements and system meet requirements, system visit requirements . The more likely interpretation is marked as HIGH whereas the unlikely interpretation is marked LOW.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GS2011",
"sec_num": "4.1.1"
},
{
"text": "We extended this dataset as follows: for each combination of a verb triple We selected two new subjects for each combination, and in this way we obtained a dataset of roughly 400 entries. New human judgments were collected through Amazon Mechanical Turk, by prepending the to each noun and putting the phrase in the past tense. As with the original dataset, participants were asked to judge the similarity between sentence pairs using a discrete number between 1 and 7; 1 for highly dissimilar, 7 for highly similar. By inserting gold standard pairs of identical sentences we checked if participants were trustworthy. We collected 25 judgments per sentence pair but excluded participants that annotated less than 20 entries of the total dataset. We ended up with 55 different participants who ranked more than 20 entries of the total dataset, to give a final amount of ca. 9200 annotations. As an example, the verb show was a very hard case to disambiguate in the GS2011 dataset: child show sign had an average score of 2.5 with both child picture sign and child express sign. In the new dataset, with the extra subject patient, it got much clearer that the verb had to be interpreted as express with an average score of 5.869, versus 4.875 for picture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GS2011",
"sec_num": "4.1.1"
},
{
"text": "The KS2013 sentence similarity dataset contains 108 transitive sentence pairs annotated with human similarity judgments. As opposed to the GS2011 dataset, subjects and objects of each sentence pair are not the same, so several different contexts get compared to one another. In this sense, the KS2013 dataset aims to investigate the role of content of individual words versus the role of composition, as the similarity of sentences might be predictable from the contribution of individual words rather than the specific way of composing them. We extend this dataset to cover VP ellipsis by following a similar procedure as for GS2011. For each transitive sentence of the form s v o in the dataset, we selected a new subject s * from a list of most frequent subjects of the verb 5 and built elliptical entries s v o and s * does too in such a way that the meaning of the original transitive sentence got changed as little as possible and that the resulting elliptical phrase made sense. We then considered every transitive sentence pair in the dataset and added the new respective subjects to both sentences. For example, for the pair school encourage child, employee leave company we selected parent and student to get the new pair school encourage child and parent does too, employee leave company and student does too",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KS2013",
"sec_num": "4.1.2"
},
{
"text": "We chose two subjects for every original sentence, generating four possibilities for each sentence pair, and a new dataset of 432 entries. This dataset was also annotated using Amazon Mechanical Turk, after putting each verb in the past tense and prepending the to each noun in the 5 Again taken from the ukWaC+Wackypedia corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KS2013",
"sec_num": "4.1.2"
},
{
"text": "dataset. Gold standard pairs of identical sentences were inserted to validate trustworthiness of participants. The final dataset contains ca. 9800 annotations by 42 different participants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "KS2013",
"sec_num": "4.1.2"
},
{
"text": "To provide a comprehensive study with robust results, we used four vector spaces: a count based vector space, and newly trained Word2Vec, GloVe, and FastText spaces, as detailed below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Spaces",
"sec_num": "4.2"
},
{
"text": "Count-Based: We used the combined ukWaC and Wackypedia corpora 6 to extract raw cooccurrence counts, using as a basis the 2000 most frequently occurring tokens (after excluding the 50 most frequent ones). When extracting counts, we disregarded a list of stopwords that do not contribute to the content of the vectors. We used a context window of 5 around the focus word, and PPMI as weighting scheme. These settings were use in the original KS2013 dataset (Kartsaklis and Sadrzadeh, 2013) .",
"cite_spans": [
{
"start": 456,
"end": 488,
"text": "(Kartsaklis and Sadrzadeh, 2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Spaces",
"sec_num": "4.2"
},
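{
"text": "A compact numpy sketch of the PPMI weighting step, assuming a raw co-occurrence count matrix as input (the window-based count extraction itself is omitted):\nimport numpy as np\n\ndef ppmi(counts):\n    # counts: (n_words, n_contexts) raw co-occurrence matrix\n    p_wc = counts / counts.sum()\n    p_w = p_wc.sum(axis=1, keepdims=True)\n    p_c = p_wc.sum(axis=0, keepdims=True)\n    with np.errstate(divide='ignore', invalid='ignore'):\n        pmi = np.log(p_wc / (p_w * p_c))\n    return np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Spaces",
"sec_num": "4.2"
},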
{
"text": "Word2Vec: The Word2Vec embeddings we used were trained with the continuous bag of words model of (Mikolov et al., 2013) (CBOW). We trained this model on the combined and lemmatised ukWaC and Wackypedia corpora, using the implementation for Python available in the gensim package 7 , with a minimum word frequency of 50, a window of 5, dimensionality 300, and 5 training iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Spaces",
"sec_num": "4.2"
},
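{
"text": "A minimal gensim sketch of this training setup, assuming gensim 4.x (where the relevant parameters are named vector_size and epochs; in gensim 3.x they are size and iter). The toy corpus and the lowered min_count are placeholders so the snippet runs; the reported spaces used the full lemmatised ukWaC+Wackypedia corpora with a minimum frequency of 50:\nfrom gensim.models import Word2Vec\n\n# placeholder corpus: an iterable of tokenised, lemmatised sentences\ncorpus = [['the', 'cat', 'chase', 'the', 'dog'], ['the', 'dog', 'chase', 'the', 'cat']]\nmodel = Word2Vec(sentences=corpus, vector_size=300, window=5, min_count=1, sg=0, epochs=5)  # sg=0 selects CBOW\ncat_vector = model.wv['cat']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Spaces",
"sec_num": "4.2"
},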
{
"text": "GloVe: The GloVe model (Pennington et al., 2014) considers the ratio of co-occurrence probabilities by minimising the least-squares objective between the dot product of two word embeddings and the log-probability of the words' cooccurrence. We trained a GloVe space on the combined and lemmatised ukWaC and Wackypedia corpora, using the code provided by the original authors 8 . Similar to the Word2Vec settings above, we trained 300 dimensional vectors with a minimum word frequency of 50 and a window of 5, but we trained with 15 iterations.",
"cite_spans": [
{
"start": 23,
"end": 48,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Spaces",
"sec_num": "4.2"
},
{
"text": "FastText: The FastText vectors are like Word2Vec, except the word vector takes into account subword information: words are represented as n-grams, for which vectors are trained. The final word vector will then be the sum of its constituent n-gram vectors (Bojanowski et al., 2017) . We trained a FastText space with the same settings as the Word2Vec space (CBOW, minimum word frequency 50, dimensions 300, window 5, with 5 iterations), again using gensim.",
"cite_spans": [
{
"start": 255,
"end": 280,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vector Spaces",
"sec_num": "4.2"
},
{
"text": "In order to work with tensor-based models we had to represent verbs as matrices rather than as vectors. We generated verb tensors using two methods that have been used previously in the literature (Grefenstette and Sadrzadeh, 2011a; .",
"cite_spans": [
{
"start": 197,
"end": 232,
"text": "(Grefenstette and Sadrzadeh, 2011a;",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Matrices",
"sec_num": "4.2.1"
},
{
"text": "Relational: For each verb, its corresponding matrix is obtained by summing over the tensor product of the respective subject and object vectors of the verb (subjects and objects collected from the corpus):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Matrices",
"sec_num": "4.2.1"
},
{
"text": "verb = i subj i \u2297 obj i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Matrices",
"sec_num": "4.2.1"
},
{
"text": "Kronecker: For each verb, its corresponding matrix is obtained by taking the tensor product of the verb vector with itself:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Matrices",
"sec_num": "4.2.1"
},
{
"text": "verb = \u2212 \u2212 \u2192 verb \u2297 \u2212 \u2212 \u2192 verb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Matrices",
"sec_num": "4.2.1"
},
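{
"text": "Both constructions can be sketched in numpy, assuming placeholder word vectors and a placeholder list of corpus subject/object pairs for the verb:\nimport numpy as np\nd = 300\nsubj_vecs = np.random.rand(10, d)   # vectors of subjects observed with the verb (placeholders)\nobj_vecs = np.random.rand(10, d)    # vectors of the corresponding objects (placeholders)\nverb_vec = np.random.rand(d)        # the verb's own word vector\nrelational = np.einsum('ni,nj->ij', subj_vecs, obj_vecs)   # sum_i subj_i (tensor) obj_i\nkronecker = np.outer(verb_vec, verb_vec)                   # verb (tensor) verb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Matrices",
"sec_num": "4.2.1"
},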
{
"text": "In the case of the count-based space, we trained verb matrices of dimensions 2000 \u00d7 2000, for the neural word embeddings the matrices had dimensions 300 \u00d7 300. We also experimented with the skip-gram extension of Maillard and Clark (2015) and the plausibility model of Polajnar et al. (2014a) but excluded the results because the obtained verb matrices were far below par.",
"cite_spans": [
{
"start": 213,
"end": 238,
"text": "Maillard and Clark (2015)",
"ref_id": "BIBREF28"
},
{
"start": 269,
"end": 292,
"text": "Polajnar et al. (2014a)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Verb Matrices",
"sec_num": "4.2.1"
},
{
"text": "For the experiments, we had two main goals in mind: primarily, we wanted to verify that resolving ellipsis contributes to the performance of a compositional model. For this purpose we experimented with non-linear models, i.e. models that resolve the ellipsis (and thus use the verb and object resources twice) versus linear models, which do not resolve the ellipsis (and thus only use the verb and object once). Our second goal was to investigate whether amongst the models that resolve the ellipsis, the ones that did so in a tensor-based way, i.e. using tensors instead of vectors to represent the verbs, performed better than additive and multiplicative models, and how these compare to holistic sentence encoders. Hence, we considered three classes of models: linear vector models, nonlinear vector models and tensor-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "Linear Vector Models: These models use every resource exactly once, following the pattern \u2212 \u2192 w 1 \u2212 \u2192 w 2 ... \u2212 \u2192 w n for any sequence of words w 1 w 2 ...w n . For an elliptical phrase \"subj verb obj and subj * does too\" it will compute the vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "\u2212 \u2212 \u2192 subj \u2212 \u2212 \u2192 verb \u2212 \u2192 obj \u2212 \u2212 \u2192 and \u2212 \u2212\u2212 \u2192 subj * \u2212 \u2212 \u2192 does \u2212 \u2192 too",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "where denotes either addition or multiplication. Non-Linear Vector Models: Here, the assumption is that ellipsis is resolved but models do not respect word order. The meaning of \"subj verb obj and subj * does too\" now is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "\u2212 \u2212 \u2192 subj \u2212 \u2212 \u2192 verb \u2212 \u2192 obj \u2212 \u2212\u2212 \u2192 subj * \u2212 \u2212 \u2192 verb \u2212 \u2192 obj",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "Tensor-Based Models: These models all are assumed to resolve ellipsis and are based on various previous models (Grefenstette and Sadrzadeh, 2011b,a; Kartsaklis et al., 2012; . Essentially, the tensor-based meaning of \"subj verb obj and subj * does too\" is",
"cite_spans": [
{
"start": 111,
"end": 148,
"text": "(Grefenstette and Sadrzadeh, 2011b,a;",
"ref_id": null
},
{
"start": 149,
"end": 173,
"text": "Kartsaklis et al., 2012;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "T ( \u2212 \u2212 \u2192 subj, verb, \u2212 \u2192 obj) T ( \u2212 \u2212\u2212 \u2192 subj * , verb, \u2212 \u2192 obj)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "where T is a transitive model from (Milajevs et al., 2014) and interprets the conjunction of the two subclauses. For the verb matrix we used either the relational verb or the Kronecker verb, and for we tried both addition and multiplication. We did consider a model which simply adds or multiplies the second subject without duplicating the verb phrase, but it performed worse than non-linear addition and multiplication so we did not include it in this paper. Sentence Encoders: To compare the mentioned compositional models with state of the art neural baselines, we carried out our experiments with a four types of holistic sentence encoders, that take arbitrary text as input and produce an embedding. To properly compare with the compositional models above, we gave three different inputs to the encoders: a baseline encoding (Base), a resolved encoding (Res), and an encoding without functional words (Abl), all as below:",
"cite_spans": [
{
"start": 35,
"end": 58,
"text": "(Milajevs et al., 2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "Base: \"subj verb obj and subj * does too\" Res: \"subj verb obj and subj * verb obj\" Abl:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "\"subj verb obj subj * \" We used six concrete pretrained encoders, available online: 4800-dimensional embeddings from the Skip-Thought model 9 , 300-dimensional embeddings from two Doc2Vec implementations (Lau and Baldwin, 2016) 10 , 4096-dimensional embeddings from two InferSent encoders 11 , and 512-dimensional embeddings from Universal Sentence Encoder 12 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concrete Models",
"sec_num": "4.3"
},
{
"text": "To validate the quality of the trained word spaces, we evaluate on several standard word similarity tasks: we used Rubenstein & Goodenough (RG, 1965 ), WordSim353 (WS353, 2001 ), Miller & Charles (MC, 1991 , SimLex-999 (SL999, 2015) , and the MEN dataset (Bruni et al., 2012) . The results are displayed in Table 2 , for the spaces described in the previous section. Table 2 : Spearman \u03c1 scores on word similarity tasks.",
"cite_spans": [
{
"start": 115,
"end": 148,
"text": "Rubenstein & Goodenough (RG, 1965",
"ref_id": null
},
{
"start": 149,
"end": 175,
"text": "), WordSim353 (WS353, 2001",
"ref_id": null
},
{
"start": 176,
"end": 205,
"text": "), Miller & Charles (MC, 1991",
"ref_id": null
},
{
"start": 208,
"end": 232,
"text": "SimLex-999 (SL999, 2015)",
"ref_id": null
},
{
"start": 255,
"end": 275,
"text": "(Bruni et al., 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 307,
"end": 314,
"text": "Table 2",
"ref_id": null
},
{
"start": 367,
"end": 374,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
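{
"text": "The word and sentence experiments report Spearman $\\rho$ between the model's cosine similarities and the averaged human judgements; a small scipy sketch of this evaluation, with placeholder similarity values:\nimport numpy as np\nfrom scipy.stats import spearmanr\n\ndef cosine(u, v):\n    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))\n\n# placeholder per-pair scores: model cosine similarities vs. averaged human judgements (1-7)\nmodel_sims = [0.82, 0.35, 0.60]\nhuman_scores = [6.1, 2.4, 4.9]\nrho, p_value = spearmanr(model_sims, human_scores)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},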
{
"text": "Verb Disambiguation: Table 3 shows the results of the linear, non-linear and tensor-based models for this task, compared against a baseline in which only the verb vector or verb matrix is compared. Our first observation is that generally, the highest performing models were tensor-based. The highest found correlation score was 0.5385 in the count based space for a tensor-based model (CO model above, Kronecker matrix, \u2207 = +), with the Frobenius Additive model giving the second best result of 0.5263 (FA model above, Kronecker matrix, \u2207 = +). For the neural spaces, the highest performing models were mostly tensor-based as well; they were always the Frobenius Additive (FA) model and the Frobenius Outer (FO) model, using the relational tensor and addition for the coordinator, except in the case of GloVe, where the Copy Object (CO) model was the second best. The only exception to this observation is the GloVe space, for which the baseline Vector Only model in fact has a higher correlation than any other model on that space. Our second observation is that the non-linear variants of the additive and multiplicative models (which resolve ellipsis but in a naive way) show an increased performance over the linear models (which do not resolve ellipsis). All of this holds for all the four vector spaces, except for the Fast-Text space where the linear multiplicative model achieves significantly higher correlation (0.2928) than its non-linear counterpart (0.0440). Overall, these results suggests that a logical resolving of ellipsis and further grammatical sensitivity benefits the performance of composition.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "One interesting fact about our results is that the best compositional methods across the board were those that interpret the coordinator 'and' as addition; in set-theoretic semantics one interprets this coordinator as set intersection, which corresponds to multiplication rather than addition in a vectorial setting. We suggest that the feature intersection approach using multiplication leads to sparsity in the resulting vectorial representation, which then has a negative effect on the overall result. This would explain the case of FastText, since those vectors take into account subword information one would expect them to be more finegrained and therefore conflate more of their features under multiplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The choice of verb matrix was mixed: for the count-based models the Kronecker matrix worked best, for the neural embeddings it was best to use the relational matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In comparison, the sentence encoder results of Table 4 show the same trend that suggests that resolving ellipsis improves the quality of the embeddings: with the exception of the two InferSent encoders, the resolved models gave a higher correlation than their linear baseline. However, none of the encoder models come near the results achieved using the compositional models. Since the verb disambiguation dataset contains pairs of sentence that only differ in the verb, the task becomes very much grammar-oriented, and so we argue that the tensor-based models work better since they explicitly emphasise syntactic structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Sentence Similarity: For the extension of the KS2013 sentence similarity dataset, the results are shown in Table 5 . We again wanted to see if resolving ellipsis benefits the compositional process. This was in general true, although we observed a different pattern to the previous experiment.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In all cases, except for the FastText space, we saw that non-linear models in fact perform better than their linear counterparts. But this time the best tensor-based models only outperformed addition for the count-based space: the best models scored 0.7410 and 0.7370 (respectively for the FO and FA models above, Kronecker matrix, \u2207 = ). Both Word2Vec and GloVe worked best with a non-linear additive model, with Word2Vec achieving the overall highest correlation score of 0.7617, and GloVe achieving 0.7103. For FastText, the highest score of 0.7408 was achieved by linear addition. What is more, the multiplicative model did not benefit from a non-linear approach in the case of GloVe (from 0.3666 to 0.2439), and the additive model had a similar decline in performance for the count-based space (from 0.7000 to 0.6808) and FastText (0.7408 to 0.7387). We can see that for the neural word embeddings the additive models work best, with all of them seeing a drop in performance for the tensor-based models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Again, the best count-based models use the Kronecker matrix whereas the neural models benefit the most from using the relational matrix. However, this time the best count-based models used multiplication for coordination, the neural models preferring addition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "The sentence encoders worked a lot better in the similarity task, with all non-linear resolved models outperforming the baseline model, and the In-ferSent model even outperforming non-linear ad- Table 6 : Spearman \u03c1 scores for the ellipsis similarity experiment. D2V1: Doc2Vec1, D2V2: Doc2Vec 2, ST: Skip-Thought, IS1: InferSent 1, IS2: InferSent 2, USE: Universal Sentence Encoder.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "dition on a Word2Vec space. We argue this is the case for two reasons: first, the similarity dataset is more diffuse than the verb disambiguation dataset since sentence pairs now differ for every word in the sentence, giving more opportunity to exploit semantic similarity rather than syntactic similarity. Second, the embeddings from the sentence encoder are larger (4096), allowing them to effectively store more information to benefit the similarity score. Overall we conclude again that resolving ellipsis improves the performance of composition, but this time the InferSent sentence encoder seems to work best, followed by the non-linear additive compositional model on Word2Vec, with tensorbased models only performing well in a countbased space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this paper we experimented with vector space semantics for VP ellipsis, working with a large variety of compositional models. We created two new datasets and compared the performance of several compositional methods, both linear and non-linear, across four vector spaces, and against state of the art holistic sentence encoders.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our main conclusion is that resolving ellipsis improves performance: non-linear models almost always performed better than linear ones in both a verb disambiguation and a sentence similarity task. The highest performance on the verb disambiguation task was given by a grammar-driven, tensor-based model in a count-based vector space, whereas for the similarity task, the highest performance was achieved by the InferSent sentence encoder, followed by a non-linear additive model on a Word2Vec space. Although the neural word embeddings and sentence encoders were largely outperformed on the disambiguation dataset that places more emphasis on syntactic structure than on semantic similarity, they generally performed better in the sentence similarity case, where the distinction between syntactic and semantic similarity is more diffuse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Although pragmatics approaches exist(Merchant, 2010), we focus here on syntactic and semantic approaches.2 The meaning of \"Bill brought apples and John pears\" coincides with that of \"Bill and John brought apples and pears\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All models, the new datasets, and evaluation code are available at github.com/gijswijnholds/ compdisteval-ellipsis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As found in the combined ukWaC+WackyPedia corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "wacky.sslmit.unibo.it 7 radimrehurek.com/gensim 8 nlp.stanford.edu/projects/glove",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/ryankiros/skip-thoughts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/jhlau/doc2vec 11 github.com/facebookresearch/InferSent 12 tfhub.dev/google/ universal-sentence-encoder",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors gratefully acknowledge the three anonymous reviewers for their valuable comments.Mehrnoosh Sadrzadeh is grateful to the Royal Society for an International Exchange Award IE161631 -Dynamic Vector Semantics for Lambda Calculus Models of Natural Language and discussion with Reinhard Muskens in this context. Gijs Wijnholds would like to express gratitude for support by a Queen Mary Principal Studentship, and the Theory group of the School of Electronic Engineering and Computer Science at Queen Mary University of London. Both authors would like to thank Ruth Kempson and Matthew Purver for many helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183-1193. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "5",
"issue": "1",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion of Computational Linguistics, 5(1):135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Distributional semantics in technicolor",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Nam-Khanh",
"middle": [],
"last": "Tran",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "136--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguis- tics: Long Papers-Volume 1, pages 136-145. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Universal sentence encoder",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St John",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Guajardo-Cespedes",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.11175"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Vector space models of lexical meaning, chapter 16",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1002/9781118882139.ch16"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Clark. 2015. Vector space models of lexical meaning, chapter 16. John Wiley & Sons, Ltd.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lambek vs. Lambek: Functorial vector space semantics and string diagrams for Lambek calculus",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2013,
"venue": "Annals of Pure and Applied Logic",
"volume": "164",
"issue": "11",
"pages": "1079--1100",
"other_ids": {
"DOI": [
"10.1016/j.apal.2013.05.009"
]
},
"num": null,
"urls": [],
"raw_text": "Bob Coecke, Edward Grefenstette, and Mehrnoosh Sadrzadeh. 2013. Lambek vs. Lambek: Functorial vector space semantics and string diagrams for Lam- bek calculus. Annals of Pure and Applied Logic, 164(11):1079-1100.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Mathematical foundations for a compositional distributional model of meaning",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1003.4394"
]
},
"num": null,
"urls": [],
"raw_text": "Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a com- positional distributional model of meaning. arXiv preprint arXiv:1003.4394.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ellipsis and higher-order unification",
"authors": [
{
"first": "Mary",
"middle": [],
"last": "Dalrymple",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Stuart",
"suffix": ""
},
{
"first": "Fernando Cn",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1991,
"venue": "Linguistics and Philosophy",
"volume": "14",
"issue": "4",
"pages": "399--452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary Dalrymple, Stuart M Shieber, and Fernando CN Pereira. 1991. Ellipsis and higher-order unification. Linguistics and Philosophy, 14(4):399-452.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Indices and identity",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Fiengo",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Fiengo and Robert May. 1994. Indices and identity. MIT Press, Cambridge, Mass.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Placing search in context: The concept revisited",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Finkelstein",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Yossi",
"middle": [],
"last": "Matias",
"suffix": ""
},
{
"first": "Ehud",
"middle": [],
"last": "Rivlin",
"suffix": ""
},
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "Gadi",
"middle": [],
"last": "Wolfman",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 10th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "406--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Ey- tan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th In- ternational Conference on World Wide Web, pages 406-414. ACM.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Low-Rank tensors for verbs in compositional distributional semantics",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fried",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Polajnar",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "731--736",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2120"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Fried, Tamara Polajnar, and Stephen Clark. 2015. Low-Rank tensors for verbs in compositional distributional semantics. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 2: Short Papers), volume 2, pages 731-736.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multi-Step regression learning for compositional distributional semantics",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Yao-Zhong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh Sadrzadeh, and Marco Baroni. 2013. Multi-Step regression learning for composi- tional distributional semantics. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1394--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011a. Experimental support for a categorical com- positional distributional model of meaning. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1394-1404. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Experimenting with transitive verbs in a Dis-CoCat",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "62--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011b. Experimenting with transitive verbs in a Dis- CoCat. In Proceedings of the GEMS 2011 Work- shop on GEometrical Models of Natural Language Semantics, pages 62-66. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Links without locations",
"authors": [
{
"first": "Herman",
"middle": [],
"last": "Hendriks",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Dekker",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Tenth Amsterdam Colloquium",
"volume": "",
"issue": "",
"pages": "339--358",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herman Hendriks and Paul Dekker. 1995. Links with- out locations. In Proceedings of the Tenth Amster- dam Colloquium, pages 339-358. Citeseer.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "4",
"pages": "665--695",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00237"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Anaphora and type logical grammar",
"authors": [
{
"first": "Gerhard",
"middle": [],
"last": "J\u00e4ger",
"suffix": ""
}
],
"year": 2006,
"venue": "Trends in Logic",
"volume": "24",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/1-4020-3905-0"
]
},
"num": null,
"urls": [],
"raw_text": "Gerhard J\u00e4ger. 2006. Anaphora and type logical gram- mar, volume 24. Trends in Logic. Springer Science & Business Media.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Verb phrase ellipsis using Frobenius algebras in categorical compositional distributional semantics",
"authors": [
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2016,
"venue": "European Summer School on Logic, Language and Information",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitri Kartsaklis, Matthew Purver, and Mehrnoosh Sadrzadeh. 2016. Verb phrase ellipsis using Frobe- nius algebras in categorical compositional distribu- tional semantics. DSALT Workshop, European Sum- mer School on Logic, Language and Information.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Prior disambiguation of word tensors for constructing sentence vectors",
"authors": [
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1590--1601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2013. Prior disambiguation of word tensors for construct- ing sentence vectors. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1590-1601.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A study of entanglement in a categorical framework of natural language",
"authors": [
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 11th Workshop on Quantum Physics and Logic (QPL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2014. A study of entanglement in a categorical framework of natural language. In Proceedings of the 11th Work- shop on Quantum Physics and Logic (QPL). Kyoto Japan.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A unified sentence space for categorical distributional-compositional semantics: Theory and experiments",
"authors": [
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of 24th International Conference on Computational Linguistics (COLING): Posters",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Stephen Pulman. 2012. A unified sentence space for categor- ical distributional-compositional semantics: Theory and experiments. In Proceedings of 24th Inter- national Conference on Computational Linguistics (COLING): Posters. Mumbai, India.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Ellipsis",
"authors": [
{
"first": "Ruth",
"middle": [],
"last": "Kempson",
"suffix": ""
},
{
"first": "Ronnie",
"middle": [],
"last": "Cann",
"suffix": ""
},
{
"first": "Arash",
"middle": [],
"last": "Eshghi",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Gregoromichelaki",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2015,
"venue": "Handbook of Contemporary Semantic Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruth Kempson, Ronnie Cann, Arash Eshghi, Eleni Gregoromichelaki, and Matthew Purver. 2015. El- lipsis. In S. Lappin and C. Fox, editors, Hand- book of Contemporary Semantic Theory, 2nd edi- tion, chapter 4. Wiley.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A systematic study of semantic vector space model parameters",
"authors": [
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)",
"volume": "",
"issue": "",
"pages": "21--30",
"other_ids": {
"DOI": [
"10.3115/v1/W14-1503"
]
},
"num": null,
"urls": [],
"raw_text": "Douwe Kiela and Stephen Clark. 2014. A systematic study of semantic vector space model parameters. In Proceedings of the 2nd Workshop on Continu- ous Vector Space Models and their Compositionality (CVSC), pages 21-30.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Skip-Thought vectors",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3294--3302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-Thought vectors. In Advances in Neural Information Processing Sys- tems, pages 3294-3302.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Pseudogapping as pseudo-VP-ellipsis",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Kubota",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Levine",
"suffix": ""
}
],
"year": 2017,
"venue": "Linguistic Inquiry",
"volume": "48",
"issue": "2",
"pages": "213--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yusuke Kubota and Robert Levine. 2017. Pseudo- gapping as pseudo-VP-ellipsis. Linguistic Inquiry, 48(2):213-257.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "An empirical evaluation of doc2vec with practical insights into document embedding generation",
"authors": [
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "78--86",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1609"
]
},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau and Timothy Baldwin. 2016. An empiri- cal evaluation of doc2vec with practical insights into document embedding generation. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 78-86.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In Inter- national Conference on Machine Learning, pages 1188-1196.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning adjective meanings with a tensor-based skip-gram model",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Maillard",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "327--331",
"other_ids": {
"DOI": [
"10.18653/v1/K15-1035"
]
},
"num": null,
"urls": [],
"raw_text": "Jean Maillard and Stephen Clark. 2015. Learning adjective meanings with a tensor-based skip-gram model. In Proceedings of the Nineteenth Confer- ence on Computational Natural Language Learning, pages 327-331.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Fragments and ellipsis. Linguistics and Philosophy",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Merchant",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "27",
"issue": "",
"pages": "661--738",
"other_ids": {
"DOI": [
"10.1007/s10988-005-7378-3"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Merchant. 2004. Fragments and ellipsis. Lin- guistics and Philosophy, 27:661-738.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Three types of ellipsis",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Merchant",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Merchant. 2010. Three types of ellipsis.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in Neural Information Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Evaluating neural word representations in tensor-based compositional settings",
"authors": [
{
"first": "Dmitrijs",
"middle": [],
"last": "Milajevs",
"suffix": ""
},
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "708--719",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitrijs Milajevs, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Matthew Purver. 2014. Evaluating neural word representations in tensor-based compo- sitional settings. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 708-719.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Contextual correlates of semantic similarity",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Walter",
"middle": [
"G"
],
"last": "Charles",
"suffix": ""
}
],
"year": 1991,
"venue": "Language and Cognitive Processes",
"volume": "6",
"issue": "1",
"pages": "1--28",
"other_ids": {
"DOI": [
"10.1080/01690969108406936"
]
},
"num": null,
"urls": [],
"raw_text": "George A Miller and Walter G Charles. 1991. Contex- tual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1-28.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Composition in distributional models of semantics",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "34",
"issue": "8",
"pages": "1388--1429",
"other_ids": {
"DOI": [
"10.1111/j.1551-6709.2010.01106.x"
]
},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Sci- ence, 34(8):1388-1429.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Computational coverage of TLG: Nonlinearity",
"authors": [
{
"first": "Glyn",
"middle": [],
"last": "Morrill",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Valent\u00edn",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NLCS'15. Third Workshop on Natural Language and Computer Science",
"volume": "32",
"issue": "",
"pages": "51--63",
"other_ids": {
"DOI": [
"10.29007/96j5"
]
},
"num": null,
"urls": [],
"raw_text": "Glyn Morrill and Oriol Valent\u00edn. 2015. Computational coverage of TLG: Nonlinearity. In Proceedings of NLCS'15. Third Workshop on Natural Language and Computer Science, volume 32, pages 51-63. EasyChair Publications.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Context update for lambdas and vectors",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Muskens",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Logical Aspects of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "247--254",
"other_ids": {
"DOI": [
"10.1007/978-3-662-53826-5_15"
]
},
"num": null,
"urls": [],
"raw_text": "Reinhard Muskens and Mehrnoosh Sadrzadeh. 2016. Context update for lambdas and vectors. In Interna- tional Conference on Logical Aspects of Computa- tional Linguistics, pages 247-254. Springer.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Lambdas, vectors, and word meaning in context",
"authors": [
{
"first": "Reinhard",
"middle": [],
"last": "Muskens",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 21st Amsterdam Colloquium",
"volume": "",
"issue": "",
"pages": "65--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reinhard Muskens and Mehrnoosh Sadrzadeh. 2017. Lambdas, vectors, and word meaning in context. In Proceedings of the 21st Amsterdam Colloquium, pages 65-74.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Reducing dimensions of tensors in typedriven distributional semantics",
"authors": [
{
"first": "Tamara",
"middle": [],
"last": "Polajnar",
"suffix": ""
},
{
"first": "Luana",
"middle": [],
"last": "Fagarasan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1036--1046",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1111"
]
},
"num": null,
"urls": [],
"raw_text": "Tamara Polajnar, Luana Fagarasan, and Stephen Clark. 2014a. Reducing dimensions of tensors in type- driven distributional semantics. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1036- 1046.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Using sentence plausibility to learn the semantics of transitive verbs",
"authors": [
{
"first": "Tamara",
"middle": [],
"last": "Polajnar",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1411.7942"
]
},
"num": null,
"urls": [],
"raw_text": "Tamara Polajnar, Laura Rimell, and Stephen Clark. 2014b. Using sentence plausibility to learn the semantics of transitive verbs. arXiv preprint arXiv:1411.7942.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Higher order unification and the interpretation of focus. Linguistics and Philosophy",
"authors": [
{
"first": "Stephen",
"middle": [
"G"
],
"last": "Pulman",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "20",
"issue": "",
"pages": "73--115",
"other_ids": {
"DOI": [
"10.1023/A:1005394619746"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen G. Pulman. 1997. Higher order unification and the interpretation of focus. Linguistics and Philoso- phy, 20(1):73-115.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Contextual correlates of synonymy",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Rubenstein",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Goodenough",
"suffix": ""
}
],
"year": 1965,
"venue": "Communications of the ACM",
"volume": "8",
"issue": "10",
"pages": "627--633",
"other_ids": {
"DOI": [
"10.1145/365628.365657"
]
},
"num": null,
"urls": [],
"raw_text": "Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communica- tions of the ACM, 8(10):627-633.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Bound variables in syntax (are there any?)",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Szabolcsi",
"suffix": ""
}
],
"year": 1987,
"venue": "Sixth Amsterdam Colloquium Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Szabolcsi. 1987. Bound variables in syntax (are there any?). Sixth Amsterdam Colloquium Proceed- ings.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "From frequency to meaning: Vector space models of semantics",
"authors": [
{
"first": "Peter",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {
"DOI": [
"10.1613/jair.2934"
]
},
"num": null,
"urls": [],
"raw_text": "Peter D Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Classical copying versus quantum entanglement in natural language: The case of VP-ellipsis",
"authors": [
{
"first": "Gijs",
"middle": [],
"last": "Wijnholds",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2018,
"venue": "EPTCS Proceedings of the Workshop on Compositional Approaches for Physics",
"volume": "283",
"issue": "",
"pages": "103--119",
"other_ids": {
"DOI": [
"10.4204/EPTCS.283.8"
]
},
"num": null,
"urls": [],
"raw_text": "Gijs Wijnholds and Mehrnoosh Sadrzadeh. 2018. Clas- sical copying versus quantum entanglement in natu- ral language: The case of VP-ellipsis. In EPTCS Proceedings of the Workshop on Compositional Ap- proaches for Physics, NLP, and Social Sciences, vol- ume 283, pages 103-119.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": ""
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null,
"text": "1 o is expected to have LOW similarity in the dataset and s v o, s v 2 o is thus expected to have HIGH similarity, we selected a new subject s * from the list of most frequent subjects for the verb v 2 such that it was significantly more frequent for v 2 than for v 1 4 . By doing so we strengthened the disambiguating effect of the context for each verb. The subject was selected such that the resulting elliptical phrase pairs made sense. For each combination and new subject considered, we added the two sentence pairs in the elliptical form s v o and s"
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"5\">: Spearman \u03c1 scores for the ellipsis disambigua-</td></tr><tr><td colspan=\"6\">tion experiment. CB: count based, W2V: Word2Vec,</td></tr><tr><td colspan=\"2\">FT: FastText.</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">D2V1 D2V2</td><td>ST</td><td>IS1</td><td>IS2</td><td>USE</td></tr><tr><td colspan=\"2\">Base .1448</td><td colspan=\"4\">.2432 -.1932 .3471 .3841 .2693</td></tr><tr><td>Res</td><td>.2340</td><td colspan=\"4\">.2980 -.1720 .3436 .3373 .2770</td></tr><tr><td>Abl</td><td>.1899</td><td colspan=\"4\">.2423 -.1297 .3525 .3571 .2402</td></tr></table>",
"html": null,
"num": null,
"text": ""
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td>: Spearman \u03c1 scores for the ellipsis disambigua-</td></tr><tr><td>tion experiment. D2V1: Doc2Vec1, D2V2: Doc2Vec</td></tr><tr><td>2, ST: Skip-Thought, IS1: InferSent 1, IS2: InferSent</td></tr><tr><td>2, USE: Universal Sentence Encoder.</td></tr></table>",
"html": null,
"num": null,
"text": ""
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">D2V1 D2V2</td><td>ST</td><td>IS1</td><td>IS2</td><td>USE</td></tr><tr><td colspan=\"2\">Base .5901</td><td colspan=\"4\">.6188 .5851 .7785 .7009 .6463</td></tr><tr><td>Res</td><td>.6878</td><td colspan=\"4\">.6875 .6039 .8022 .7486 .6791</td></tr><tr><td>Abl</td><td>.1840</td><td colspan=\"4\">.6599 .4715 .7815 .7301 .6397</td></tr></table>",
"html": null,
"num": null,
"text": "Spearman \u03c1 scores for the ellipsis similarity experiment."
}
}
}
}