{
"paper_id": "S13-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:28.149193Z"
},
"title": "Towards a Formal Distributional Semantics: Simulating Logical Calculi with Tensors",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oxford",
"location": {
"addrLine": "Parks Road Oxford",
"postCode": "OX1 3QD",
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The development of compositional distributional models of semantics reconciling the empirical aspects of distributional semantics with the compositional aspects of formal semantics is a popular topic in the contemporary literature. This paper seeks to bring this reconciliation one step further by showing how the mathematical constructs commonly used in compositional distributional models, such as tensors and matrices, can be used to simulate different aspects of predicate logic. This paper discusses how the canonical isomorphism between tensors and multilinear maps can be exploited to simulate a full-blown quantifier-free predicate calculus using tensors. It provides tensor interpretations of the set of logical connectives required to model propositional calculi. It suggests a variant of these tensor calculi capable of modelling quantifiers, using few non-linear operations. It finally discusses the relation between these variants, and how this relation should constitute the subject of future work.",
"pdf_parse": {
"paper_id": "S13-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "The development of compositional distributional models of semantics reconciling the empirical aspects of distributional semantics with the compositional aspects of formal semantics is a popular topic in the contemporary literature. This paper seeks to bring this reconciliation one step further by showing how the mathematical constructs commonly used in compositional distributional models, such as tensors and matrices, can be used to simulate different aspects of predicate logic. This paper discusses how the canonical isomorphism between tensors and multilinear maps can be exploited to simulate a full-blown quantifier-free predicate calculus using tensors. It provides tensor interpretations of the set of logical connectives required to model propositional calculi. It suggests a variant of these tensor calculi capable of modelling quantifiers, using few non-linear operations. It finally discusses the relation between these variants, and how this relation should constitute the subject of future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The topic of compositional distributional semantics has been growing in popularity over the past few years. This emerging sub-field of natural language semantic modelling seeks to combine two seemingly orthogonal approaches to modelling the meaning of words and sentences, namely formal semantics and distributional semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These approaches, summarised in Section 2, differ in that formal semantics, on the one hand, pro-vides a neatly compositional picture of natural language meaning, reducing sentences to logical representations; one the other hand, distributional semantics accounts for the ever-present ambiguity and polysemy of words of natural language, and provides tractable ways of learning and comparing word meanings based on corpus data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent efforts, some of which are briefly reported below, have been made to unify both of these approaches to language modelling to produce compositional distributional models of semantics, leveraging the learning mechanisms of distributional semantics, and providing syntax-sensitive operations for the production of representations of sentence meaning obtained through combination of corpus-inferred word meanings. These efforts have been met with some success in evaluations such as phrase similarity tasks (Mitchell and Lapata, 2008; Mitchell and Lapata, 2009; Grefenstette and Sadrzadeh, 2011; Kartsaklis et al., 2012) , sentiment prediction (Socher et al., 2012) , and paraphrase detection (Blacoe and Lapata, 2012) .",
"cite_spans": [
{
"start": 510,
"end": 537,
"text": "(Mitchell and Lapata, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 538,
"end": 564,
"text": "Mitchell and Lapata, 2009;",
"ref_id": "BIBREF18"
},
{
"start": 565,
"end": 598,
"text": "Grefenstette and Sadrzadeh, 2011;",
"ref_id": "BIBREF11"
},
{
"start": 599,
"end": 623,
"text": "Kartsaklis et al., 2012)",
"ref_id": "BIBREF13"
},
{
"start": 647,
"end": 668,
"text": "(Socher et al., 2012)",
"ref_id": "BIBREF23"
},
{
"start": 696,
"end": 721,
"text": "(Blacoe and Lapata, 2012)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While these developments are promising with regard to the goal of obtaining learnable-yetstructured sentence-level representations of language meaning, part of the motivation for unifying formal and distributional models of semantics has been lost. The compositional aspects of formal semantics are combined with the corpus-based empirical aspects of distributional semantics in such models, yet the logical aspects are not. But it is these logical aspects which are so appealing in formal semantic models, and therefore it would be desirable to replicate the inferential powers of logic within compositional distributional models of semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, I make steps towards addressing this lost connection with logic in compositional distributional semantics. In Section 2, I provide a brief overview of formal and distributional semantic models of meaning. In Section 3, I give mathematical foundations for the rest of the paper by introducing tensors and tensor contraction as a way of modelling multilinear functions. In Section 4, I discuss how predicates, relations, and logical atoms of a quantifier-free predicate calculus can be modelled with tensors. In Section 5, I present tensorial representations of logical operations for a complete propositional calculus. In Section 6, I discuss a variant of the predicate calculus from Section 4 aimed at modelling quantifiers within such tensorbased logics, and the limits of compositional formalisms based only on multilinear maps. I conclude, in Section 7, by suggesting directions for further work based on the contents of this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper does not seek to address the question of how to determine how words should be translated into predicates and relations in the first place, but rather shows how such predicates and relations can be modelled using multilinear algebra. As such, it can be seen as a general theoretical contribution which is independent from the approaches to compositional distributional semantics it can be applied to. It is directly compatible with the efforts of Coecke et al. (2010) and Grefenstette et al. (2013) , discussed below, but is also relevant to any other approach making use of tensors or matrices to encode semantic relations.",
"cite_spans": [
{
"start": 457,
"end": 477,
"text": "Coecke et al. (2010)",
"ref_id": "BIBREF5"
},
{
"start": 482,
"end": 508,
"text": "Grefenstette et al. (2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Formal semantics, from the Montagovian school of thought (Montague, 1974; Dowty et al., 1981) , treats natural languages as programming languages which compile down to some formal language such as a predicate calculus. The syntax of natural languages, in the form of a grammar, is augmented by semantic interpretations, in the form of expressions from a higher order logic such as the lambda-beta calculus. The parse of a sentence then determines the combinations of lambda-expressions, the reduction of which yields a well-formed formula of a predicate calculus, corresponding to the semantic repre-sentation of the sentence. A simple formal semantic model is illustrated in Figure 1 . Formal semantic models are incredibly powerful, in that the resulting logical representations of sentences can be fed to automated theorem provers to perform textual inference, consistency verification, question answering, and a host of other tasks which are well developed in the literature (e.g. see (Loveland, 1978) and (Fitting, 1996) ). However, the sophistication of such formal semantic models comes at a cost: the complex set of rules allowing for the logical interpretation of text must either be provided a priori, or learned. Learning such representations is a complex task, the difficulty of which is compounded by issues of ambiguity and polysemy which are pervasive in natural languages.",
"cite_spans": [
{
"start": 57,
"end": 73,
"text": "(Montague, 1974;",
"ref_id": "BIBREF20"
},
{
"start": 74,
"end": 93,
"text": "Dowty et al., 1981)",
"ref_id": "BIBREF7"
},
{
"start": 989,
"end": 1005,
"text": "(Loveland, 1978)",
"ref_id": "BIBREF16"
},
{
"start": 1010,
"end": 1025,
"text": "(Fitting, 1996)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 676,
"end": 684,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In contrast, distributional semantic models, best summarised by the dictum of Firth (1957) that \"You shall know a word by the company it keeps,\" provide an elegant and tractable way of learning semantic representations of words from text. Word meanings are modelled as high-dimensional vectors in large semantic vector spaces, the basis elements of which correspond to contextual features such as other words from a lexicon. Semantic vectors for words are built by counting how many time a target word occurs within a context (e.g. within k words of select words from the lexicon). These context counts are then normalised by a term frequencyinverse document frequency-like measure (e.g. TF-IDF, pointwise mutual information, ratio of probabilities), and are set as the basis weights of the vector representation of the word's meaning. Word vectors can then be compared using geometric distance metrics such as cosine similarity, allowing us to determine the similarity of words, cluster semantically related words, and so on. Excellent overviews of distributional semantic models are provided by Curran (2004) and Mitchell (2011) . A simple distributional semantic model showing the spacial representation of words 'dog', 'cat' and 'snake' within the context of feature words 'pet', 'furry', and 'stroke' is shown in Figure 2 .",
"cite_spans": [
{
"start": 78,
"end": 90,
"text": "Firth (1957)",
"ref_id": "BIBREF8"
},
{
"start": 1097,
"end": 1110,
"text": "Curran (2004)",
"ref_id": "BIBREF6"
},
{
"start": 1115,
"end": 1130,
"text": "Mitchell (2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1318,
"end": 1326,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
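{
"text": "To make the vector-space picture above concrete, here is a minimal sketch (my own illustration, not from the paper) with invented context counts for 'dog', 'cat' and 'snake' over the features 'pet', 'furry' and 'stroke', compared with cosine similarity:

import numpy as np

# toy context counts over the features ['pet', 'furry', 'stroke']; not corpus data
vectors = {
    'dog':   np.array([8.0, 6.0, 5.0]),
    'cat':   np.array([7.0, 7.0, 6.0]),
    'snake': np.array([3.0, 0.0, 1.0]),
}

def cosine(u, v):
    # cosine similarity: compares the directions of two context-count vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vectors['dog'], vectors['cat']))    # high: similar contexts
print(cosine(vectors['dog'], vectors['snake']))  # lower: fewer shared contexts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},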
{
"text": "Distributional semantic models have been successfully applied to tasks such as word-sense discrimination (Sch\u00fctze, 1998) , thesaurus extraction (Grefenstette, 1994) , and automated essay marking (Landauer and Dumais, 1997) . However, while such models provide tractable ways of learning and comparing word meanings, they do not naturally scale beyond word length. As recently pointed out by Turney (2012) , treating larger segments of texts as lexical units and learning their representations distributionally (the 'holistic approach') violates the principle of linguistic creativity, according to which we can formulate and understand phrases which we've never observed before, provided we know the meaning of their parts and how they are combined. As such, distributional semantics makes no effort to account for the compositional nature of language like formal semantics does, and ignores issues relating to syntactic and relational aspects of language.",
"cite_spans": [
{
"start": 105,
"end": 120,
"text": "(Sch\u00fctze, 1998)",
"ref_id": "BIBREF21"
},
{
"start": 144,
"end": 164,
"text": "(Grefenstette, 1994)",
"ref_id": "BIBREF12"
},
{
"start": 195,
"end": 222,
"text": "(Landauer and Dumais, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 391,
"end": 404,
"text": "Turney (2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Several proposals have been put forth over the last few years to provide vector composition functions for distributional models in order to introduce compositionality, thereby replicating some of the as-pects of formal semantics while preserving learnability. Simple operations such as vector addition and multiplication, with or without scalar or matrix weights (to take word order or basic relational aspects into account), have been suggested (Zanzotto et al., 2010; Mitchell and Lapata, 2008; Mitchell and Lapata, 2009) . Smolensky (1990) suggests using the tensor product of word vectors to produce representations that grow with sentence complexity. Clark and Pulman (2006) extend this approach by including basis vectors standing for dependency relations into tensor product-based representations. Both of these tensor product-based approaches run into dimensionality problems as representations of sentence meaning for sentences of different lengths or grammatical structure do not live in the same space, and thus cannot directly be compared. Coecke et al. 2010develop a framework using category theory, solving this dimensionality problem of tensor-based models by projecting tensored vectors for sentences into a unique vector space for sentences, using functions dynamically generated by the syntactic structure of the sentences. In presenting their framework, which partly inspired this paper, they describe how a verb can be treated as a logical relation using tensors in order to evaluate the truth value of a simple sentence, as well as how negation can be modelled using matrices.",
"cite_spans": [
{
"start": 446,
"end": 469,
"text": "(Zanzotto et al., 2010;",
"ref_id": "BIBREF25"
},
{
"start": 470,
"end": 496,
"text": "Mitchell and Lapata, 2008;",
"ref_id": "BIBREF17"
},
{
"start": 497,
"end": 523,
"text": "Mitchell and Lapata, 2009)",
"ref_id": "BIBREF18"
},
{
"start": 526,
"end": 542,
"text": "Smolensky (1990)",
"ref_id": "BIBREF22"
},
{
"start": 656,
"end": 679,
"text": "Clark and Pulman (2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "A related approach, by Baroni and Zamparelli (2010) , represents unary relations such as adjectives as matrices learned by linear regression from corpus data, and models adjective-noun composition as matrix-vector multiplication. Grefenstette et al. (2013) generalise this approach to relations of any arity and relate it to the framework of Coecke et al. (2010) using a tensor-based approach to formal semantic modelling similar to that presented in this paper.",
"cite_spans": [
{
"start": 23,
"end": 51,
"text": "Baroni and Zamparelli (2010)",
"ref_id": "BIBREF0"
},
{
"start": 230,
"end": 256,
"text": "Grefenstette et al. (2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Finally, Socher et al. (2012) apply deep learning techniques to model syntax-sensitive vector composition using non-linear operations, effectively turning parse trees into multi-stage neural networks. Socher shows that the non-linear activation function used in such a neural network can be tailored to replicate the behaviour of basic logical connectives such as conjunction and negation.",
"cite_spans": [
{
"start": 9,
"end": 29,
"text": "Socher et al. (2012)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Tensors are the mathematical objects dealt with in multilinear algebra just as vectors and matrices are the objects dealt with in linear algebra. In fact, tensors can be seen as generalisations of vectors and matrices by introducing the notion of tensor rank. Let the rank of a tensor be the number of indices required to describe a vector/matrix-like object in sum notation. A vector v in a space V with basis {b V i } i can be written as the weighted sum of the basis vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "v = i c v i b V i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "where the c v i elements are the scalar basis weights of the vector. Being fully described with one index, vectors are rank 1 tensors. Similarly, a matrix M is an element of a space",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "V \u2297 W with basis {(b V i , b W j )} i j (such pairs of basis vectors of V and W are com- monly written as {b V i \u2297 b W j } i j in multilinear algebra)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": ". Such matrices are rank 2 tensors, as they can be fully described using two indices (one for rows, one for columns):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "M = i j c M i j b V i \u2297 b W j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "where the scalar weights c M i j are just the i jth elements of the matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "A tensor T of rank k is just a geometric object with a higher rank. Let T be a member of V 1 \u2297. . .\u2297V k ; we can express T as follows, using k indices \u03b1 1 . . . \u03b1 k :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "T = \u03b1 1 ...\u03b1 k c T \u03b1 1 ...\u03b1 k b V 1 \u03b1 1 \u2297 . . . \u2297 b V k \u03b1 k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "In this paper, we will be dealing with tensors of rank 1 (vectors), rank 2 (matrices) and rank 3, which can be pictured as cuboids (or a matrix of matrices). Tensor contraction is an operation which allows us to take two tensors and produce a third. It is a generalisation of inner products and matrix multiplication to tensors of higher ranks. Let T be a tensor in V 1 \u2297. . .\u2297V j \u2297V k and U be a tensor in V k \u2297V m \u2297. . .\u2297V n . The contraction of these tensors, written T \u00d7 U, corresponds to the following calculation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "T \u00d7 U = \u03b11...\u03b1n c T \u03b11...\u03b1k c U \u03b1k...\u03b1n b V1 \u03b11 \u2297 . . . \u2297 b V j \u03b1 j \u2297 b Vm \u03b1m \u2297 . . . \u2297 b Vn \u03b1n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "Tensor contraction takes a tensor of rank k and a tensor of rank n \u2212 k + 1 and produces a tensor of rank n \u2212 1, corresponding to the sum of the ranks of the input tensors minus 2. The tensors must satisfy the following restriction: the left tensor must have a rightmost index spanning the same number of dimensions as the leftmost index of the right tensor. This is similar to the restriction that a m by n matrix can only be multiplied with a p by q matrix if n = p, i.e. if the index spanning the columns of the first matrix covers the same number of columns as the index spanning the rows of the second matrix covers rows. Similarly to how the columns of one matrix 'merge' with the rows of another to produce a third matrix, the part of the first tensor spanned by the index k merges with the part of the second tensor spanned by k by 'summing through' the shared basis elements b V k \u03b1 k of each tensor. Each tensor therefore loses a rank while being joined, explaining how the tensor produced by T\u00d7U is of rank k+(n\u2212k+1)\u22122 = n\u22121.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
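{
"text": "A minimal NumPy sketch of the contraction just described (my own illustration; np.tensordot with axes=1 sums the rightmost index of the left tensor against the leftmost index of the right tensor):

import numpy as np

# T has rank 3 (shape 2 x 3 x 4) and U has rank 2 (shape 4 x 5); they share a
# 4-dimensional index, so T x U has rank 3 + 2 - 2 = 3 (shape 2 x 3 x 5).
T = np.random.rand(2, 3, 4)
U = np.random.rand(4, 5)
TxU = np.tensordot(T, U, axes=1)
print(TxU.shape)  # (2, 3, 5)

# Matrix-vector multiplication is the rank 2 against rank 1 special case.
M = np.random.rand(2, 3)
v = np.random.rand(3)
print(np.allclose(np.tensordot(M, v, axes=1), M @ v))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},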
{
"text": "There exists an isomorphism between tensors and multilinear maps (Bourbaki, 1989; Lee, 1997) , such that any curried multilinear map",
"cite_spans": [
{
"start": 65,
"end": 81,
"text": "(Bourbaki, 1989;",
"ref_id": "BIBREF3"
},
{
"start": 82,
"end": 92,
"text": "Lee, 1997)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "f : V 1 \u2192 . . . \u2192 V j \u2192 V k can be represented as a tensor T f \u2208 V k \u2297 V j \u2297 . . . \u2297 V 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "(note the reversed order of the vector spaces), with tensor contraction acting as function application. This isomorphism guarantees that there exists such a tensor T f for every f , such that the following equality holds for any",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "v 1 \u2208 V 1 , . . . , v j \u2208 V j : f v 1 . . . v j = v k = T f \u00d7 v 1 \u00d7 . . . \u00d7 v j 4 Tensor-based predicate calculi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "In this section, I discuss how the isomorphism between multilinear maps and tensors described above can be used to model predicates, relations, and logical atoms of a predicate calculus. The four aspects of a predicate calculus we must replicate here using tensors are as follows: truth values, the logical domain and its elements (logical atoms), predicates, and relations. I will discuss logical connectives in the next section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "Both truth values and domain objects are the basic elements of a predicate calculus, and therefore it makes sense to model them as vectors rather than higher rank tensors, which I will reserve for relations. We first must consider the vector space used to model the boolean truth values of B. Coecke et al. (2010) suggest, as boolean vector space, the space B with the basis { , \u22a5}, where = [1 0] is interpreted as 'true', and \u22a5 = [0 1] as 'false'.",
"cite_spans": [
{
"start": 293,
"end": 313,
"text": "Coecke et al. (2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "I assign to the domain D, the set of objects in our logic, a vector space D on R |D| with basis vectors {d i } i which are in bijective correspondence with elements of D. An element of D is therefore represented as a one-hot vector in D, the single nonnull value of which is the weight for the basis vector mapped to that element of D. Similarly, a subset of D is a vector of D where those elements of D in the subset have 1 as their corresponding basis weights in the vector, and those not in the subset have 0. Therefore there is a one-to-one correspondence between the vectors in D and the elements of the power set P(D), provided the basis weights of the vectors are restricted to one of 0 or 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "Each unary predicate P in the logic is represented in the logical model as a set M P \u2286 D containing the elements of the domain for which the predicate is true. Predicates can be viewed as a unary function f P : D \u2192 B where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "f P (x) = if x \u2208 M P \u22a5 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "These predicate functions can be modelled as rank 2 tensors in B \u2297 D, i.e. matrices. Such a matrix M P is expressed in sum notation as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "M P = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed i c M P 1i \u2297 d i \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 + \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed i c M P 2i \u22a5 \u2297 d i \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "The basis weights are defined in terms of the set M P as follows: c M P 1i = 1 if the logical atom x i associated with basis weight d i is in M P , and 0 otherwise; conversely, c M P 2i = 1 if the logical atom x i associated with basis weight d i is not in M P , and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "To give a simple example, let's consider a domain with three individuals, represented as the following one-hot vectors in D: john = [1 0 0] , chris = [0 1 0] , and tom = [0 0 1] . Let's imagine that Chris and John are mathematicians, but Tom is not. The predicate P for 'is a mathematician' therefore is represented model-theoretically as the set M P = {chris, john}. Translating this into a matrix gives the following tensor for P:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "M P = 1 1 0 0 0 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "To compute the truth value of 'John is a mathematician', we perform predicate-argument application as tensor contraction (matrix-vector multiplication, in this case):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "M P \u00d7 john = 1 1 0 0 0 1 \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 1 0 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb = 1 0 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "Likewise for 'Tom is a mathematician':",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "M P \u00d7 tom = 1 1 0 0 0 1 \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 0 1 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb = 0 1 = \u22a5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
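{
"text": "A NumPy rendering of the worked example above (a sketch of my own; the variable names are not from the paper): the predicate 'is a mathematician' is a 2 x 3 matrix over B (rows for \u22a4 and \u22a5) and D (columns for john, chris, tom), and predicate-argument application is matrix-vector multiplication:

import numpy as np

john, chris, tom = np.eye(3)        # one-hot vectors for the domain elements
M_P = np.array([[1, 1, 0],          # row of \u22a4 weights: john and chris
                [0, 0, 1]])         # row of \u22a5 weights: tom

print(M_P @ john)   # [1 0] = \u22a4: 'John is a mathematician' is true
print(M_P @ tom)    # [0 1] = \u22a5: 'Tom is a mathematician' is false",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},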
{
"text": "Model theory for predicate calculus represents any n-ary relation R, such as a verb, as the set M R of n-tuples of elements from D for which R holds. Therefore such relations can be viewed as functions f R : D n \u2192 B where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "f R (x 1 , . . . , x n ) = if (x 1 , . . . , x n ) \u2208 M R \u22a5 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "We can represent the boolean function for such a relation R as a tensor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "T R in B \u2297 D \u2297 . . . \u2297 D n : T R = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed \u03b1 1 ...\u03b1 n c T R 1\u03b1 1 ...\u03b1 n \u2297 d \u03b1 1 \u2297 . . . \u2297 d \u03b1 n \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 + \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed \u03b1 1 ...\u03b1 n c T R 2\u03b1 1 ...\u03b1 n \u22a5 \u2297 d \u03b1 1 \u2297 . . . \u2297 d \u03b1 n \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "As was the case for predicates, the weights for relational tensors are defined in terms of the set modelling the relation: c T R 1\u03b1 1 ...\u03b1 n is 1 if the tuple (x, . . . , z) associated with the basis vectors d \u03b1 n . . . d \u03b1 1 (again, note the reverse order) is in M R and 0 otherwise; and c T R 2\u03b1 1 ...\u03b1 n is 1 if the tuple (x, . . . , z) associated with the basis vectors d \u03b1 n . . . d \u03b1 1 is not in M R and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "To give an example involving relations, let our domain be the individuals John ( j) and Mary (m). Mary loves John and herself, but John only loves himself. The logical model for this scenario is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "D = { j, m} M loves = {( j, j), (m, m), (m, j)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "Distributionally speaking, the elements of the domain will be mapped to the following one-hot vectors in some two-dimensional space D as follows: j = [1 0] and m = [0 1] . The tensor for 'loves' can be written as follows, ignoring basis elements with null-valued basis weights, and using the distributivity of the tensor product over addition:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "T loves = \u2297 ((d 1 \u2297 d 1 ) + (d 2 \u2297 d 2 ) + (d 1 \u2297 d 2 )) + (\u22a5 \u2297 d 2 \u2297 d 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "Computing \"Mary loves John\" would correspond to the following calculation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "(T loves \u00d7 m) \u00d7 j = (( \u2297 d 2 ) + ( \u2297 d 1 )) \u00d7 j =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "whereas \"John loves Mary\" would correspond to the following calculation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
{
"text": "(T loves \u00d7 j) \u00d7 m = (( \u2297 d 1 ) + (\u22a5 \u2297 d 2 )) \u00d7 m = \u22a5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},
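{
"text": "A sketch of the 'loves' example above in NumPy (my own illustration). The relation is stored as a rank 3 tensor over B \u2297 D \u2297 D; one way to realise the reversed-order convention noted earlier is to index the true entries as [\u22a4, loved, lover], so that 'X loves Y' is computed as (T_loves \u00d7 X) \u00d7 Y by two successive contractions:

import numpy as np

j, m = np.eye(2)                    # one-hot vectors for John and Mary
atom = {'john': 0, 'mary': 1}
TRUE, FALSE = 0, 1                  # positions of the B basis vectors

T_loves = np.zeros((2, 2, 2))
for lover, loved in [('john', 'john'), ('mary', 'mary'), ('mary', 'john')]:
    T_loves[TRUE, atom[loved], atom[lover]] = 1
T_loves[FALSE] = 1 - T_loves[TRUE]  # the complement gives the false entries

def loves(x, y):
    # 'x loves y': contract with the lover first, then with the loved one
    return np.tensordot(np.tensordot(T_loves, x, axes=1), y, axes=1)

print(loves(m, j))   # [1. 0.] = \u22a4: 'Mary loves John'
print(loves(j, m))   # [0. 1.] = \u22a5: 'John loves Mary'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tensors and multilinear maps",
"sec_num": "3"
},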
{
"text": "In this section, I discuss how the boolean connectives of a propositional calculus can be modelled using tensors. Combined with the predicate and relation representations discussed above, these form a complete quantifier-free predicate calculus based on tensors and tensor contraction. Negation has already been shown to be modelled in the boolean space described earlier by Coecke et al. (2010) as the swap matrix:",
"cite_spans": [
{
"start": 375,
"end": 395,
"text": "Coecke et al. (2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "T \u00ac = 0 1 1 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "This can easily be verified:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "T \u00ac \u00d7 = 0 1 1 0 1 0 = 0 1 = \u22a5 T \u00ac \u00d7 \u22a5 = 0 1 1 0 0 1 = 1 0 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "All other logical operators are binary, and hence modelled as rank 3 tensors. To make talking about rank 3 tensors used to model binary operations easier, I will use the following block matrix notation for 2 \u00d7 2 \u00d7 2 rank 3 tensors T:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "T = a 1 b 1 a 2 b 2 c 1 d 1 c 2 d 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "which allows us to express tensor contractions as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "T \u00d7 v = a 1 b 1 a 2 b 2 c 1 d 1 c 2 d 2 \u03b1 \u03b2 = \u03b1 \u2022 a 1 + \u03b2 \u2022 a 2 \u03b1 \u2022 b 1 + \u03b2 \u2022 b 2 \u03b1 \u2022 c 1 + \u03b2 \u2022 c 2 \u03b1 \u2022 d 1 + \u03b2 \u2022 d 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "or more concretely:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "T \u00d7 = a 1 b 1 a 2 b 2 c 1 d 1 c 2 d 2 1 0 = a 1 b 1 c 1 d 1 T \u00d7 \u22a5 = a 1 b 1 a 2 b 2 c 1 d 1 c 2 d 2 0 1 = a 2 b 2 c 2 d 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "Using this notation, we can define tensors for the following operations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "(\u2228) \u2192 T \u2228 = 1 1 1 0 0 0 0 1 (\u2227) \u2192 T \u2227 = 1 0 0 0 0 1 1 1 (\u2192) \u2192 T \u2192 = 1 0 1 1 0 1 0 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
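{
"text": "A small NumPy check of these connective tensors (a sketch of my own; the block-stacking helper is not from the paper). Each 2 x 2 x 2 tensor is stored so that contracting with the first truth value selects one of the two 2 x 2 blocks above, which is then applied to the second truth value:

import numpy as np

TRUE, FALSE = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def blocks(block_1, block_2):
    # stack the two blocks along the axis contracted with the first argument
    return np.stack([np.array(block_1), np.array(block_2)], axis=-1)

T_or  = blocks([[1, 1], [0, 0]], [[1, 0], [0, 1]])
T_and = blocks([[1, 0], [0, 1]], [[0, 0], [1, 1]])
T_imp = blocks([[1, 0], [0, 1]], [[1, 1], [0, 0]])

def apply2(T, v, w):
    # (T x v) x w: two successive contractions, as in the text
    return np.tensordot(T, v, axes=1) @ w

for name, T in [('or', T_or), ('and', T_and), ('implies', T_imp)]:
    table = [float(apply2(T, v, w) @ TRUE) for v in (TRUE, FALSE) for w in (TRUE, FALSE)]
    print(name, table)
# or      [1.0, 1.0, 1.0, 0.0]
# and     [1.0, 0.0, 0.0, 0.0]
# implies [1.0, 0.0, 1.0, 1.0]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},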
{
"text": "I leave the trivial proof by exhaustion that these fit the bill to the reader. It is worth noting here that these tensors preserve normalised probabilities of truth. Let us consider a model such at that described in Coecke et al. (2010) which, in lieu of boolean truth values, represents truth value vectors of the form [\u03b1 \u03b2] where \u03b1 + \u03b2 = 1. Applying the above logical operations to such vectors produces vectors with the same normalisation property. This is due to the fact that the columns of the component matrices are all normalised (i.e. each column sums to 1). To give an example with conjunction, let v = [\u03b1 1 \u03b2 1 ] and w = [\u03b1 2 \u03b2 2 ] with \u03b1 1 + \u03b2 1 = \u03b1 2 + \u03b2 2 = 1. The conjunction of these vectors is calculated as follows:",
"cite_spans": [
{
"start": 216,
"end": 236,
"text": "Coecke et al. (2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "(T \u2227 \u00d7 v) \u00d7 w = 1 0 0 0 0 1 1 1 \u03b1 1 \u03b2 1 \u03b1 2 \u03b2 2 = \u03b1 1 0 \u03b2 1 \u03b1 1 + \u03b2 1 \u03b1 2 \u03b2 2 = \u03b1 1 \u03b1 2 \u03b2 1 \u03b1 2 + (\u03b1 1 + \u03b2 1 )\u03b2 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "To check that the probabilities are normalised we calculate:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "\u03b1 1 \u03b1 2 + \u03b2 1 \u03b1 2 + (\u03b1 1 + \u03b2 1 )\u03b2 2 = (\u03b1 1 + \u03b2 1 )\u03b1 2 + (\u03b1 1 + \u03b2 1 )\u03b2 2 = (\u03b1 1 + \u03b2 1 )(\u03b1 2 + \u03b2 2 ) = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
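{
"text": "A quick numerical check of this normalisation property (my own sketch, reusing the block layout from the earlier sketch for T_\u2227):

import numpy as np

T_and = np.stack([np.array([[1.0, 0.0], [0.0, 1.0]]),
                  np.array([[0.0, 0.0], [1.0, 1.0]])], axis=-1)

v = np.array([0.7, 0.3])     # alpha_1 + beta_1 = 1
w = np.array([0.2, 0.8])     # alpha_2 + beta_2 = 1

result = np.tensordot(T_and, v, axes=1) @ w
print(result, result.sum())  # [0.14 0.86], sums to 1: still a normalised truth vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},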
{
"text": "We can observe that the resulting probability distribution for truth is still normalised. The same property can be verified for the other connectives, which I leave as an exercise for the reader.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical connectives with tensors",
"sec_num": "5"
},
{
"text": "The predicate calculus described up until this point has repeatedly been qualified as 'quantifier-free', for the simple reason that quantification cannot be modelled if each application of a predicate or relation immediately yields a truth value. In performing such reductions, we throw away the information required for quantification, namely the information which indicates which elements of a domain the predicate holds true or false for. In this section, I present a variant of the predicate calculus developed earlier in this paper which allows us to model simple quantification (i.e. excluding embedded quantifiers) alongside a tensor-based approach to predicates. However, I will prove that this approach to quantifier modelling relies on non-linear functions, rendering them non-suitable for compositional distributional models relying solely on multilinear maps for composition (or alternatively, rendering such models unsuitable for the modelling of quantifiers by this method).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "We saw, in Section 4, that vectors in the semantic space D standing for the logical domain could model logical atoms as well as sets of atoms. With this in mind, instead of modelling a predicate P as a truth-function, let us now view it as standing for some function f P : P(D) \u2192 P(D), defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "f P (X) = X \u2229 M P",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "where X is a set of domain objects, and M P is the set modelling the predicate. The tensor form of such a function will be some T f P in D \u2297 D. Let this square matrix be a diagonal matrix such that basis weights c T fp ii = 1 if the atom x corresponding to d i is in M P and 0 otherwise. Through tensor contraction, this tensor maps subsets of D (elements of D) to subsets of D containing only those objects of the original subset for which P holds (i.e. yielding another vector in D).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "To give an example: let us consider a domain with two dogs (a and b) and a cat (c). One of the dogs (b) is brown, as is the cat. Let S be the set of dogs, and P the predicate \"brown\". I represent these statements in the model as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "D = {a, b, c} S = {a, b} M P = {b, c}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "The set of dogs is represented as a vector S = [1 1 0] and the predicate 'brown' as a tensor in D \u2297 D:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "T P = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 0 0 0 1 0 0 0 1 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "The set of brown dogs is obtained by computing f B (S ), which distributionally corresponds to applying the tensor T P to the vector representation of S via tensor contraction, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "T P \u00d7 S = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 0 0 0 1 0 0 0 1 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 1 1 0 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb = \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 0 1 0 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb = b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
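{
"text": "The same example in NumPy (my own sketch): the predicate 'brown' is a diagonal matrix on D, and applying it to the 0/1 vector of a set intersects that set with M_P:

import numpy as np

a, b, c = np.eye(3)              # two dogs (a, b) and a cat (c)
S = a + b                        # the set of dogs, [1 1 0]
T_P = np.diag([0.0, 1.0, 1.0])   # 'brown' holds of b and of c

print(T_P @ S)                   # [0. 1. 0.] = b, the singleton set of brown dogs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},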
{
"text": "The result of this computation shows that the set of brown dogs is the singleton set containing the only brown dog, b. As for how logical connectives fit into this picture, in both approaches discussed below, conjunction and disjunction are modelled using set-theoretic intersection and union, which are simply the component-wise min and max functions over vectors, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "Using this new way of modelling predicates as tensors, I turn to the problem of modelling quantification. We begin by putting all predicates in vector form by replacing each instance of the bound variable with a vector 1 filled with ones, which extracts the diagonal from the predicate matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "An intuitive way of modelling universal quantification is as follows: expressions of the form \"All Xs are Ys\" are true if and only if M X = M X \u2229 M Y , where M X and M Y are the set of Xs and the set of Ys, respectively. Using this, we can define the map forall for distributional universal quantification modelling expressions of the form \"All Xs are Ys\" as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "forall(X, Y) = if X = min(X, Y) \u22a5 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "To give a short example, the sentence 'All Greeks are human' is verified by computing X = (M greek \u00d7 1), Y = (M human \u00d7 1), and verifying the equality X = min(X, Y). Existential statements of the form \"There exists X\" can be modelled using the function exists, which tests whether or not M X is empty, and is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "exists(X) = if |X| > 0 \u22a5 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "To give a short example, the sentence 'there exists a brown dog' is verified by computing X = (M brown \u00d7 1) \u2229 (M dog \u00d7 1) and verifying whether or not X is of strictly positive length. An important point to note here is that neither of these quantification functions are multi-linear maps, since a multilinear map must be linear in all arguments. A counter example for forall is to consider the case where M X and M Y are empty, and multiply their vector representations by non-zero scalar weights \u03b1 and \u03b2. The proof that exists is not a multilinear map is equally trivial. Assume M X is an empty set and \u03b1 is a non-zero scalar weight:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
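{
"text": "A sketch of forall and exists over 0/1 set vectors (my own illustration; the toy Greek/human domain is invented). Both maps rest on comparisons (an equality test against a component-wise min, and a non-emptiness test), which is the non-linearity the surrounding text points out:

import numpy as np

TRUE, FALSE = np.array([1, 0]), np.array([0, 1])

def forall(X, Y):
    # 'All Xs are Ys' iff X = min(X, Y), i.e. X is contained in Y
    return TRUE if np.array_equal(X, np.minimum(X, Y)) else FALSE

def exists(X):
    # 'There exists an X' iff the set vector X is non-empty
    return TRUE if np.linalg.norm(X) > 0 else FALSE

one = np.ones(3)                    # the vector of ones that extracts diagonals
M_greek = np.diag([1.0, 0.0, 0.0])
M_human = np.diag([1.0, 1.0, 0.0])

print(forall(M_greek @ one, M_human @ one))   # [1 0] = \u22a4: all Greeks are human
print(exists(M_greek @ one))                  # [1 0] = \u22a4: there exists a Greek",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},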
{
"text": "\u03b1X = X exists(\u03b1X) = exists(X) = \u22a5 exists(\u03b1X) \u03b1\u22a5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "It follows that exists is not a multi-linear function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers and non-linearity",
"sec_num": "6"
},
{
"text": "In this paper, I set out to demonstrate that it was possible to replicate most aspects of predicate logic using tensor-based models. I showed that tensors can be constructed from logical models to represent predicates and relations, with vectors encoding elements or sets of elements from the logical domain. I discussed how tensor contraction allows for evaluation of logical expressions encoded as tensors, and that logical connectives can be defined as tensors to form a full quantifier-free predicate calculus. I exposed some of the limitations of this approach when dealing with variables under the scope of quantifiers, and proposed a variant for the tensor representation of predicates which allows us to deal with quantification. Further work on tensor-based modelling of quantifiers should ideally seek to reconcile this work with that of Barwise and Cooper (1981) . In this section, I discuss how both of these approaches to predicate modelling can be put into relation, and suggest further work that might be done on this topic, and on the topic of integrating this work into compositional distributional models of semantics.",
"cite_spans": [
{
"start": 848,
"end": 873,
"text": "Barwise and Cooper (1981)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "The first approach to predicate modelling treats predicates as truth functions represented as tensors, while the second treats them as functions from subsets of the domain to subsets of the domain. Yet both representations of predicates contain the same information. Let M P and M P be the tensor representations of a predicate P under the first and second approach, respectively. The relation between these representations lies in the equality diag(pM P ) = M P , where p is the covector [1 0] (and hence pM P yields the first row of M P ). The second row of M P being defined in terms of the first, one can also recover M P from the diagonal of M P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
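{
"text": "A small sketch of this correspondence (my own illustration): the 'true' row of the truth-functional matrix, placed on a diagonal, gives the set-valued form, and the truth-functional form can be rebuilt from that diagonal:

import numpy as np

M_P = np.array([[1, 1, 0],        # truth-functional form in B \u2297 D
                [0, 0, 1]])
p = np.array([1, 0])              # covector picking out the \u22a4 row

M_P_prime = np.diag(p @ M_P)      # diag(p M_P): the D \u2297 D form
print(M_P_prime)

true_row = np.diag(M_P_prime)     # np.diag applied to a matrix extracts its diagonal
M_P_recovered = np.stack([true_row, 1 - true_row])
print(np.array_equal(M_P_recovered, M_P))   # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},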
{
"text": "Furthermore, both approaches deal with separate aspects of predicate logic, namely applying predicates to logical atoms, and applying them to bound variables. With this in mind, it is possible to see how both approaches can be used sequentially by noting that tensor contraction allows for partial application of relations to logical atoms. For example, applying a binary relation to its first argument under the first tensor-based model yields a predicate. Translating this predicate into the second model's form using the equality defined above then permits us to use it in quantified expressions. Using this, we can evaluate expressions of the form \"There exists someone who John loves\". Future work in this area should therefore focus on developing a version of this tensor calculus which permits seamless transition between both tensor formulations of logical predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
},
{
"text": "Finally, this paper aims to provide a starting point for the integration of logical aspects into composi-tional distributional semantic models. The work presented here serves to illustrate how tensors can simulate logical elements and operations, but does not address (or seek to address) the fact that the vectors and matrices in most compositional distributional semantic models do not cleanly represent elements of a logical domain. However, such distributional representations can arguably be seen as representing the properties objects of a logical domain hold in a corpus: for example the similar distributions of 'car' and 'automobile' could serve to indicate that these concepts are co-extensive. This suggests two directions research based on this paper could take. One could use the hypothesis that similar vectors indicate co-extensive concepts to infer a (probabilistic) logical domain and set of predicates, and use the methods described above without modification; alternatively one could use the form of the logical operations and predicate tensors described in this paper as a basis for a higher-dimensional predicate calculus, and investigate how such higher-dimensional 'logical' operations and elements could be defined or learned. Either way, the problem of reconciling the fuzzy 'messiness' of distributional models with the sharp 'cleanliness' of logic is a difficult problem, but I hope to have demonstrated in this paper that a small step has been made in the right direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "Thanks to Ond\u0159ej Ryp\u00e1\u010dek, Nal Kalchbrenner and Karl Moritz Hermann for their helpful comments during discussions surrounding this paper. This work is supported by EPSRC Project EP/I03808X/1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space",
"authors": [
{
"first": "M",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Baroni and R. Zamparelli. Nouns are vectors, adjec- tives are matrices: Representing adjective-noun con- structions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183-1193. Association for Computational Linguistics, 2010.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Generalized quantifiers and natural language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Barwise",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Cooper",
"suffix": ""
}
],
"year": 1981,
"venue": "Linguistics and philosophy",
"volume": "",
"issue": "",
"pages": "159--219",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Barwise and R. Cooper Generalized quantifiers and natural language. Linguistics and philosophy, pages 159-219. Springer, 1981.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A comparison of vector-based representations for semantic composition",
"authors": [
{
"first": "W",
"middle": [],
"last": "Blacoe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Blacoe and M. Lapata. A comparison of vector-based representations for semantic composition. Proceed- ings of the 2012 Conference on Empirical Methods in Natural Language Processing, 2012.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Commutative Algebra: Chapters 1-7",
"authors": [
{
"first": "N",
"middle": [],
"last": "Bourbaki",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Bourbaki. Commutative Algebra: Chapters 1-7. Springer-Verlag (Berlin and New York), 1989.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Combining symbolic and distributional models of meaning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2006,
"venue": "AAAI Spring Symposium on Quantum Interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Clark and S. Pulman. Combining symbolic and distri- butional models of meaning. In AAAI Spring Sympo- sium on Quantum Interaction, 2006.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Mathematical Foundations for a Compositional Distributional Model of Meaning",
"authors": [
{
"first": "B",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2010,
"venue": "Linguistic Analysis",
"volume": "36",
"issue": "",
"pages": "345--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Coecke, M. Sadrzadeh, and S. Clark. Mathematical Foundations for a Compositional Distributional Model of Meaning. Linguistic Analysis, volume 36, pages 345-384. March 2010.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "From distributional to semantic similarity",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Curran. From distributional to semantic similarity. PhD thesis, 2004.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Introduction to Montague Semantics",
"authors": [
{
"first": "D",
"middle": [
"R"
],
"last": "Dowty",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Wall",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Peters",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. R. Dowty, R. E. Wall, and S. Peters. Introduction to Montague Semantics. Dordrecht, 1981.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A synopsis of linguistic theory 1930-1955. Studies in linguistic analysis",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Firth. A synopsis of linguistic theory 1930-1955. Studies in linguistic analysis, 1957.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "First-order logic and automated theorem proving",
"authors": [
{
"first": "M",
"middle": [],
"last": "Fitting",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Fitting. First-order logic and automated theorem proving. Springer Verlag, 1996.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-step regression learning for compositional distributional semantics",
"authors": [
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Tenth International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grefenstette, G. Dinu, Y. Zhang, M. Sadrzadeh, and M. Baroni. Multi-step regression learning for com- positional distributional semantics. In Proceedings of the Tenth International Conference on Computational Semantics. Association for Computational Linguistics, 2013.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "E",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Grefenstette and M. Sadrzadeh. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, 2011.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Explorations in automatic thesaurus discovery",
"authors": [
{
"first": "G",
"middle": [],
"last": "Grefenstette",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Grefenstette. Explorations in automatic thesaurus dis- covery. 1994.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Unified Sentence Space for Categorical Distributional-Compositional Semantics: Theory and Experiments",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pulman",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of 24th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Kartsaklis, and M. Sadrzadeh and S. Pulman. A Unified Sentence Space for Categorical Distributional- Compositional Semantics: Theory and Experiments. In Proceedings of 24th International Conference on Computational Linguistics (COLING 2012): Posters, 2012.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Landauer and S. T. Dumais. A solution to Plato's problem: The latent semantic analysis theory of ac- quisition, induction, and representation of knowledge. Psychological review, 1997.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Riemannian manifolds: An introduction to curvature",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "176",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lee. Riemannian manifolds: An introduction to curva- ture, volume 176. Springer Verlag, 1997.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automated theorem proving: A logical basis",
"authors": [
{
"first": "D",
"middle": [
"W"
],
"last": "Loveland",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. W. Loveland. Automated theorem proving: A logical basis. Elsevier North-Holland, 1978.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Mitchell and M. Lapata. Vector-based models of se- mantic composition. In Proceedings of ACL, vol- ume 8, 2008.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language models based on semantic composition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "430--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Mitchell and M. Lapata. Language models based on se- mantic composition. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 430-439. As- sociation for Computational Linguistics, 2009.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Composition in distributional models of semantics",
"authors": [
{
"first": "J",
"middle": [
"J"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. J. Mitchell. Composition in distributional models of semantics. PhD thesis, 2011.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "English as a Formal Language. Formal Semantics: The Essential Readings",
"authors": [
{
"first": "R",
"middle": [],
"last": "Montague",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Montague. English as a Formal Language. Formal Semantics: The Essential Readings, 1974.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational linguistics",
"volume": "24",
"issue": "1",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Sch\u00fctze. Automatic word sense discrimination. Com- putational linguistics, 24(1):97-123, 1998.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Tensor product variable binding and the representation of symbolic structures in connectionist systems",
"authors": [
{
"first": "P",
"middle": [],
"last": "Smolensky",
"suffix": ""
}
],
"year": 1990,
"venue": "Artificial intelligence",
"volume": "46",
"issue": "1-2",
"pages": "159--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Smolensky. Tensor product variable binding and the representation of symbolic structures in connection- ist systems. Artificial intelligence, 46(1-2):159-216, 1990.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semantic compositionality through recursive matrixvector spaces",
"authors": [
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Huval",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1201--1211",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Socher, B. Huval, C.D. Manning, and A.Y Ng. Semantic compositionality through recursive matrix- vector spaces. Proceedings of the 2012 Conference on Empirical Methods in Natural Language Processing, pages 1201-1211, 2012.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Domain and function: A dual-space model of semantic relations and compositions",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Artificial Intelligence Research",
"volume": "44",
"issue": "",
"pages": "533--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. D. Turney. Domain and function: A dual-space model of semantic relations and compositions. Journal of Ar- tificial Intelligence Research, 44:533-585, 2012.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Estimating linear models for compositional distributional semantics",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zanzotto",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Korkontzelos",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Fallucchi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1263--1271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Zanzotto, I. Korkontzelos, F. Fallucchi, and S. Man- andhar. Estimating linear models for compositional distributional semantics. In Proceedings of the 23rd International Conference on Computational Linguis- tics, pages 1263-1271. Association for Computational Linguistics, 2010.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "A simple formal semantic model.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "A simple distributional semantic model.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "\u03b1X = X \u03b2Y = Y forall(\u03b1X, \u03b2Y) = forall(X, Y) = forall(\u03b1X, \u03b2Y) \u03b1\u03b2 I observe that the equations above demonstrate that forall is not a multilinear map.",
"uris": null
}
}
}
}