Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "N18-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:50:38.281249Z"
},
"title": "Neural Tensor Networks with Diagonal Slice Matrices",
"authors": [
{
"first": "Takahiro",
"middle": [],
"last": "Ishihara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Katsuhiko",
"middle": [],
"last": "Hayashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Osaka University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Hitoshi",
"middle": [],
"last": "Manabe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Masahi",
"middle": [],
"last": "Shimbo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nara Institute of Science and Technology",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": "",
"affiliation": {
"laboratory": "NTT Communication Science Laboratories",
"institution": "",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. We evaluate our proposed NTN models in two tasks. First, the proposed models are evaluated in a knowledge graph completion task. Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task. The experimental results show that our proposed models learn better and faster than the original (R)NTNs.",
"pdf_parse": {
"paper_id": "N18-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "Although neural tensor networks (NTNs) have been successful in many natural language processing tasks, they require a large number of parameters to be estimated, which often results in overfitting and long training times. We address these issues by applying eigendecomposition to each slice matrix of a tensor to reduce the number of parameters. We evaluate our proposed NTN models in two tasks. First, the proposed models are evaluated in a knowledge graph completion task. Second, a recursive NTN (RNTN) extension of the proposed models is evaluated on a logical reasoning task. The experimental results show that our proposed models learn better and faster than the original (R)NTNs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Alongside the nonlinear activation functions, linear mapping by matrix multiplication is an essential component of neural network (NN) models, as it determines the feature interaction and thus the expressiveness of models. In addition to the matrix-based mapping, neural tensor networks (NTNs) (Socher et al., 2013a ) employ a 3dimensional tensor to capture direct interactions among input features. Due to the large expressive capacity of 3D tensors, NTNs have been successful in an array of natural language processing (NLP) and machine learning tasks, including knowledge graph completion (KGC) (Socher et al., 2013a) , sentiment analysis (Socher et al., 2013b) , and reasoning with logical semantics (Bowman et al., 2015) . However, since a 3D tensor has a large number of parameters, NTNs need longer time to train than other NN models. Moreover, the millions of parameters often make the model suffer from overfitting (Yang et al., 2015) .",
"cite_spans": [
{
"start": 294,
"end": 315,
"text": "(Socher et al., 2013a",
"ref_id": "BIBREF15"
},
{
"start": 598,
"end": 620,
"text": "(Socher et al., 2013a)",
"ref_id": "BIBREF15"
},
{
"start": 642,
"end": 664,
"text": "(Socher et al., 2013b)",
"ref_id": "BIBREF16"
},
{
"start": 704,
"end": 725,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 924,
"end": 943,
"text": "(Yang et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To solve these problems, we propose two new parameter reduction techniques for NTNs. These techniques drastically decrease the number of parameters in an NTN without diminishing its expressiveness. We use the matrix decomposition techniques that are utilized for KGC in Yang et al. (2015) and Trouillon et al. (2016) . Yang et al. (2015) imposed a constraint that a matrix in the bilinear term in their model had to be diagonal. As mentioned in a subsequent section, this is essentially equal to assuming that the matrix be symmetric and performing eigendecomposition. Trouillon et al. (2016) also applied eigendecomposition to a matrix by regarding it as the real part of a normal matrix. Following these studies, we perform simultaneous diagonalization on all slice matrices of a NTN tensor. As a result, mapping by a 3D (n \u00d7 n \u00d7 k) tensor is replaced with an array of k \"triple inner products\" of two input vectors and a weight vector. Thus, we obtain two new NTN models where the number of parameters is reduced from O(n 2 k) to O(nk).",
"cite_spans": [
{
"start": 270,
"end": 288,
"text": "Yang et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 293,
"end": 316,
"text": "Trouillon et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 319,
"end": 337,
"text": "Yang et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 569,
"end": 592,
"text": "Trouillon et al. (2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On a KGC task, these parameter-reduced NTNs (NTN-Diag and NTN-Comp) alleviate overfitting and outperform the original NTN. Moreover, our proposed NTNs can learn faster than the original NTN. We also show that our proposed models perform better and learn faster in a recursive setting by examining a logical reasoning task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We consider mapping in a neural network (NN) layer that takes two vectors as input, such as recursive neural networks.",
"cite_spans": [
{
"start": 40,
"end": 44,
"text": "(NN)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Recurrent neural networks also has this structure, with one input vector being the hidden state from the previous time step. As a mapping before activation in the NN layer, linear mapping (matrix multiplication) is commonly used:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "W 1 x 1 + W 2 x 2 = [W 1 , W 2 ] [ x 1 x 2 ] = W x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Here, since x 1 , x 2 \u2208 R n , W 1 , W 2 \u2208 R k\u00d7n , this linear mapping is a transformation from R 2n to R k . Linear mapping, which is a standard component of NNs, has been applied successfully in many tasks. However, it cannot consider the interaction between different components of two input vectors, which renders it not ideal for modeling complex compositional structures such as trees and graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "To alleviate this problem, some models such as NTNs (Socher et al., 2013a) have explored 3D tensors to yield more expressive mapping:",
"cite_spans": [
{
"start": 52,
"end": 74,
"text": "(Socher et al., 2013a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "x T 1 W [1:k] x 2 = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 x T 1 W [1] x 2 x T 1 W [2] x 2 . . . x T 1 W [k] x 2 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 sum ( W [1] \u2299 (x 1 \u2297 x 2 ) ) sum ( W [2] \u2299 (x 1 \u2297 x 2 ) ) . . . sum ( W [k] \u2299 (x 1 \u2297 x 2 ) ) \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fb where W [1:k] \u2208 R n\u00d7n\u00d7k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "The output of this mapping is an array of k bilinear products in the form of x T 1 W [i] x 2 . Thus, this is also a transformation from R 2n to R k . Each element of the output of this mapping equals the sum of W [i] \u2299 (x 1 \u2297 x 2 ), where \u2299 and \u2297 represent, respectively, the Hadamard and the outer products. Hence this mapping captures the direct interaction between different components (or \"features\") in two input vectors. Thanks to this expressiveness, NTNs are effective in tasks such as knowledge graph completion (Socher et al., 2013a) , sentiment analysis (Socher et al., 2013b) , and logical reasoning (Bowman et al., 2015) .",
"cite_spans": [
{
"start": 85,
"end": 88,
"text": "[i]",
"ref_id": null
},
{
"start": 213,
"end": 216,
"text": "[i]",
"ref_id": null
},
{
"start": 521,
"end": 543,
"text": "(Socher et al., 2013a)",
"ref_id": "BIBREF15"
},
{
"start": 565,
"end": 587,
"text": "(Socher et al., 2013b)",
"ref_id": "BIBREF16"
},
{
"start": 612,
"end": 633,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Although mapping by a 3D tensor provides expressiveness, it has a large number (O(n 2 k)) of parameters. Due to this, NTNs often suffer from overfitting and long training times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "To reduce the number of parameters of a slice matrix W [i] \u2208 R n\u00d7n in a tensor, simple matrix decomposition (SMD) is commonly used (Bai et al., 2009) . SMD factorizes W [i] into a product of two low rank matrices S [i] \u2208 R n\u00d7m and T [i] \u2208 R m\u00d7n (m \u226a n):",
"cite_spans": [
{
"start": 55,
"end": 58,
"text": "[i]",
"ref_id": null
},
{
"start": 131,
"end": 149,
"text": "(Bai et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 169,
"end": 172,
"text": "[i]",
"ref_id": null
},
{
"start": 215,
"end": 218,
"text": "[i]",
"ref_id": null
},
{
"start": 233,
"end": 236,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Matrix Decomposition (SMD)",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W [i] \u2243 S [i] T [i] .",
"eq_num": "(1)"
}
],
"section": "Simple Matrix Decomposition (SMD)",
"sec_num": "3.1"
},
{
"text": "By plugging (1) into bilinear term x T 1 W [i] x 2 , we obtain the approximation x T 1 S [i] T [i] x 2 . SMD reduces the number of parameters of W [i] from n 2 to 2nm. However, the dimension m for S and T is a hyperparameter and must be determined prior to training.",
"cite_spans": [
{
"start": 43,
"end": 46,
"text": "[i]",
"ref_id": null
},
{
"start": 89,
"end": 92,
"text": "[i]",
"ref_id": null
},
{
"start": 95,
"end": 98,
"text": "[i]",
"ref_id": null
},
{
"start": 147,
"end": 150,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Matrix Decomposition (SMD)",
"sec_num": "3.1"
},
{
"text": "This section introduces two techniques that can simultaneously diagonalize all slice matrices W [1] , . . . , W [i] , . . . , W [k] \u2208 R n\u00d7n . As described in (Liu et al., 2017) , we make use of the fact that if matrices V [1:k] form a commuting family: i.e., [i] , \u2200i, j \u2208 {1, 2, . . . , k}, they can be diagonalized by a shared orthogonal or unitary matrix. Both of the two techniques reduce the number of parameters of W [i] to O(n) from O(n 2 ).",
"cite_spans": [
{
"start": 96,
"end": 99,
"text": "[1]",
"ref_id": null
},
{
"start": 112,
"end": 115,
"text": "[i]",
"ref_id": null
},
{
"start": 128,
"end": 131,
"text": "[k]",
"ref_id": null
},
{
"start": 158,
"end": 176,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 222,
"end": 227,
"text": "[1:k]",
"ref_id": null
},
{
"start": 259,
"end": 262,
"text": "[i]",
"ref_id": null
},
{
"start": 423,
"end": 426,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Diagonalization",
"sec_num": "3.2"
},
{
"text": "V [i] V [j] = V [j] V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simultaneous Diagonalization",
"sec_num": "3.2"
},
{
"text": "Many NLP datasets contain symmetric patterns. For example, if binary relation (Bob, is relative of, Alice) holds in a knowledge graph, then (Alice, is relative of, Bob) should also hold in it. English phrases \"dog and cat\" and \"cat and dog\" have identical meaning. For symmetric structures, we can reasonably suppose that each slice matrix W [i] of a 3D tensor is symmetric because x T 1 W [i] x 2 must equal x T 2 W [i] x 1 . When W [i] \u2208 R n\u00d7n is symmetric, it can be diagonalized as:",
"cite_spans": [
{
"start": 342,
"end": 345,
"text": "[i]",
"ref_id": null
},
{
"start": 390,
"end": 393,
"text": "[i]",
"ref_id": null
},
{
"start": 417,
"end": 420,
"text": "[i]",
"ref_id": null
},
{
"start": 434,
"end": 437,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonal Diagonalization",
"sec_num": "3.2.1"
},
{
"text": "W [i] = O [i] W [i] \u2032 O [i] T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonal Diagonalization",
"sec_num": "3.2.1"
},
{
"text": "where O [i] \u2208 R n\u00d7n is an orthogonal matrix and W [i] \u2032 \u2208 R n\u00d7n is a diagonal matrix. Note that an orthogonal matrix O [i] may not be equal to O j if i \u0338 = j. However, if all of the slice matrices W [1] , . . . , W [i] , . . . , W [k] \u2208 R n\u00d7n are commuting, we can diagonalize every slice matrix with the same orthogonal matrix O. By substituting W [i] with OW [i] \u2032 O T into bilinear term x T 1 W [i] x 2 , we can rewrite it as follows:",
"cite_spans": [
{
"start": 8,
"end": 11,
"text": "[i]",
"ref_id": null
},
{
"start": 50,
"end": 53,
"text": "[i]",
"ref_id": null
},
{
"start": 119,
"end": 122,
"text": "[i]",
"ref_id": null
},
{
"start": 199,
"end": 202,
"text": "[1]",
"ref_id": null
},
{
"start": 215,
"end": 218,
"text": "[i]",
"ref_id": null
},
{
"start": 231,
"end": 234,
"text": "[k]",
"ref_id": null
},
{
"start": 349,
"end": 352,
"text": "[i]",
"ref_id": null
},
{
"start": 398,
"end": 401,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonal Diagonalization",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x T 1 W [i] x 2 = x T 1 OW [i] \u2032 O T x 2 = y T 1 W [i] \u2032 y 2 = \u27e8y 1 , w [i] , y 2 \u27e9",
"eq_num": "(2)"
}
],
"section": "Orthogonal Diagonalization",
"sec_num": "3.2.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonal Diagonalization",
"sec_num": "3.2.1"
},
{
"text": "y 1 = O T x 1 , y 2 = O T x 2 , w [i] = diag(W [i] \u2032",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonal Diagonalization",
"sec_num": "3.2.1"
},
{
"text": ") \u2208 R n and \u27e8a, b, c\u27e9 denotes a \"triple inner product\" defined by \u27e8a, b, c\u27e9 = \u2211 n l=1 a l b l c l . This reduces the number of parameters in a single slice matrix from n 2 to n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Orthogonal Diagonalization",
"sec_num": "3.2.1"
},
{
"text": "Since most of the structures in the NLP data are not symmetric, the symmetric matrix assumption is usually violated. To obtain more expressive diagonal matrix, we regard each slice matrix W [i] as the real part of a complex matrix and consider its eigendecomposition.",
"cite_spans": [
{
"start": 190,
"end": 193,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "For any real matrix W [i] , there exists a complex normal matrix Z [i] whose real part is equal to it:",
"cite_spans": [
{
"start": 22,
"end": 25,
"text": "[i]",
"ref_id": null
},
{
"start": 67,
"end": 70,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "W [i] = \u211c ( Z [i] ) . \u211c (\u2022)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "represents an operation that takes the real part of a complex number, vector or matrix. Further, any complex normal matrix can be diagonalized by a unitary matrix. With these two properties, any real matrix W [i] can be diagonalized as follows (Trouillon et al., 2016) :",
"cite_spans": [
{
"start": 209,
"end": 212,
"text": "[i]",
"ref_id": null
},
{
"start": 244,
"end": 268,
"text": "(Trouillon et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "W [i] = \u211c ( Z [i] ) = \u211c ( U [i] Z [i] \u2032 U [i] * ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "Here,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "U [i] \u2208 C n\u00d7n is a unitary matrix, Z [i] \u2032 \u2208 C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "n\u00d7n is a diagonal matrix, and U [i] * is the conjugate transpose of U [i] . To guarantee that every slice matrix can be diagonalized with the same unitary matrix U instead of U [i] , we assume all of the normal matrices",
"cite_spans": [
{
"start": 32,
"end": 35,
"text": "[i]",
"ref_id": null
},
{
"start": 70,
"end": 73,
"text": "[i]",
"ref_id": null
},
{
"start": 177,
"end": 180,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "Z [1] , . . . , Z [i] , . . . , Z [k] \u2208 C n\u00d7n are commuting as in Section 3.2.1. Substituting \u211c ( U Z [i] \u2032 U * )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "whose U is the same unitary matrix in all slice matrices, we can rewrite every bilinear term x T 1 W [i] x 2 as follows:",
"cite_spans": [
{
"start": 101,
"end": 104,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x T 1 W [i] x 2 = \u211c ( \u27e8y 1 , w [i] , y 2 \u27e9 ) = \u27e8\u211c(y 1 ), \u211c(w [i] ), \u211c(y 2 )\u27e9 + \u27e8\u211c(y 1 ), \u2111(w [i] ), \u2111(y 2 )\u27e9 + \u27e8\u2111(y 1 ), \u211c(w [i] ), \u2111(y 2 )\u27e9 \u2212 \u27e8\u2111(y 1 ), \u2111(w [i] ), \u211c(y 2 )\u27e9,",
"eq_num": "(3)"
}
],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "y 1 = U T x 1 , y 2 = U * x 2 , w [i] = diag(Z [i] \u2032 ) \u2208 C n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": ", and \u27e8y 1 , w [i] , y 2 \u27e9 is the triple Hermitian inner product of y 1 , w [i] and y 2 defined by \u27e8a, b, c\u27e9 = \u2211 n l=1 a l b l c l . This technique reduces the number of parameters of the matrices from n 2 to 2n. As shown in the right-hand side of Eq. (3), \u211c ( \u27e8y 1 , w [i] , y 2 \u27e9 ) can be replaced with three additions and a subtraction of the triple inner product of real vectors.",
"cite_spans": [
{
"start": 15,
"end": 18,
"text": "[i]",
"ref_id": null
},
{
"start": 76,
"end": 79,
"text": "[i]",
"ref_id": null
},
{
"start": 270,
"end": 273,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unitary Diagonalization",
"sec_num": "3.2.2"
},
{
"text": "This section introduces the baseline and our proposed models. After describing them, we explain how to extend them for handling compositional structures like binary trees. First, we describe a standard single layer neural network (NN) model for two vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Models",
"sec_num": "4"
},
{
"text": "Model # of Parameters NN (2n + 1)k NTN (n 2 + 2n + 1)k NTN-SMD (2mn + 2n + 1)k NTN-Diag (3n + 1)k NTN-Comp (6n + 1)k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Models",
"sec_num": "4"
},
{
"text": "x 1 , x 2 \u2208 R n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Models",
"sec_num": "4"
},
{
"text": "The model uses linear mapping V \u2208 R k\u00d72n to combine two input vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Models",
"sec_num": "4"
},
{
"text": "f (V [ x 1 x 2 ] + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Models",
"sec_num": "4"
},
{
"text": "where b \u2208 R k is a bias term and f is a non-linear activation function. The NN model has only (2n+ 1)k parameters, and does not consider the direct interactions between x 1 and x 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Models",
"sec_num": "4"
},
{
"text": "Socher et al. 2013aproposed a neural tensor network (NTN) model that uses a 3D tensor W [1:k] \u2208 R n\u00d7n\u00d7k to combine two input vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Tensor Network (NTN)",
"sec_num": null
},
{
"text": "f (x T 1 W [1:k] x 2 + V [ x 1 x 2 ] + b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Tensor Network (NTN)",
"sec_num": null
},
{
"text": "Unlike the standard NN model, NTN can directly relate two input vectors using a tensor. However, it has too many parameters; (n 2 + 2n + 1)k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Tensor Network (NTN)",
"sec_num": null
},
{
"text": "Although the NTN model has tremendous expressive power, it is extremely time-consuming to compute, since a naive 3D tensor product incur O(n 2 k) computation time. To overcome this weakness, Zhao et al. (2015) and independently introduced simple matrix decomposition (SMD) to the NTN model by replacing each slice matrix W [i] with its factorized approximation given by Eq. (1):",
"cite_spans": [
{
"start": 191,
"end": 209,
"text": "Zhao et al. (2015)",
"ref_id": "BIBREF20"
},
{
"start": 323,
"end": 326,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-SMD",
"sec_num": null
},
{
"text": "f (x T 1 S [1:k] T [1:k] x 2 + V [ x 1 x 2 ] + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-SMD",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-SMD",
"sec_num": null
},
{
"text": "S [1:k] \u2208 R n\u00d7m\u00d7k , T [1:k] \u2208 R m\u00d7n\u00d7k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-SMD",
"sec_num": null
},
{
"text": "When m \u226a n, the NTN-SMD model drastically reduces the number of parameters compared to the original NTN model; i.e., from (n 2 + 2n + 1)k to (2mn + 2n + 1)k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-SMD",
"sec_num": null
},
{
"text": "In this paper, we introduce two new NTN models: NTN-Diag and NTN-Comp, both of which reduce the number of parameters in a 3D tensor more than NTN-SMD with little loss in the model's generalization performance. Table 1 summarizes the number of parameters in each model.",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 217,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "NTNs with Diagonal Slice Matrices",
"sec_num": "4.2"
},
{
"text": "We replace all slice matrices W [i] of W [1:k] with the triple inner product formulation of Eq. (2) by assuming that they are symmetric and commuting. As a result, we derive the following new NTN formulation:",
"cite_spans": [
{
"start": 32,
"end": 35,
"text": "[i]",
"ref_id": null
},
{
"start": 41,
"end": 46,
"text": "[1:k]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Diag",
"sec_num": null
},
{
"text": "f ( \uf8ee \uf8ef \uf8f0 \u27e8x 1 , w [1] , x 2 \u27e9 . . . \u27e8x 1 , w [k] , x 2 \u27e9 \uf8f9 \uf8fa \uf8fb + V [ x 1 x 2 ] + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Diag",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Diag",
"sec_num": null
},
{
"text": "w [i] \u2208 R n , \u2200i \u2208 {1, 2, . . . , k}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Diag",
"sec_num": null
},
{
"text": "Thus, under the symmetric and commuting matrix constraints, we regard mapping by a 3D tensor as an array of k triple inner products. The total number of parameters is just (3n + 1)k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Diag",
"sec_num": null
},
{
"text": "By assuming that W [1] , . . . , W [i] , . . . , W [k] are real parts of normal matrices forming a commuting family, we can replace each slice matrix of a tensor term in NTN with the triple Hermitian inner product shown in Eq. (3):",
"cite_spans": [
{
"start": 19,
"end": 22,
"text": "[1]",
"ref_id": null
},
{
"start": 35,
"end": 38,
"text": "[i]",
"ref_id": null
},
{
"start": 51,
"end": 54,
"text": "[k]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Comp",
"sec_num": null
},
{
"text": "f ( \uf8ee \uf8ef \uf8f0 \u211c ( \u27e8x 1 , w [1] , x 2 \u27e9 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Comp",
"sec_num": null
},
{
"text": ". . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Comp",
"sec_num": null
},
{
"text": "\u211c ( \u27e8x 1 , w [k] , x 2 \u27e9 ) \uf8f9 \uf8fa \uf8fb+\u211c ( V [ x 1 x 2 ]) +b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Comp",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Comp",
"sec_num": null
},
{
"text": "x 1 , x 2 \u2208 C n , V \u2208 C n\u00d7n and w [i] \u2208 C n , \u2200i \u2208 {1, 2, .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Comp",
"sec_num": null
},
{
"text": ". . , k}. Similar to NTN-Diag, we regard mapping by a 3D tensor as an array of k triple Hermitian inner products. The total number of parameters is just (6n + 1)k. As is clear of its form, NTN-Diag is a special case of NTN-Comp whose vectors x 1 , x 2 and w [i] are constrained to be real.",
"cite_spans": [
{
"start": 258,
"end": 261,
"text": "[i]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NTN-Comp",
"sec_num": null
},
{
"text": "We extend the above NTN models to handle compositional structures. As a representative of compositional structures, we consider a binary tree where each NTN layer computes a vector representation for a node by combining two vectors from its child nodes in the lower layer. Except for NTN-Comp, the models implement mappings R n \u2192 R k so that each of their layers can receive its lower layer's output directly, if k equals to n. Thus, the models do not have to be modified for them. However, NTN-Comp cannot receive its lower layer's output as it is because NTN-Comp is a mapping from C n to R k . To solve this problem, we set k to 2n and treat the output y \u2032 \u2208 R 2n as the concatenation of vectors representing the real and imaginary parts of y \u2208 C n :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Neural Tensor Networks",
"sec_num": "4.3"
},
{
"text": "\u211c(y) = (y \u2032 1 , \u2022 \u2022 \u2022 , y \u2032 n ), \u2111(y) = (y \u2032 n+1 , \u2022 \u2022 \u2022 , y \u2032 2n ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Neural Tensor Networks",
"sec_num": "4.3"
},
{
"text": "Note that this approach is valid since Eq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Neural Tensor Networks",
"sec_num": "4.3"
},
{
"text": "(3) can actually be defined in real vector space by transforming the complex vectors in C n into real vectors in R 2n .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Neural Tensor Networks",
"sec_num": "4.3"
},
{
"text": "In KGC, researchers usually design scoring function \u03a6 for the given triplet (s, r, o) to judge whether it is a fact or not. Here (s, r, o) denotes that entity s is linked to entity o by relation r. RESCAL (Nickel et al., 2011 ) uses e T s W r e o as \u03a6, where e s , e o are entity embedding vectors and W r is an embedding matrix of relation r. This bilinear operation is effective for the task, but its computational cost is high and it suffers from overfitting. To overcome these problems, DistMult (Yang et al., 2015) adopts the triple inner product \u27e8e s , w r , e o \u27e9 as \u03a6, where w r is an embedding vector of relation r. This solves those problems, but it degrades the model's ability to capture directionality of relations, because the scoring function of DistMult is symmetric with respect to s and o; i.e., \u27e8e s , w r , e o \u27e9 = \u27e8e o , w r , e s \u27e9. To reconcile the complexity and expressiveness of a model, ComplEx (Trouillon et al., 2016) uses complex vectors for entity and relation embeddings. As scoring function \u03a6, they adopted the triple Hermitian inner product \u211c (\u27e8e s , w r , e o \u27e9), where e o denotes the complex conjugate of e o . Since \u211c (\u27e8e s , w r , e o \u27e9) \u0338 = \u211c (\u27e8e o , w r , e s \u27e9), Com-plEx solves the expressiveness problem of Dist-Mult without full matrices as relation embeddings. We can regard DistMult as a special case of RESCAL with a symmetric matrix constraint on W r . ComplEx is also a RESCAL variant with W r as the real part of a normal matrix. Our research is based on these works, but to the best of our knowledge, no previous work applied this ap-proach to reduce the number of parameters in a tensor.",
"cite_spans": [
{
"start": 205,
"end": 225,
"text": "(Nickel et al., 2011",
"ref_id": "BIBREF14"
},
{
"start": 500,
"end": 519,
"text": "(Yang et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 922,
"end": 946,
"text": "(Trouillon et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work Knowledge Graph Completion",
"sec_num": "5"
},
{
"text": "To give additional expressiveness power to standard (R)NNs, many architectures have been proposed, such as LSTM (Hochreiter and Schmidhuber, 1997) , GRU (Cho et al., 2014) , and CNN (LeCun et al., 1998) . NTN (Socher et al., 2013a) and RNTN (Socher et al., 2013b) are other such architectures. However, (R)NTNs differ in that they only add 3D tensor mapping to standard neural networks. Thus, they can also be regarded as a powerful basic component of NNs because 3D tensor mapping can be applied to more complicated architectures such as those examples.",
"cite_spans": [
{
"start": 112,
"end": 146,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF8"
},
{
"start": 153,
"end": 171,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 178,
"end": 202,
"text": "CNN (LeCun et al., 1998)",
"ref_id": null
},
{
"start": 209,
"end": 231,
"text": "(Socher et al., 2013a)",
"ref_id": "BIBREF15"
},
{
"start": 241,
"end": 263,
"text": "(Socher et al., 2013b)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NN Architectures",
"sec_num": null
},
{
"text": "Several researchers reduced the number of parameters of NNs by using specific parameter sharing mechanisms. Cheng et al. (2015) used circulant matrix mapping instead of conventional linear mapping and improved the time complexity of the matrix-vector product by using Fast Fourier Transformation (FFT). Circulant matrix for w T = (w 1 , . . . , w n ) can be factorized into F \u22121 diag(F w) F with the Fourier matrix F. By assuming each slice matrix W [i] of W [1:k] is circulant, we get the same scoring function as that in Eq. (3); (Socher et al., 2013a) and NTN-SMD).",
"cite_spans": [
{
"start": 108,
"end": 127,
"text": "Cheng et al. (2015)",
"ref_id": "BIBREF4"
},
{
"start": 450,
"end": 453,
"text": "[i]",
"ref_id": null
},
{
"start": 459,
"end": 464,
"text": "[1:k]",
"ref_id": null
},
{
"start": 532,
"end": 554,
"text": "(Socher et al., 2013a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Reduction in NN",
"sec_num": null
},
{
"text": "C(w) = \uf8ee \uf8ef \uf8ef \uf8ef \uf8f0 w1 wn . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Reduction in NN",
"sec_num": null
},
{
"text": "x T 1 W [i] x 2 = x T 1 F \u22121 diag(F w [i] ) F x 2 = \u211c(\u27e8x \u2032 1 , w [i] \u2032 , x 2 \u2032 \u27e9) where x \u2032 1 = F x 1 , x \u2032 2 = F x 2 , and w [i] \u2032 = 1 n diag(F w [i] )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Reduction in NN",
"sec_num": null
},
{
"text": "Let E and R denote entities and relations, respectively. A relational triplet, or simply a triplet, (s, r, o) is a triple with s, o \u2208 E and r \u2208 R. It represents a proposition that relation r holds between subject entity s and object entity o. A triplet is called a fact if the proposition it denote is true.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": null
},
{
"text": "A knowledge graph is a collection of knowledge triplets, with the understanding that all its member triplets are facts. It is called a graph because each triplet can be regarded as an edge in a directed graph; the vertices in this graph represent entities in E, and each edge is labeled by a relation in R. Let G be a knowledge graph, viewed as a collection of facts. Knowledge graph completion (KGC) is the task of predicting whether unknown triplet (s \u2032 , r \u2032 , o \u2032 ) \u0338 \u2208 G such that s \u2032 , o \u2032 \u2208 E, r \u2032 \u2208 R is a fact or not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": null
},
{
"text": "The standard approach to KGC is to design a score function \u03a6 : E \u00d7 R \u00d7 E \u2192 R that assigns a large value when a triplet seems to be a fact. Socher et al. (2013a) defined it as follows. We report Hits@n in the filtered setting. * Results are those in (Trouillon et al., 2016) we assume all slice matrices of tensors among relations form a commuting family. The loss function used to train the models is shown below:",
"cite_spans": [
{
"start": 139,
"end": 160,
"text": "Socher et al. (2013a)",
"ref_id": "BIBREF15"
},
{
"start": 249,
"end": 273,
"text": "(Trouillon et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Loss Function",
"sec_num": null
},
{
"text": "u T r f ( e T s W [1:k] r e o + V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Loss Function",
"sec_num": null
},
{
"text": "N \u2211 i=1 C \u2211 c=1 max ( 0, 1 \u2212 \u03a6 ( T (i) ) + \u03a6 ( T (i) c )) +\u03bb\u2225\u2126\u2225 2 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Loss Function",
"sec_num": null
},
{
"text": "where \u03bb\u2225\u2126\u2225 2 2 is an L2 regularization term, T (i) denotes the i-th example of training data of size N , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Loss Function",
"sec_num": null
},
{
"text": "T (i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Loss Function",
"sec_num": null
},
{
"text": "c is one of C randomly sampled negative examples for the i-th training example. We generated negative samples of a triplet (s, r, o) by corrupting its subject or object entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Loss Function",
"sec_num": null
},
{
"text": "We used the Wordnet (WN18) and Freebase (FB15k) datasets to verify the benefits of our proposed methods. The dataset statistics are given in Table 2 . We selected hyper-parameters based on Socher et al. (2013a) and Yang et al. (2015) : For all of the models, the size of mini-batches was set to 1000, the dimensionality of the entity vector to d = 100, and the regularization parameter to 0.0001; the tensor slice size was set to k = 4 for all models, except NTN for which we also tested with k = 1 to see the influence of the slice size on the performance. We performed 300 epochs of training for Wordnet and 100 on Freebase using Adagrad (Duchi et al., 2011) with the initial learning rate set to 0.1.",
"cite_spans": [
{
"start": 189,
"end": 210,
"text": "Socher et al. (2013a)",
"ref_id": "BIBREF15"
},
{
"start": 215,
"end": 233,
"text": "Yang et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 640,
"end": 660,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": null
},
{
"text": "For evaluation, we removed the subject or object entity of each test example and then replaced it with all the entities in E. We computed the scores of these corrupted triplets and ranked them in descending order of scores. We here report the results collected in filtered and raw settings. In the filtered setting, given test example (s, r, o), we remove from the ranking all the other positive triplets that appear in either training, validation, or test dataset, whereas the raw metrics do not remove these triplets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": null
},
{
"text": "Experimental results are shown in Table 3 . We observe the following:",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 41,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 The performance of NN and NTNs differs considerably; Apparently, NN is inadequate for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 By comparing the results of NTNs with different slice sizes, we see that k = 4 performs better than k = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 NTN-SMDs perform better than NN, but are all inferior to NTNs, although their results improved as m (the rank of decomposed matrices) is increased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 NTN-Diag achieved better results than NTN, although it has far fewer parameters than NTN and the datasets contain many unsymmetrical triplets. This demonstrates that NTN-Diag solves the overfitting problem of NTN without sacrificing the expressiveness power. NTN-Diag also has fewer parameters than the smallest (m = 1) NTN-SMD. Thus, we conclude that NTN-Diag is a better alternative of NTN than NTN-SMD is, in terms of both accuracy and computational cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "Conjunctive normal form m \u2227 i=1 n i \u2228 j=1 Aij Disjunctive normal form m \u2228 i=1 n i \u2227 j=1 Aij",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "Entailment A \u228f B A \u2282 B Reverse entailment A \u2290 B A \u2283 B Equivalence A \u2261 B A = B Alternation A | B A \u2229 B = \u2205 \u2227 A \u222a B \u0338 = D Negation A \u2227 B A \u2229 B = \u2205 \u2227 A \u222a B = D Cover A \u2323 B A \u2229 B \u0338 = \u2205 \u2227 A \u222a B = D Independence A # B else",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 NTN-Comp outperformed NTN-Diag, showing that its flexible constraint on matrices yielded additional expressiveness. However, NTN-Diag and NTN-Comp do not exceed DistMult and ComplEx, respectively, in almost all measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "Although not shown in the table, in this experiment, NTN-Diag and NTN-Comp was, respectively, 3 and 1.7 times as fast as NTN to train.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "To validate the performance of our proposed models in a recursive neural network setting, we experimentally tested them by having them solve a semantic compositionality problem in logic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Logical Reasoning",
"sec_num": "6.2"
},
{
"text": "This task definition basically follows Bowman et al. 2015: Given a pair of artificially generated propositional logic formulas, classify the relation between the formulas into one of the seven basic semantic relations of natural logic (MacCartney and Manning, 2009) . Table 5 shows these seven relation types. The formulas consist of propositional variables, negation, and conjunction and disjunction connectives. Although Bowman et al. (2015) generated formulas with no constraint on its form, we restricted them to disjunctive normal not p3 \u2227 p3 p3 \u228f (p3 or p2) (p1 or(p2 or p4))) \u2290 (p2 and not p4) form (DNF) or conjunctive normal form (CNF) ( Table 4 ). Recall that any propositional formula can be transformed into these forms.",
"cite_spans": [
{
"start": 235,
"end": 265,
"text": "(MacCartney and Manning, 2009)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 268,
"end": 275,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 647,
"end": 654,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Task",
"sec_num": null
},
{
"text": "Following Bowman et al. 2015, we constructed a model that infers the relations between formula pairs, as described in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Models and Loss Function",
"sec_num": null
},
{
"text": "The model consists of two layers: composition and comparison layers (Figure 1) . The composition layer outputs the embeddings of both left and right formulas by recursive neural networks. Subsequently, the comparison layer compares the two embeddings using a single layer neural network, and then a softmax classifier receives its output. In the composition layer, we set different parameters for and and or operations. As a loss function, we used cross entropy with L2 regularization and apply the NTNs in Section 4 to the comparison layer and uses RNTNs for as the composition layer.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 78,
"text": "(Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Models and Loss Function",
"sec_num": null
},
{
"text": "In this experiment, an example is a pair of propositional formulas, and its class label is the seven relation types between the pair. We generated examples following the protocol described in Bowman et al. (2015) , with the exception that the formulas are restricted to CNF or DNF, as mentioned above. We obtained 62,589 training examples, 13,413 validation examples, and 55,150 test examples. Each formula in the training and validation examples contains up to four logical operators, whereas those in the test examples have Table 7 : Result of logical inference for Tests 1-12. Example in Test n has n logical operators in either or both left and right formulas. Each score is the average accuracy of five trials of the \u03bb that achieved best performance on validation set. \"Majority class\" denotes the ratio of the majority class (relation \"#\", i.e., Independence; see Table 5 ). ) and the output size of comparison layer is k = 75, and we used AdaDelta (Zeiler, 2012) for an optimizer. We searched for the best coefficient \u03bb of L2 regularization in \u03bb \u2208 {0.0001, 0.0003, 0.0005, 0.0007, 0.0009, 0.001}, whereas Bowman et al. (2015) set \u03bb to 0.001 for RNN and 0.0003 for RNTN.",
"cite_spans": [
{
"start": 192,
"end": 212,
"text": "Bowman et al. (2015)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 526,
"end": 533,
"text": "Table 7",
"ref_id": null
},
{
"start": 870,
"end": 877,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": null
},
{
"text": "The results are shown in Table 7 . From the table, we observe the following:",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 As with KGC, the large difference in performance between RNN and RNTN suggests that this logical reasoning task requires feature interactions to be captured 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 RNTN-Diag achieved the best accuracy except for Tests 2 and 12 and outperformed RNTN except for Test 2. This is not surprising because both and and or are symmetric: p 1 and p 2 equals p 2 and p 1 . This matches the tensor term in RNTN-Diag which is symmetric with respect to x 1 and x 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 RNTN-Comp was the second best except for Tests 1-3 and 10-12. For all tests, its accuracy was comparable with or superior to that of RNTN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "\u2022 RNTN-SMD (m = 1) was inferior to RNTN for most test sets, although some good results were observed with m = 1, 2, 3 on Tests 11 and 12. Indeed, except for Tests 9-12, RNTN-SMD (m = 1) was inferior even to RNN despite the larger number of parameters in RNTN-SMD. RNTN-SMD (m = 2) obtained better results than m = 1, but it is still worse than RNTN except for Tests 10-12. Further increase in m (m = 4, 8, 16) worsened the accuracy despite an increase of the number of parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "We also evaluated the stability of the model over different trials and hyperparameters. Table 8 shows the best average accuracy for each compared model (among all the tested \u03bb) on the validation set. The parenthesized figures (on the rightmost column) show the standard deviation over five independent trials used for computing the average, i.e., all five trials used the same \u03bb value that achieved the best average accuracy. We see that RNTN-SMDs have larger standard deviations than reason, we did not test TreeLSTM in this paper. Finally, Figure 3 shows that training times increase quadratically with dimension for RNTN that has O(n 2 k) parameters, but not for our methods, which have only O(nk) parameters.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 8",
"ref_id": "TABREF9"
},
{
"start": 542,
"end": 550,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Result",
"sec_num": null
},
{
"text": "We proposed two new parameter reduction methods for tensors in NTNs. The first method constrains the slice matrices to be symmetric, and the second assumes them to be normal matrices. In both methods, the number of a 3D tensor param- eters is reduced from O(n 2 k) to O(nk) after the constrained matrices are eigendecomposed. By removing the tensor's surplus parameters, our methods learn better and faster as was shown in experiments. 2 Future work will test the versatility of our proposals, RNTN-Diag and RNTN-Comp, in other tasks that deal with data sets exhibiting carious structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Bowman (2016) also evaluated TreeLSTM, but its advantage over RNN was unclear in their experiment. For that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Code of the two experiments will be available at https://github.com/tkhrshhr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Polynomial semantic indexing",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Kunihiko",
"middle": [],
"last": "Sadamasa",
"suffix": ""
},
{
"first": "Yanjun",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Corinna",
"middle": [],
"last": "Cortes",
"suffix": ""
},
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems 22",
"volume": "",
"issue": "",
"pages": "64--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Bai, Jason Weston, David Grangier, Ronan Col- lobert, Kunihiko Sadamasa, Yanjun Qi, Corinna Cortes, and Mehryar Mohri. 2009. Polynomial se- mantic indexing. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22. Curran Associates, Inc., pages 64-72.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling natural language semantics in learned representations",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman. 2016. Modeling natural language semantics in learned representations. Ph.D. thesis, Stanford University.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Recursive neural networks can learn logical semantics",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Samuel R Bowman",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R Bowman, Christopher Potts, and Christo- pher D Manning. 2015. Recursive neural networks can learn logical semantics. Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality .",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Compressing neural networks with the hashing trick",
"authors": [
{
"first": "Wenlin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Tyree",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yixin",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2285--2294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, and Yixin Chen. 2015. Compressing neural networks with the hashing trick. In Proceed- ings of the 32nd International Conference on Ma- chine Learning. pages 2285-2294.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An exploration of parameter redundancy in deep networks with circulant projections",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Felix",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rogerio",
"suffix": ""
},
{
"first": "Sanjiv",
"middle": [],
"last": "Feris",
"suffix": ""
},
{
"first": "Alok",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Shi-Fu",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2857--2865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Cheng, Felix X Yu, Rogerio S Feris, Sanjiv Kumar, Alok Choudhary, and Shi-Fu Chang. 2015. An ex- ploration of parameter redundancy in deep networks with circulant projections. In Proceedings of the IEEE International Conference on Computer Vision. pages 2857-2865.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation pages",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation pages 1724-1734.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2121--2159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12(Jul):2121-2159.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On the equivalence of holographic and complex embeddings for link prediction",
"authors": [
{
"first": "Katsuhiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Shimbo",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "554--559",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsuhiko Hayashi and Masashi Shimbo. 2017. On the equivalence of holographic and complex embed- dings for link prediction. Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics pages 554-559.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Gradient-based learning applied to document recognition",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Haffner",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the IEEE",
"volume": "86",
"issue": "11",
"pages": "2278--2324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, L\u00e9on Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278-2324.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Analogical inference for multi-relational embeddings",
"authors": [
{
"first": "Hanxiao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuexin",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 34th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2168--2178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanxiao Liu, Yuexin Wu, and Yiming Yang. 2017. Analogical inference for multi-relational embed- dings. In Proceedings of the 34th International Con- ference on Machine Learning. pages 2168-2178.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning context-sensitive word embeddings with neural tensor skip-gram model",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1284--1290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2015. Learning context-sensitive word embeddings with neural tensor skip-gram model. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence. pages 1284-1290.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning compact recurrent neural networks",
"authors": [
{
"first": "Zhiyun",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Sindhwani",
"suffix": ""
},
{
"first": "Tara",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
}
],
"year": 2016,
"venue": "Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE",
"volume": "",
"issue": "",
"pages": "5960--5964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyun Lu, Vikas Sindhwani, and Tara N Sainath. 2016. Learning compact recurrent neural net- works. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, pages 5960-5964.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An extended model of natural logic",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the eighth international conference on computational semantics",
"volume": "",
"issue": "",
"pages": "140--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proceedings of the eighth international conference on compu- tational semantics. Association for Computational Linguistics, pages 140-156.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A three-way model for collective learning on multi-relational data",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Nickel",
"suffix": ""
},
{
"first": "Hans-Peter",
"middle": [],
"last": "Volker Tresp",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kriegel",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 28th international conference on machine learning",
"volume": "",
"issue": "",
"pages": "809--816",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th international conference on machine learn- ing. pages 809-816.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reasoning with neural tensor networks for knowledge base completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems 26",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013a. Reasoning with neural ten- sor networks for knowledge base completion. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahra- mani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26. pages 926-934.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 conference on empirical methods in natural language processing. pages 1631-1642.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Complex embeddings for simple link prediction",
"authors": [
{
"first": "Th\u00e9o",
"middle": [],
"last": "Trouillon",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Bouchard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2071--2080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel,\u00c9ric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In Proceed- ings of the 33rd International Conference on Ma- chine Learning. pages 2071-2080.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Embedding entities and relations for learning and inference in knowledge bases",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. International Conference on Learning Rep- resentations .",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adadelta: An adaptive learning rate method",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1212.5701"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. Adadelta: An adaptive learn- ing rate method. arXiv preprint arXiv:1212.5701 .",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Phrase type sensitive tensor indexing model for semantic composition",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2195--2202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Zhao, Zhiyuan Liu, and Maosong Sun. 2015. Phrase type sensitive tensor indexing model for se- mantic composition. In Proceedings of the Twenty- Ninth AAAI Conference on Artificial Intelligence. pages 2195-2202.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Comparison and composition layers. not p 4 is treated as an embedding.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Sensitivity of accuracy to \u03bb. RNTN, RNTN-Diag and RNTN-Comp. This indicates that RNTN-SMD is a less reliable model. RNTN-SMDs are also unstable, not only within the same \u03bb, but also between different \u03bbs. Figure 2 describes how accuracies are impacted by \u03bbs. The top graph shows validation accuracies between different \u03bb values. RNTN, RNTN-Diag and RNTN-Comp are stable, whereas RNN and RNTN-SMDs have steep drops. The bottom one describes the accuracies for Test 12. This also shows that RNTN-SMDs are unstable and that RNTN-Diag achieves distinctive performances.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF4": {
"text": "Training times of the models.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "Comparison of the number of parameters among the models",
"type_str": "table",
"content": "<table><tr><td>4.1 Baseline Models</td></tr><tr><td>Neural Network (NN)</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"text": "Dataset statistics.",
"type_str": "table",
"content": "<table><tr><td>6 Experiment</td></tr><tr><td>6.1 Knowledge Graph Completion</td></tr><tr><td>To evaluate their performance for link prediction</td></tr><tr><td>on knowledge graphs, we compared our proposed</td></tr><tr><td>methods (NTN-Diag and NTN-Comp) to baseline</td></tr><tr><td>methods (NTN</td></tr></table>"
},
"TABREF4": {
"html": null,
"num": null,
"text": "Here, e s , e o \u2208 R n are entity embeddings and W r , V r , b r , u r are parameters for each relation r. u r is a k-dimensional vector to map f 's output R k to R which indicates a score. f is the hyperbolic tangent. To compare the performances of the baselines and proposed models, we change the mapping before an activation. For NTN-SMD, we change term e T s W",
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td>WN18</td><td/><td/><td/><td/><td>FB15K</td><td/><td/></tr><tr><td/><td colspan=\"2\">MRR</td><td/><td>Hits@</td><td/><td colspan=\"2\">MRR</td><td colspan=\"2\">Hits@</td><td/></tr><tr><td>model</td><td>Filter</td><td>Raw</td><td>1</td><td>3</td><td>10</td><td>Filter</td><td>Raw</td><td>1</td><td>3</td><td/><td>10</td></tr><tr><td>NN NTN (k = 1) NTN (k = 4)</td><td colspan=\"11\">0.111 0.106 0.740 0.512 67.6 78.4 85.2 0.347 0.188 24.1 39.3 55.2 7.0 11.7 18.3 0.259 0.165 17.9 28.1 41.7 0.754 0.530 69.3 79.5 86.3 0.380 0.198 27.1 43.0 59.2</td></tr><tr><td colspan=\"12\">NTN-SMD (m = 1) NTN-SMD (m = 2) NTN-SMD (m = 3) NTN-SMD (m = 10) 0.533 0.413 42.2 59.4 74.5 0.333 0.188 22.8 37.5 53.8 0.243 0.216 15.9 26.1 40.9 0.278 0.172 19.3 30.1 44.7 0.224 0.199 15.1 23.8 37.2 0.298 0.177 20.7 32.7 47.8 0.299 0.255 20.4 32.4 49.2 0.312 0.183 21.7 34.5 49.9 NTN-SMD (m = 25) 0.618 0.463 52.1 67.8 80.0 0.341 0.187 23.2 38.6 55.5</td></tr><tr><td>NTN-Diag NTN-Comp</td><td colspan=\"11\">0.824 0.590 74.8 89.6 92.7 0.443 0.238 31.5 51.2 68.5 0.857 0.610 80.1 90.9 93.1 0.490 0.246 36.3 56.7 71.9</td></tr><tr><td>DistMult * ComplEx *</td><td colspan=\"11\">0.822 0.532 72.8 91.4 93.6 0.654 0.242 54.6 73.3 82.4 0.941 0.587 93.6 94.5 94.7 0.692 0.242 59.9 75.9 84.0</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>[</td><td>e o e s</td><td>]</td><td>+ b r</td><td>)</td></tr><tr><td/><td/><td/><td/><td colspan=\"8\">[1:k] r apply NTN-Diag and NTN-Comp in this model, e o to e T s S [1:k] r T [1:k] r e o . To</td></tr></table>"
},
"TABREF5": {
"html": null,
"num": null,
"text": "Mean Reciprocal Rank (MRR) and Hits@n for the models tested on WN18 and FB15k. MRR is reported in the raw and filtered settings. Hits@n metrics are percentages of test examples that lie in the top n ranked results.",
"type_str": "table",
"content": "<table/>"
},
"TABREF6": {
"html": null,
"num": null,
"text": "Conjunctive and disjunctive normal forms in propositional logic. A ij is a literal, which is a propositional variable or its negation. For example, p 1 and \u00acp 2 are literal, but not \u00ac\u00acp 3 .",
"type_str": "table",
"content": "<table><tr><td>Name</td><td>Symbol</td><td>Set-theoretic definition</td></tr></table>"
},
"TABREF7": {
"html": null,
"num": null,
"text": "Natural logic relations over formula pairs. A and B denote a formula in propositional logic.",
"type_str": "table",
"content": "<table/>"
},
"TABREF8": {
"html": null,
"num": null,
"text": "Short examples of type of formulas and their relations in datasets.",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Softmax classifier</td><td colspan=\"2\">P (\u2290) = 0.8</td><td/></tr><tr><td>Comparison N(T)N layer</td><td colspan=\"4\">(p 1 or (p 2 or p 4 )) vs (p 2 and not p 4 )</td></tr><tr><td/><td colspan=\"2\">(p 1 or (p 2 or p 4 ))</td><td colspan=\"2\">(p 2 and not p 4 )</td></tr><tr><td>Composition</td><td>or</td><td/><td>and</td><td/></tr><tr><td>RN(T)N layer</td><td>p 1</td><td>(p 2 or p 4 )</td><td>p 2</td><td>not p 4</td></tr><tr><td/><td/><td>or</td><td/><td/></tr><tr><td/><td>p 2</td><td>p 4</td><td/><td/></tr></table>"
},
"TABREF9": {
"html": null,
"num": null,
"text": "Average accuracy and standard deviation on the validation dataset. The reported values are average over the best-performing model \u03bb in each method.",
"type_str": "table",
"content": "<table><tr><td>up to 12 logical operators. Every formula consists</td></tr><tr><td>of up to four variables taken from six propositional</td></tr><tr><td>variables that are shared among all the examples.</td></tr><tr><td>Hyperparameters and optimization are based on</td></tr><tr><td>Bowman et al. (2015): Embedding size d = 25</td></tr><tr><td>(for RNN, d = 45</td></tr></table>"
}
}
}
}