{
"paper_id": "S14-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:32:16.203022Z"
},
"title": "Compositional Distributional Semantics Models in Chunk-based Smoothed Tree Kernels",
"authors": [
{
"first": "Nghia",
"middle": [
"The"
],
"last": "Pham",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Trento",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lorenzo",
"middle": [],
"last": "Ferrone",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rome \"Tor Vergata\"",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rome \"Tor Vergata\"",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The field of compositional distributional semantics has proposed very interesting and reliable models for accounting the distributional meaning of simple phrases. These models however tend to disregard the syntactic structures when they are applied to larger sentences. In this paper we propose the chunk-based smoothed tree kernels (CSTKs) as a way to exploit the syntactic structures as well as the reliability of these compositional models for simple phrases. We experiment with the recognizing textual entailment datasets. Our experiments show that our CSTKs perform better than basic compositional distributional semantic models (CDSMs) recursively applied at the sentence level, and also better than syntactic tree kernels.",
"pdf_parse": {
"paper_id": "S14-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "The field of compositional distributional semantics has proposed very interesting and reliable models for accounting the distributional meaning of simple phrases. These models however tend to disregard the syntactic structures when they are applied to larger sentences. In this paper we propose the chunk-based smoothed tree kernels (CSTKs) as a way to exploit the syntactic structures as well as the reliability of these compositional models for simple phrases. We experiment with the recognizing textual entailment datasets. Our experiments show that our CSTKs perform better than basic compositional distributional semantic models (CDSMs) recursively applied at the sentence level, and also better than syntactic tree kernels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A clear interaction between syntactic and semantic interpretations for sentences is important for many high-level NLP tasks, such as question-answering, textual entailment recognition, and semantic textual similarity. Systems and models for these tasks often use classifiers or regressors that exploit convolution kernels (Haussler, 1999) to model both interpretations.",
"cite_spans": [
{
"start": 322,
"end": 338,
"text": "(Haussler, 1999)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Convolution kernels are naturally defined on spaces where there exists a similarity function between terminal nodes. This feature has been used to integrate distributional semantics within tree kernels. This class of kernels is often referred to as smoothed tree kernels (Mehdad et al., 2010; Croce et al., 2011 ), yet, these models only use distributional vectors for words.",
"cite_spans": [
{
"start": 271,
"end": 292,
"text": "(Mehdad et al., 2010;",
"ref_id": "BIBREF12"
},
{
"start": 293,
"end": 311,
"text": "Croce et al., 2011",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Compositional distributional semantics models (CDSMs) on the other hand are functions mapping text fragments to vectors (or higher-order tensors) which then provide a distributional meaning for simple phrases or sentences. Many CDSMs have been proposed for simple phrases like nonrecursive noun phrases or verbal phrases (Mitchell and Lapata, 2008 ; Baroni and Zamparelli, 2010; Clark et al., 2008; Grefenstette and Sadrzadeh, 2011; . Non-recursive phrases are often referred to as chunks (Abney, 1996) , and thus, CDSMs are good and reliable models for chunks.",
"cite_spans": [
{
"start": 321,
"end": 347,
"text": "(Mitchell and Lapata, 2008",
"ref_id": "BIBREF13"
},
{
"start": 350,
"end": 378,
"text": "Baroni and Zamparelli, 2010;",
"ref_id": "BIBREF1"
},
{
"start": 379,
"end": 398,
"text": "Clark et al., 2008;",
"ref_id": "BIBREF2"
},
{
"start": 399,
"end": 432,
"text": "Grefenstette and Sadrzadeh, 2011;",
"ref_id": "BIBREF9"
},
{
"start": 489,
"end": 502,
"text": "(Abney, 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present the chunk-based smoothed tree kernels (CSTK) as a way to merge the two approaches: the smoothed tree kernels and the models for compositional distributional semantics. Our approach overcomes the limitation of the smoothed tree kernels which only use vectors for words by exploiting reliable CDSMs over chunks. CSTKs are defined over a chunk-based syntactic subtrees where terminal nodes are words or word sequences. We experimented with CSTKs on data from the recognizing textual entailment challenge (Dagan et al., 2006) and we compared our CSTKs with other standard tree kernels and standard recursive CDSMs. Experiments show that our CSTKs perform better than basic compositional distributional semantic models (CDSMs) recursively applied at the sentence level and better than syntactic tree kernels.",
"cite_spans": [
{
"start": 527,
"end": 547,
"text": "(Dagan et al., 2006)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 describes the CSTKs. Section 3 reports on the experimental setting and on the results. Finally, Section 4 draws the conclusions and sketches the future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section describes the new class of kernels. We first introduce the notion of the chunk-based syntactic subtree. Then, we describe the recursive formulation of the class of kernels. Finally, we introduce the basic CDSMs we use and we introduce two instances of the class of kernels. A Chunk-based Syntactic Sub-Tree is a subtree of a syntactic tree where each non-terminal node dominating a contiguous word sequence is collapsed into a chunk and, as usual in chunks (Abney, 1996) , the internal structure is disregarded. For example, Figure 2 reports some chunk-based syntactic subtrees of the tree in Figure 1 . Chunks are represented with a pre-terminal node dominating a triangle that covers a word sequence. The first subtree represents the chunk covering the second NP and the node dominates the word sequence its:d final:n concert:n. The second subtree represents the structure of the whole sentence and one chunk, that is the first NP dominating the word sequence the:d rock:n band:n. The third subtree again represents the structure of the whole sentence split into two chunks without the verb. In the following sections, generic trees are denoted with the letter t and N (t) denotes the set of non-terminal nodes of tree t. Each non-terminal node n \u2208 N (t) has a label s n representing its syntactic tag. As usual for constituency-based parse trees, pre-terminal nodes are nodes that have a single terminal node as child. Terminal nodes of trees are words denoted with w:pos where w is the actual token and pos is its postag. The structure of these trees is represented as follows. Given a tree t, c i (n) denotes i-th child of a node n in the set of nodes N (t). The production rule headed in node n is prod(n), that is, given the node n with m children, prod(n) is:",
"cite_spans": [
{
"start": 470,
"end": 483,
"text": "(Abney, 1996)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 538,
"end": 546,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 606,
"end": 614,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Chunk-based Smoothed Tree Kernels",
"sec_num": "2"
},
{
"text": "S h h h h h @ @ @ @ @ NP $ $ $ $ DT the:d NN rock:n NN band:n VP $ $ $ $ VBZ holds:v NP $ $ $ $ PRP its:p JJ final:j NN concert:n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and preliminaries",
"sec_num": "2.1"
},
{
"text": "prod(n) = s n \u2192 s c 1 (n) . . . s cm(n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and preliminaries",
"sec_num": "2.1"
},
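{
"text": "A minimal Python sketch of this notation (the Node class and the helper names prod and d are our own illustrative assumptions, not part of the original formulation):\n\nclass Node:\n    def __init__(self, label, children=None, word=None):\n        self.label = label              # syntactic tag s_n, or w:pos for terminals\n        self.children = children or []\n        self.word = word                # set only for terminal nodes\n\ndef prod(n):\n    # prod(n) = s_n \u2192 s_{c_1(n)} ... s_{c_m(n)}\n    return (n.label, tuple(c.label for c in n.children))\n\ndef d(n):\n    # the word sequence dominated by node n\n    if n.word is not None:\n        return [n.word]\n    return [w for c in n.children for w in d(c)]\n\n# e.g., for the VP of Figure 1, d(vp) returns ['holds:v', 'its:p', 'final:j', 'concert:n']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and preliminaries",
"sec_num": "2.1"
},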
{
"text": "Finally, for a node n in N (t), the function d(n) generates the word sequence dominated by the non-terminal node n in the tree t. For example, d(VP) in Figure 1 is holds:v its:p final:j concert:n.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 160,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Notation and preliminaries",
"sec_num": "2.1"
},
{
"text": "Chunk-based Syntactic Sub-Trees (CSSTs) are instead denoted with the letter \u03c4 . Differently from trees t, CSSTs have terminal nodes that can represent subsequences of words of the original sentence. The explicit syntactic structure of a CSST is the structure not falling in chunks and it is represented as s(\u03c4 ). For example, s(\u03c4 3 ) is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and preliminaries",
"sec_num": "2.1"
},
{
"text": "S r r VBZ NP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and preliminaries",
"sec_num": "2.1"
},
{
"text": "where \u03c4 3 is the third subtree of Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Notation and preliminaries",
"sec_num": "2.1"
},
{
"text": "Given a tree t, the set S(t) is defined as the set containing all the relevant CSSTs of the tree t. As for the tree kernels (Collins and Duffy, 2002) , the set S(t) contains all CSSTs derived from the subtrees of t such that if a node n belongs to a subtree t s , all the siblings of n in t belongs to t s . In other words, productions of the initial subtrees are complete. A CSST is obtained by collapsing in a single terminal nodes a contiguous sequence of words dominated by a single non-terminal node. Finally, \u2192 w n \u2208 R m represent the distributional vectors for words w n and f (w 1 . . . w k ) represents a compositional distributional semantics model applied to the word sequence w 1 . . . w k .",
"cite_spans": [
{
"start": 124,
"end": 149,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and preliminaries",
"sec_num": "2.1"
},
{
"text": "As usual, a tree kernel, although written in a recursive way, computes the following general equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "K(t 1 , t 2 ) = \u03c4i \u2208 S(t1) \u03c4j \u2208 S(t2) \u03bb |N (\u03c4 i )|+|N (\u03c4 j )| K F (\u03c4 i , \u03c4 j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "(1) In our case, the basic similarity K F (t i , t j ) is defined to take into account the syntactic structure and the distributional semantic part. Thus, we define it as follows in line with what done with several other smoothed tree kernels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "K F (\u03c4 i , \u03c4 j ) = \u03b4(s(\u03c4 i ), s(\u03c4 j )) a \u2208 P T (\u03c4i) b \u2208 P T (\u03c4j ) f (a), f (b) where \u03b4(s(\u03c4 i ), s(\u03c4 j ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "is the Kroneker's delta function between the the structural part of two chunk-based syntactic subtrees, P T (\u03c4 ) are the nodes in \u03c4 directly covering a chunk or a word, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "\u2192 x , \u2192",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "y is the cosine similarity between the two vectors \u2192",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "x and \u2192 y . For example, given the chunk-based subtree \u03c4 3 in Figure 2 and",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "\u03c4 4 = S $ $ $ $ NP $ $ $ $ the:d orchestra:n VP 3 3 VBZ NP its:p show:n the similarity K F (\u03c4 3 , \u03c4 4 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "is: f (the:d orchestra:n), f (the:d rock:n band:n) \u2022 f (its:p show:n), f (its:p final:j concert:n) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
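{
"text": "A minimal sketch of this computation (cos, the chunk lists, and the stand-in CDSM f are illustrative assumptions; \u03b4 is taken to be 1 since the two structural parts match):\n\nimport numpy as np\n\ndef cos(x, y):\n    # cosine similarity between two vectors\n    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))\n\ndef k_f(chunks_i, chunks_j, f):\n    # product of cosine similarities between aligned chunk vectors\n    sims = [cos(f(a), f(b)) for a, b in zip(chunks_i, chunks_j)]\n    return float(np.prod(sims))\n\ntau4 = [['the:d', 'orchestra:n'], ['its:p', 'show:n']]\ntau3 = [['the:d', 'rock:n', 'band:n'], ['its:p', 'final:j', 'concert:n']]\n# k_f(tau4, tau3, f) with any CDSM f mirrors the computation above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},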
{
"text": "The recursive formulation of the Chunk-based Smoothed Tree Kernel (CSTK) is a bit more complex but very similar to the recursive formulation of the syntactic tree kernels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K(t 1 , t 2 ) = n1 \u2208 N (t1) n2 \u2208 N (t2) C(n 1 , n 2 )",
"eq_num": "(2)"
}
],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "C(n 1 , n 2 ) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 f (d(n 1 )), f (d(n 2 )) if label(n 1 ) = label(n 2 ) and prod(n 1 ) = prod(n 2 ) f (d(n 1 )), f (d(n 2 )) + nc(n 1 ) j=1 (1 + C(c j (n 1 ), c j (n 2 ))) \u2212 nc(n 1 ) j=1 f (d(c j (n 1 ))), f (d(c j (n 2 )))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
{
"text": "if n 1 , n 2 are not pre-terminals and prod(n 1 ) = prod(n 2 ) 0 otherwise where nc(n 1 ) is the lenght of the production prod(n 1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},
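{
"text": "A sketch of this recursion (it reuses the Node, prod, and d helpers sketched in Section 2.1 and assumes a CDSM f and a cosine function cos; reading the first case as requiring differing productions is our own reconstruction):\n\ndef is_preterminal(n):\n    return len(n.children) == 1 and n.children[0].word is not None\n\ndef C(n1, n2, f, cos):\n    same_prod = prod(n1) == prod(n2)\n    if n1.label == n2.label and not same_prod:\n        # case 1: same label, different productions\n        return cos(f(d(n1)), f(d(n2)))\n    if same_prod and not is_preterminal(n1) and not is_preterminal(n2):\n        # case 2: same production, both internal nodes\n        rec, child_sims = 1.0, 1.0\n        for c1, c2 in zip(n1.children, n2.children):\n            rec *= 1.0 + C(c1, c2, f, cos)\n            child_sims *= cos(f(d(c1)), f(d(c2)))\n        return cos(f(d(n1)), f(d(n2))) + rec - child_sims\n    return 0.0\n\ndef nodes(t):\n    # N(t): the non-terminal nodes of t\n    if t.word is None:\n        yield t\n        for c in t.children:\n            yield from nodes(c)\n\ndef K(t1, t2, f, cos):\n    # Equation (2): sum C over all pairs of non-terminal nodes\n    return sum(C(n1, n2, f, cos) for n1 in nodes(t1) for n2 in nodes(t2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothed Tree Kernels on Chunk-based Syntactic Trees",
"sec_num": "2.2"
},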
{
"text": "To define specific CSTKs, we need to introduce the basic compositional distributional semantic models (CDSMs). We use two CDSMs: the Basic Additive model (BA) and teh Full Additive model (FA). We thus define two specific CSTKs: the CSTK+BA that is based on the basic additive model and the CSTK+FA that is based on the full additive model. We describe the two CDSMs in the following. The Basic Additive model (BA) (introduced in (Mitchell and Lapata, 2008)) computes the distibutional semantics vector of a pair of words a = a 1 a 2 as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
{
"text": "ADD(a 1 , a 2 ) = \u03b1 \u2192 a 1 + \u03b2 \u2192 a 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
{
"text": "where \u03b1 and \u03b2 weight the first and the second word of the pair. The basic additive model for word sequences s = w 1 . . . w k is recursively defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
{
"text": "f BA (s) = \u2192 w 1 if k = 1 \u03b1 \u2192 w 1 + \u03b2f BA (w 2 . . . w k ) if k > 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
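{
"text": "A minimal sketch of f_BA (the lookup table word_vec, mapping w:pos tokens to numpy vectors, is an illustrative assumption):\n\ndef f_ba(words, word_vec, alpha=1.0, beta=1.0):\n    # Basic Additive model applied recursively to a word sequence\n    if len(words) == 1:\n        return word_vec[words[0]]\n    return alpha * word_vec[words[0]] + beta * f_ba(words[1:], word_vec, alpha, beta)\n\n# e.g., f_ba(['the:d', 'rock:n', 'band:n'], word_vec) with alpha = beta = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},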
{
"text": "The Full Additive model (FA) (used in (Guevara, 2010) for adjective-noun pairs and (Zanzotto et al., 2010) for three different syntactic relations) computes the compositional vector \u2192 a of a pair using two linear tranformations A R and B R respectively applied to the vectors of the first and the second word. These matrices generally only depends on the syntactic relation R that links those two words. The operation follows: The full additive model for word sequences s = w 1 . . . w k , whose node has a production rule s \u2192 s c 1 . . . s cm is also defined recursively:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
{
"text": "f F A (a 1 , a 2 , R) = A R \u2192 a 1 + B R \u2192 a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
{
"text": "f F A (s) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u2192 w 1 if k = 1 A vn \u2192 V + B vn f F A (N P ) if s \u2192 V N P A an \u2192 A + B an f F A (N ) if s \u2192 A N f F A (s c i ) otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
{
"text": "where A vn , B vn are matrices used for verb and noun phrase interaction, and A an , B an are used for adjective, noun interaction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
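{
"text": "A sketch of f_FA under the same assumptions (mats, holding one trained (A, B) pair per relation, and the simplified child labels are illustrative; a real implementation would match full syntactic productions):\n\ndef f_fa(node, word_vec, mats):\n    # Full Additive model over a chunk\n    words = d(node)\n    if len(words) == 1:\n        return word_vec[words[0]]\n    labels = [c.label for c in node.children]\n    if labels == ['V', 'NP']:    # s \u2192 V NP\n        A, B = mats['vn']\n        return A @ word_vec[d(node.children[0])[0]] + B @ f_fa(node.children[1], word_vec, mats)\n    if labels == ['A', 'N']:     # s \u2192 A N\n        A, B = mats['an']\n        return A @ word_vec[d(node.children[0])[0]] + B @ f_fa(node.children[1], word_vec, mats)\n    # otherwise: descend into a child, as in the last case above\n    return f_fa(node.children[0], word_vec, mats)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},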
{
"text": "3 Experimental Investigation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Distributional Semantic Models and two Specific CSTKs",
"sec_num": "2.3"
},
{
"text": "We experimented with the Recognizing Textual Entailment datasets (RTE) (Dagan et al., 2006) . RTE is the task of deciding whether a long text T entails a shorter text, typically a single sentence, called hypothesis H. It has been often seen as a classification task (see (Dagan et al., 2013) ). We used four datasets: RTE1, RTE2, RTE3, and RTE5, with the standard split between training and testing. The dev/test distribution for RTE1-3, and RTE5 is respectively 567/800, 800/800, 800/800, and 600/600 T-H pairs. Distributional vectors are derived with DISSECT (Dinu et al., 2013 ) from a corpus obtained by the concatenation of ukWaC (wacky.sslmit.unibo.it), a mid-2009 dump of the English Wikipedia (en.wikipedia.org) and the British National Corpus (www.natcorp.ox.ac.uk), for a total of about 2.8 billion words. We collected a 35K-by-35K matrix by counting co-occurrence of the 30K most frequent content lemmas in the corpus (nouns, adjectives and verbs) and all the content lemmas occurring in the datasets within a 3 word window. The raw count vectors were transformed into positive Pointwise Mutual Information scores and reduced to 300 dimensions by Singular Value Decomposition. This setup was picked without tuning, as we found it effective in previous, unrelated experiments.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Dagan et al., 2006)",
"ref_id": "BIBREF6"
},
{
"start": 271,
"end": 291,
"text": "(Dagan et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 561,
"end": 579,
"text": "(Dinu et al., 2013",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental set-up",
"sec_num": "3.1"
},
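{
"text": "The vector-construction pipeline can be sketched as follows (counts is a hypothetical dense co-occurrence matrix; positive PMI followed by a rank-300 SVD are the steps described above):\n\nimport numpy as np\n\ndef ppmi(counts):\n    # positive Pointwise Mutual Information from raw co-occurrence counts\n    total = counts.sum()\n    row = counts.sum(axis=1, keepdims=True)\n    col = counts.sum(axis=0, keepdims=True)\n    with np.errstate(divide='ignore', invalid='ignore'):\n        pmi = np.log(counts * total / (row * col))\n    pmi[~np.isfinite(pmi)] = 0.0\n    return np.maximum(pmi, 0.0)\n\ndef reduce_dim(m, k=300):\n    # keep the top-k left singular vectors, scaled by the singular values\n    u, s, _ = np.linalg.svd(m, full_matrices=False)\n    return u[:, :k] * s[:k]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental set-up",
"sec_num": "3.1"
},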
{
"text": "We built the matrices for the full additive models using the procedure described in (Guevara, 2010) . We considered only two relations: the Adjective-Noun and Verb-Noun. The full additive model falls back to the basic additional model when syntactic relations are different from these two.",
"cite_spans": [
{
"start": 84,
"end": 99,
"text": "(Guevara, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental set-up",
"sec_num": "3.1"
},
{
"text": "To build the final kernel to learn the classifier, we followed standard approaches (Dagan et al., 2013) , that is, we exploited two models: a model with only a rewrite rule feature space (RR) and a model with the previous space along with a token-level similarity feature (RRTWS). The two models use our CSTKs and the standard TKs in the following way as kernel functions:",
"cite_spans": [
{
"start": 83,
"end": 103,
"text": "(Dagan et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental set-up",
"sec_num": "3.1"
},
{
"text": "(1) RR(p 1 , p 2 ) = \u03ba(t a 1 , t a 2 ) + \u03ba(t b 1 , t b 2 ); (2) RRT W S(p 1 , p 2 ) = \u03ba(t a 1 , t a 2 ) + \u03ba(t b 1 , t b 2 ) + (T W S(a 1 , b 1 ) \u2022 T W S(a 2 , b 2 ) + 1) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental set-up",
"sec_num": "3.1"
},
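{
"text": "A sketch of these two combinations (kappa stands for either a CSTK or a standard TK, tws for the token-level similarity; the pair representation with trees and texts fields is an illustrative assumption):\n\ndef rr(p1, p2, kappa):\n    # rewrite-rule kernel: tree kernels on the two texts and the two hypotheses\n    ta1, tb1 = p1['trees']\n    ta2, tb2 = p2['trees']\n    return kappa(ta1, ta2) + kappa(tb1, tb2)\n\ndef rrtws(p1, p2, kappa, tws):\n    # RR plus a squared token-level similarity feature\n    a1, b1 = p1['texts']\n    a2, b2 = p2['texts']\n    return rr(p1, p2, kappa) + (tws(a1, b1) * tws(a2, b2) + 1) ** 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental set-up",
"sec_num": "3.1"
},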
{
"text": "where T W S is a weighted token similarity (as in (Corley and Mihalcea, 2005) ). Table 1 shows the results of the experiments, the table is organised as follows: columns 2-6 report the accuracy of the RTE systems based on rewrite rules (RR) and columns 7-11 report the accuracies of RR systems along with token similarity (RRTS). We compare five differente models: ADD is the Basic Additive model with parameters \u03b1 = \u03b2 = 1 (as defined in 2.3) applied to the words of the sentence (without considering its tree structure), the same is done for the Full Additive (Ful-lADD), defined as in 2.3. The Tree Kernel (TK) as defined in (Collins and Duffy, 2002) are applied to the constituency-based tree representation of the tree, without the intervening collapsing step described in 2.2. These three models are the baseline against which we compare the CSTK models where the collapsing procedure is done via Basic Additive (CSTK + BA, again with \u03b1 = \u03b2 = 1) and FullAdditive (CSTK + FA), as described in section 2.2, again, with the aforementioned restriction on the relation considered. For RR models we have that CSTK+BA and CSTK+FA both achieve higher accuracy than ADD and FullAdd, with a statistical significante greater than 93.7%, as computed with the sign test. Specifically we have that CSTK+BA has an average accuracy 7.94% higher than ADD and 5.89% higher than FullADD, while CSTK+FA improves on ADD and FullADD by 8.52% and 6.46%, respectively. The same trend is visible for the RRTS model, again both models are statistically better than ADD and FullADD, in this case we have that CSTK+BA is 8.63% more accurate then ADD and 2.11% more than FullADD, CSTK+FA is respectively 8.98% and 2.43% more accurate than ADD and FullADD. As for the TK models we have that both CSTK models achieve again an higher average accuracy: for RR models CSTK+BA and CSTK+FA are respectively 2.01% and 0.15% better than TK, while for RRTS models the number are 2.54% and 0.47%. These results though are not statistically significant, as is the difference between the two CSTK models themselves.",
"cite_spans": [
{
"start": 50,
"end": 77,
"text": "(Corley and Mihalcea, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 627,
"end": 652,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental set-up",
"sec_num": "3.1"
},
{
"text": "In this paper, we introduced a novel sub-class of the convolution kernels in order exploit reliable compositional distributional semantic models along with the syntactic structure of sentences. Experiments show that this novel subclass, namely, the Chunk-based Smoothed Tree Kernels (CSTKs), are a promising solution, performing significantly better than a naive recursive application of the compositional distributional semantic models. We experimented with CSTKS equipped with the basic additive and the full additive CDSMs but these kernels are definitely open to all the CDSMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "We acknowledge ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "editor, Corpus-based methods in language and speech",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney. 1996. Part-of-speech tagging and par- tial parsing. In G.Bloothooft K.Church, S.Young, editor, Corpus-based methods in language and speech. Kluwer academic publishers, Dordrecht.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1183--1193",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183-1193, Cambridge, MA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A compositional distributional model of meaning",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Second Symposium on Quantum Interaction (QI-2008)",
"volume": "",
"issue": "",
"pages": "133--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2008. A compositional distributional model of meaning. Proceedings of the Second Symposium on Quantum Interaction (QI-2008), pages 133-140.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Nigel Duffy. 2002. New rank- ing algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Pro- ceedings of ACL02.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Measuring the semantic similarity of texts",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment",
"volume": "",
"issue": "",
"pages": "13--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Corley and Rada Mihalcea. 2005. Measur- ing the semantic similarity of texts. In Proc. of the ACL Workshop on Empirical Modeling of Seman- tic Equivalence and Entailment, pages 13-18. As- sociation for Computational Linguistics, Ann Arbor, Michigan, June.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Structured lexical similarity via convolution kernels on dependency trees",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1034--1046",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via con- volution kernels on dependency trees. In Proceed- ings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 1034-1046, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The pascal recognising textual entailment challenge",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "3944",
"issue": "",
"pages": "177--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Quionero-Candela et al., editor, LNAI 3944: MLCW 2005, pages 177-190. Springer- Verlag, Milan, Italy.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Recognizing Textual Entailment: Models and Applications. Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Sammons",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Dan Roth, Mark Sammons, and Fabio Mas- simo Zanzotto. 2013. Recognizing Textual Entail- ment: Models and Applications. Synthesis Lectures on Human Language Technologies. Morgan & Clay- pool Publishers.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "DISSECT: DIStributional SEmantics Composition Toolkit",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Nghia The",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL (System Demonstrations)",
"volume": "",
"issue": "",
"pages": "31--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013. DISSECT: DIStributional SEmantics Com- position Toolkit. In Proceedings of ACL (System Demonstrations), pages 31-36, Sofia, Bulgaria.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Experimental support for a categorical compositional distributional model of meaning",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1394--1404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical composi- tional distributional model of meaning. In Proceed- ings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11, pages 1394-1404, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A regression model of adjective-noun compositionality in distributional semantics",
"authors": [
{
"first": "Emiliano",
"middle": [],
"last": "Guevara",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "33--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emiliano Guevara. 2010. A regression model of adjective-noun compositionality in distributional se- mantics. In Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Seman- tics, pages 33-37, Uppsala, Sweden, July. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Convolution kernels on discrete structures",
"authors": [
{
"first": "David",
"middle": [],
"last": "Haussler",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Haussler. 1999. Convolution kernels on discrete structures. Technical report, University of Califor- nia at Santa Cruz.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Syntactic/semantic structures for textual entailment recognition",
"authors": [
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "1020--1028",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yashar Mehdad, Alessandro Moschitti, and Fabio Mas- simo Zanzotto. 2010. Syntactic/semantic struc- tures for textual entailment recognition. In Human Language Technologies: The 2010 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, HLT '10, pages 1020-1028, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Estimating linear models for compositional distributional semantics",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Massimo Zanzotto",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Korkontzelos",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Fallucchi",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Massimo Zanzotto, Ioannis Korkontzelos, Francesca Fallucchi, and Suresh Manandhar. 2010. Estimating linear models for compositional distribu- tional semantics. In Proceedings of the 23rd Inter- national Conference on Computational Linguistics (COLING), August,.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Sample Syntactic Tree",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Some Chunk-based Syntactic Sub-Trees of the tree inFigure 1",
"num": null,
"uris": null
},
"TABREF2": {
"text": "Task-based analysis: Accuracy on Recognizing Textual Entailment ( \u2020 is different from both ADD and FullADD with a stat.sig. of p > 0.1.)",
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}