{
"paper_id": "S13-1011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:26.114560Z"
},
"title": "UCAM-CORE: Incorporating structured distributional similarity into STS",
"authors": [
{
"first": "Tamara",
"middle": [],
"last": "Polajnar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Laboratory University of Cambridge",
"location": {
"postCode": "CB3 0FD",
"settlement": "Cambridge",
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Laura",
"middle": [],
"last": "Rimell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Laboratory University of Cambridge",
"location": {
"postCode": "CB3 0FD",
"settlement": "Cambridge",
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Computer Laboratory University of Cambridge",
"location": {
"postCode": "CB3 0FD",
"settlement": "Cambridge",
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes methods that were submitted as part of the *SEM shared task on Semantic Textual Similarity. Multiple kernels provide different views of syntactic structure, from both tree and dependency parses. The kernels are then combined with simple lexical features using Gaussian process regression, which is trained on different subsets of training data for each run. We found that the simplest combination has the highest consistency across the different data sets, while introduction of more training data and models requires training and test data with matching qualities.",
"pdf_parse": {
"paper_id": "S13-1011",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes methods that were submitted as part of the *SEM shared task on Semantic Textual Similarity. Multiple kernels provide different views of syntactic structure, from both tree and dependency parses. The kernels are then combined with simple lexical features using Gaussian process regression, which is trained on different subsets of training data for each run. We found that the simplest combination has the highest consistency across the different data sets, while introduction of more training data and models requires training and test data with matching qualities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Semantic Textual Similarity (STS) shared task consists of several data sets of paired passages of text. The aim is to predict the similarity that human annotators have assigned to these aligned pairs. Text length and grammatical quality vary between the data sets, so our submissions to the task aimed to investigate whether models that incorporate syntactic structure in similarity calculation can be consistently applied to diverse and noisy data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We model the problem as a combination of kernels (Shawe-Taylor and Cristianini, 2004) , each of which calculates similarity based on a different view of the text. State-of-the-art results on text classification have been achieved with kernel-based classification algorithms, such as the support vector machine (SVM) (Joachims, 1998) , and the methods here can be adapted for use in multiple kernel classification, as in Polajnar et al. (2011) . The kernels are combined using Gaussian process regression (GPR) (Rasmussen and Williams, 2006) . It is important to note that the combination strategy described here is only a different way of viewing the regressioncombined mixture of similarity measures approach that is already popular in STS systems, including several that participated in previous SemEval tasks (Croce et al., 2012; B\u00e4r et al., 2012) . Likewise, others, such as Croce et al. (2012) , have used tree and dependency parse information as part of their systems; however, we use a tree kernel approach based on a novel encoding method introduced by Zanzotto et al. (2011) and from there derive two dependencybased methods.",
"cite_spans": [
{
"start": 49,
"end": 85,
"text": "(Shawe-Taylor and Cristianini, 2004)",
"ref_id": "BIBREF13"
},
{
"start": 316,
"end": 332,
"text": "(Joachims, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 420,
"end": 442,
"text": "Polajnar et al. (2011)",
"ref_id": "BIBREF9"
},
{
"start": 510,
"end": 540,
"text": "(Rasmussen and Williams, 2006)",
"ref_id": "BIBREF11"
},
{
"start": 812,
"end": 832,
"text": "(Croce et al., 2012;",
"ref_id": "BIBREF2"
},
{
"start": 833,
"end": 850,
"text": "B\u00e4r et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 879,
"end": 898,
"text": "Croce et al. (2012)",
"ref_id": "BIBREF2"
},
{
"start": 1061,
"end": 1083,
"text": "Zanzotto et al. (2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the rest of this paper we will describe our system, which consists of distributional similarity (Section 2.1), several kernel measures (Section 2.2), and a combination method (Section 2.3). This will be followed by the description of our three submissions (Section 3), and a discussion of the results (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At the core of all the kernel methods is either surface, distributional, or syntactic similarity between sentence constituents. The methods themselves encode sentences into vectors or sets of vectors, while the similarity between any two vectors is calculated using cosine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Target words are the non-stopwords that occur within our training and test data. The two distributional methods we use here both represent target words as vectors that encode word occurrence within a set of contexts. The first method is a variation on BEAGLE (Jones and Mewhort, 2007) , which considers contexts to be words that surround targets. The second method is based on ESA (Gabrilovich and Markovitch, 2007) , which considers contexts to be Wikipedia documents that contain target words.",
"cite_spans": [
{
"start": 259,
"end": 284,
"text": "(Jones and Mewhort, 2007)",
"ref_id": "BIBREF5"
},
{
"start": 381,
"end": 415,
"text": "(Gabrilovich and Markovitch, 2007)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Similarity",
"sec_num": "2.1"
},
{
"text": "To gather the distributional data with both of these approaches we used 316,305 documents from the September 2012 snapshot of Wikipedia. The training corpus for BEAGLE is generated by pooling the top 20 documents retrieved by querying the Wikipedia snapshot index for each target word in the training and test data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Similarity",
"sec_num": "2.1"
},
{
"text": "Random indexing (Kaski, 1998) is a technique for dimensionality reduction where pseudo-orthogonal bases are generated by randomly sampling a distribution. BEAGLE is a model where random indexing is used to represent word co-occurrence vectors in a distributional model.",
"cite_spans": [
{
"start": 16,
"end": 29,
"text": "(Kaski, 1998)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BEAGLE",
"sec_num": "2.1.1"
},
{
"text": "Each context word is represented as a Ddimensional vector of normally distributed random values drawn from the Gaussian distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BEAGLE",
"sec_num": "2.1.1"
},
{
"text": "N (0, \u03c3 2 ), where \u03c3 = 1 \u221a D and D = 4096 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BEAGLE",
"sec_num": "2.1.1"
},
{
"text": "A target word is represented as the sum of the vectors of all the context words that occur within a certain context window around the target word. In BEAGLE this window is considered to be the sentence in which the target word occurs; however, to avoid segmenting the entire corpus, we assume the window to include 5 words to either side of the target. This method has the advantage of keeping the dimensionality of the context space constant even if more context words are added, but we limit the context words to the top 10,000 most frequent nonstopwords in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BEAGLE",
"sec_num": "2.1.1"
},
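The following is a minimal sketch, not the authors' implementation, of the BEAGLE-style context-vector accumulation described above: each context word gets a fixed random index vector drawn as in Equation (1), and each target word accumulates the sum of the index vectors of context words within a 5-word window on either side. The function names, the random seed, and the plain-Python data structures are illustrative assumptions.

```python
import numpy as np

D = 4096                       # dimensionality of the random index vectors
rng = np.random.default_rng(0)

def random_index_vector():
    # N(0, sigma^2) with sigma = 1/sqrt(D), as in Equation (1)
    return rng.normal(0.0, 1.0 / np.sqrt(D), size=D)

def beagle_vectors(corpus_sentences, context_words, window=5):
    """Accumulate a context vector for every word in the corpus.

    corpus_sentences: list of tokenised sentences (lists of lowercase tokens)
    context_words:    set of the top-N most frequent non-stopwords
    (The paper restricts targets to non-stopwords from the task data;
    that filter is omitted here for brevity.)
    """
    context_index = {w: random_index_vector() for w in context_words}
    target_vectors = {}
    for sentence in corpus_sentences:
        for i, target in enumerate(sentence):
            vec = target_vectors.setdefault(target, np.zeros(D))
            # sum the random vectors of context words within +/- `window` tokens
            for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
                if j != i and sentence[j] in context_index:
                    vec += context_index[sentence[j]]
    return target_vectors
```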
{
"text": "ESA represents a target word as a weighted ranked list of the top N documents that contain the word, retrieved from a high quality collection. We used the BM25F (Robertson et al., 2004) weighting function and the top N = 700 documents. These parameters were chosen by testing on the WordSim353 dataset. 1 The list of retrieved documents can be represented as a very sparse vector whose dimensions match the number of documents in the collection, or in a more computationally efficient manner as a hash map linking document identifiers to the retrieval weights. Similarity between lists was calculated using the cosine measure augmented to work on the hash map data type.",
"cite_spans": [
{
"start": 161,
"end": 185,
"text": "(Robertson et al., 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ESA",
"sec_num": "2.1.2"
},
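As an illustration of the hash-map representation and the cosine adapted to it, here is a small sketch. The document weights are assumed to come from a BM25F-weighted retrieval step that is not shown, and the function name is hypothetical.

```python
from math import sqrt

def esa_cosine(doc_weights_a, doc_weights_b):
    """Cosine between two sparse ESA vectors stored as {doc_id: weight} maps."""
    # iterate over the smaller map for efficiency
    if len(doc_weights_a) > len(doc_weights_b):
        doc_weights_a, doc_weights_b = doc_weights_b, doc_weights_a
    dot = sum(w * doc_weights_b.get(doc_id, 0.0)
              for doc_id, w in doc_weights_a.items())
    norm_a = sqrt(sum(w * w for w in doc_weights_a.values()))
    norm_b = sqrt(sum(w * w for w in doc_weights_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Storing only the top N = 700 retrieved documents keeps each map small, so the pairwise cosine stays cheap even though the nominal vector space has one dimension per Wikipedia article.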
{
"text": "In our experiments we use six basic kernel types, which are described below. Effectively we have eight kernels, because we also use the tree and dependency kernels with and without distributional information. Each kernel is a function which is passed a pair of short texts, which it then encodes into a specific format and compares using a defined similarity function. LK uses the regular cosine similarity function, but LEK, TK, DK, MDK, DGK use the following cosine similarity redefined for sets of vectors. If the texts are represented as sets of vectors X and Y , the set similarity kernel function is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03ba set (X, Y ) = i j cos( x i , y j )",
"eq_num": "(2)"
}
],
"section": "Kernel Measures",
"sec_num": "2.2"
},
{
"text": "and normalisation is accomplished in the standard way for kernels by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03ba set\u2212n (X, Y ) = \u03ba set (X, Y ) (\u03ba set (X, X)\u03ba set (Y, Y ))",
"eq_num": "(3)"
}
],
"section": "Kernel Measures",
"sec_num": "2.2"
},
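A compact sketch of Equations (2) and (3), assuming each text has already been encoded as a list of NumPy vectors; this is an illustrative reading of the set kernel, not the submitted implementation.

```python
import numpy as np

def cos(x, y):
    """Cosine between two dense vectors, with a zero-vector guard."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    return float(x @ y / (nx * ny)) if nx and ny else 0.0

def kappa_set(X, Y):
    # Equation (2): sum of pairwise cosines between the two sets of vectors
    return sum(cos(x, y) for x in X for y in Y)

def kappa_set_n(X, Y):
    # Equation (3): standard kernel normalisation
    denom = np.sqrt(kappa_set(X, X) * kappa_set(Y, Y))
    return kappa_set(X, Y) / denom if denom else 0.0
```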
{
"text": "LK -The lexical kernel calculates the overlap between the tokens that occur in each of the paired texts, where the tokens consist of Porter stemmed (Porter, 1980) non-stopwords. Each text is represented as a frequency vector of tokens that occur within it and the similarity between the pair is calculated using cosine.",
"cite_spans": [
{
"start": 148,
"end": 162,
"text": "(Porter, 1980)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
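A minimal sketch of the LK computation under stated assumptions: whitespace tokenisation stands in for whatever tokeniser was actually used, and stemming is done with NLTK's PorterStemmer.

```python
from collections import Counter
from math import sqrt
from nltk.stem import PorterStemmer   # assumes NLTK is available

STEMMER = PorterStemmer()

def lk_vector(text, stopwords):
    """Frequency vector of Porter-stemmed non-stopword tokens."""
    tokens = [STEMMER.stem(t) for t in text.lower().split() if t not in stopwords]
    return Counter(tokens)

def lk_similarity(text_a, text_b, stopwords=frozenset()):
    """Cosine between the stemmed token-frequency vectors of the two texts."""
    a, b = lk_vector(text_a, stopwords), lk_vector(text_b, stopwords)
    dot = sum(c * b.get(tok, 0) for tok, c in a.items())
    norm = sqrt(sum(c * c for c in a.values())) * sqrt(sum(c * c for c in b.values()))
    return dot / norm if norm else 0.0
```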
{
"text": "LEK -The lexical ESA kernel represents each example in the pair as the set of words that do not occur in the intersection of the two texts. The similarity is calculated as in Equation (3) with X and Y being the ESA vectors of each word from the first and second text representations, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
{
"text": "TK -The tree kernel representation is based on the definition by Zanzotto et al. (2011) . Briefly, each piece of text is parsed 2 ; the non-terminal nodes of the parse tree, stopwords, and out-ofdictionary terms are all assigned a new random vector (Equation 1); while the leaves that occurred in the BEAGLE training corpus are assigned their learned distributional vectors (Section 2.1.1).",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "Zanzotto et al. (2011)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
{
"text": "Each subtree of a tree is encoded recursively as a vector, where the distributional vectors representing each node are combined using the circular convolution operator (Plate, 1994; Jones and Mewhort, 2007) . The whole tree is represented as a set of vectors, one for each subtree.",
"cite_spans": [
{
"start": 168,
"end": 181,
"text": "(Plate, 1994;",
"ref_id": "BIBREF8"
},
{
"start": 182,
"end": 206,
"text": "Jones and Mewhort, 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
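Circular convolution, the binding operation used for the subtree and dependency encodings, can be computed efficiently through the FFT. The sketch below is an assumption-laden illustration: the exact traversal order and argument handling in Zanzotto et al. (2011) may differ in detail.

```python
import numpy as np

def circular_convolution(a, b):
    """Circular (wrapped) convolution of two equal-length vectors via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def encode_subtree(node_vector, child_encodings):
    """Recursively bind a node's vector with the encodings of its children.

    Mirrors the recursive subtree encoding described above; each subtree
    yields one vector, and the whole tree is the set of these vectors.
    """
    encoding = node_vector
    for child in child_encodings:
        encoding = circular_convolution(encoding, child)
    return encoding
```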
{
"text": "DK -The dependency kernel representation encodes each dependency pair as a separate vector, discounting the labels. The non-stopword terminals are represented as their distributional vectors, while the stopwords and out-of-dictionary terms are given a unique random vector. The vector for the dependency pair is obtained via a circular convolution of the individual word vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
{
"text": "MDK -The multiple dependency kernel is constructed like the dependency kernel, but similarity is calculated separately between all the the pairs that share the same dependency label. The combined similarity for all dependency labels in the parse is then calculated using least squares linear regression. While at the later stage we use GPR to combine all of the different kernels, for MDK we found that linear regression provided better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
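As an illustration of how the per-label similarities might be combined by least squares, here is a sketch using NumPy's lstsq. The feature layout (one column per dependency label, plus a bias term) is an assumption, since the paper does not give implementation details.

```python
import numpy as np

def fit_label_weights(label_sims, gold_scores):
    """Fit least-squares weights over per-dependency-label similarity scores.

    label_sims:  (n_pairs, n_labels) array, one similarity per dependency label
    gold_scores: (n_pairs,) array of gold similarity judgements
    """
    X = np.column_stack([label_sims, np.ones(len(label_sims))])  # add a bias term
    weights, *_ = np.linalg.lstsq(X, gold_scores, rcond=None)
    return weights

def mdk_score(label_sims_row, weights):
    """Combined MDK similarity for one sentence pair."""
    return float(np.append(label_sims_row, 1.0) @ weights)
```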
{
"text": "DGK -The depgram kernel represents each dependency pair as an ESA vector obtained by searching the ESA collection for the two words in the dependency pair joined by the AND operator. The DGK representation only contains the dependencies that occur in one similarity text or the other, but not in both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Measures",
"sec_num": "2.2"
},
{
"text": "Each of the kernel measures above is used to calculate a similarity score between a pair of texts. The different similarity scores are then combined using 2 Because many of the datasets contained incomplete or ungrammatical sentences, we had to approximate some parses. The parsing was done using the Stanford parser (Klein and Manning, 2003) , which failed on some overly long sentences, which we therefore segmented at conjunctions or commas. Since our methods only compared subtrees of parses, we simply took the union of all the partial parses for a given sentence.",
"cite_spans": [
{
"start": 317,
"end": 342,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regression",
"sec_num": "2.3"
},
{
"text": "Gaussian process regression (GPR) (Rasmussen and Williams, 2006) . GPR is a probabilistic regression method where the weights are modelled as Gaussian random variables. GPR is defined by a covariance function, which is akin to the kernel function in the support vector machine. We used the squared exponential isotropic covariance function (also known as the radial basis function):",
"cite_spans": [
{
"start": 34,
"end": 64,
"text": "(Rasmussen and Williams, 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Regression",
"sec_num": "2.3"
},
{
"text": "cov(x i , x j ) = p 2 1 e (x i \u2212x j ) T \u2022(p 2 * I) \u22121 \u2022(x i \u2212x j ) 2 + p 2 3 \u03b4 ij",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regression",
"sec_num": "2.3"
},
{
"text": "with parameters p 1 = 1, p 2 = 1, and p 3 = 0.01. We found that training for parameters increased overfitting and produced worse results in validation experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Regression",
"sec_num": "2.3"
},
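A sketch of the squared exponential covariance with the fixed hyperparameters given above (p_1 = p_2 = 1, p_3 = 0.01); here x_i and x_j would be the vectors of per-kernel similarity scores for two sentence pairs, and `same_point` stands in for the Kronecker delta. This mirrors the standard GPR formulation rather than reproducing the authors' code.

```python
import numpy as np

P1, P2, P3 = 1.0, 1.0, 0.01   # fixed hyperparameters, as in the text

def sq_exp_cov(x_i, x_j, same_point=False):
    """Squared exponential (isotropic RBF) covariance with additive noise.

    `same_point` should be True only when x_i and x_j are the same
    training example (i == j), i.e. when the delta term applies.
    """
    diff = np.asarray(x_i, dtype=float) - np.asarray(x_j, dtype=float)
    noise = P3**2 if same_point else 0.0
    return P1**2 * np.exp(-0.5 * (diff @ diff) / P2**2) + noise
```

Keeping the hyperparameters fixed, as the authors do, avoids fitting the covariance to the training data and so reduces the risk of overfitting they report.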
{
"text": "We submitted three runs. This is not sufficient for a full evaluation of the new methods we proposed here, but it gives us an inkling of general trends. To choose the composition of the submissions, we used STS 2012 training data for training, and STS 2012 test data for validation (Agirre et al., 2012) . The final submitted runs also used some of the STS 2012 test data for training. Basic -With this run we were examining if a simple introduction of syntactic structure can improve over the baseline performance. We trained a GPR combination of the linear and tree kernels (LK-TK) on the MSRpar training data. In validation experiments we found that this data set in general gave the most consistent performance for regression training.",
"cite_spans": [
{
"start": 282,
"end": 303,
"text": "(Agirre et al., 2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Runs",
"sec_num": "3"
},
{
"text": "Custom -Here we tried to approximate the best training setup for each type of data. We only had training data for OnWN and for this dataset we were able to improve over the LK-TK setup; however, the settings for the rest of the data sets were guesses based on observations from the validation experiments and overall performed poorly. OnWN was trained on MSRpar train with LK and DK. The headlines model was trained on MSRpar train and Europarl test, with LK-LEK-TK-DK-TKND-DKND-MDK (trained on Europarl). 3 All -As in the LK-TK experiment, we used the same model on all of the data sets. It was trained on all of the training data except MSRvid, using all eight kernel types defined above. In summary we used the LK-LEK-TK-TKND-DK-DKND-MDK-DGK kernel combination. MDK was trained on the 2012 training portion of MSRpar.",
"cite_spans": [
{
"start": 506,
"end": 507,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Runs",
"sec_num": "3"
},
{
"text": "From the shared task results in Table 1 , we can see that Basic is our highest ranked run. It has also achieved the best performance on all data sets. The LK on its own improves slightly on the task baseline by removing stop words and using stemming, while the introduction of TK contributes syntactic and distributional information. With the Custom run, we were trying to manually estimate which training data would best reflect properties of particular test data, and to customise the kernel combination through validation experiments. The only data set for which this led to an improvement is OnWN, indicating that customised settings can be beneficial, but that a more scientific method for matching of training and test data properties is required. In the All run, we were examining the effects that maximising the amount of training data and the number of kernel measures has on the output predictions. The results show that swamping the regression with models and training data leads to overly normalised output and a decrease in performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 39,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "While the evaluation measure, Pearson correlation, does not take into account the shape of the output distribution, Figure 1 shows that this information may be a useful indicator of model quality and behaviour. In particular, the role of the regression component in our approach is to learn a transformation from the output distributions of the models to the distribution of the training data gold standard. This makes it sensitive to the choice of training data, which ideally would have similar characteristics to the individual kernels, as well as a similar gold standard distribution to the test data. We can see in Figure 1 that the training data and choice of kernels influence the output distribution.",
"cite_spans": [],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 620,
"end": 626,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Analysis of the minimum, first quartile, median, third quartile, and maximum statistics of the distributions in Figure 1 demonstrates that, while it is difficult to visually evaluate the similarities of the different distributions, the smallest squared error is between the gold standard and the Custom run. This suggests that properties other than the rank order may also be good indicators in training and testing of STS methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "http://www.cs.technion.ac.il/\u02dcgabr/resources/ data/wordsim353/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "TKND and DKND are the versions of the tree and dependency kernels where no distributional vectors were used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Tamara Polajnar is supported by the ERC Starting Grant, DisCoTex, awarded to Stephen Clark, and Laura Rimell and Douwe Kiela by EPSRC grant EP/I037512/1: A Unified Model of Compositional and Distributional Semantics: Theory and Applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2012 task 6: A pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "7--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computa- tional Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Pro- ceedings of the Sixth International Workshop on Se- mantic Evaluation (SemEval 2012), pages 385-393, Montr\u00e9al, Canada, 7-8 June. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "UKP: Computing semantic textual similarity by combining multiple content similarity measures",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "B\u00e4r",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Torsten",
"middle": [],
"last": "Zesch",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th International Workshop on Semantic Evaluation, held in conjunction with the 1st Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "435--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: Computing semantic textual sim- ilarity by combining multiple content similarity mea- sures. In Proceedings of the 6th International Work- shop on Semantic Evaluation, held in conjunction with the 1st Joint Conference on Lexical and Computa- tional Semantics, pages 435-440, Montr\u00e9al, Canada, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "UNITOR: Combining semantic text similarity functions through sv regression",
"authors": [
{
"first": "Danilo",
"middle": [],
"last": "Croce",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Annesi",
"suffix": ""
},
{
"first": "Valerio",
"middle": [],
"last": "Storch",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th International Workshop on Semantic Evaluation, held in conjunction with the 1st Joint Conference on Lexical and Computational Semantics",
"volume": "",
"issue": "",
"pages": "7--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danilo Croce, Paolo Annesi, Valerio Storch, and Roberto Basili. 2012. UNITOR: Combining semantic text similarity functions through sv regression. In Pro- ceedings of the 6th International Workshop on Seman- tic Evaluation, held in conjunction with the 1st Joint Conference on Lexical and Computational Semantics, pages 597-602, Montr\u00e9al, Canada, 7-8 June. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Computing semantic relatedness using wikipedia-based explicit semantic analysis",
"authors": [
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Shaul",
"middle": [],
"last": "Markovitch",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 20th international joint conference on Artifical intelligence, IJCAI'07",
"volume": "",
"issue": "",
"pages": "1606--1611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Evgeniy Gabrilovich and Shaul Markovitch. 2007. Com- puting semantic relatedness using wikipedia-based explicit semantic analysis. In Proceedings of the 20th international joint conference on Artifical intel- ligence, IJCAI'07, pages 1606-1611, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Text categorization with suport vector machines: Learning with many relevant features",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 10th European Conference on Machine Learning, ECML '98",
"volume": "",
"issue": "",
"pages": "137--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 1998. Text categorization with su- port vector machines: Learning with many relevant features. In Proceedings of the 10th European Con- ference on Machine Learning, ECML '98, pages 137- 142, London, UK, UK. Springer-Verlag.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Representing word meaning and order information in a composite holographic lexicon. Psychological Review",
"authors": [
{
"first": "Michael",
"middle": [
"N"
],
"last": "Jones",
"suffix": ""
},
{
"first": "Douglas",
"middle": [
"J K"
],
"last": "Mewhort",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "114",
"issue": "",
"pages": "1--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael N. Jones and Douglas J. K. Mewhort. 2007. Representing word meaning and order information in a composite holographic lexicon. Psychological Re- view, 114:1-37.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dimensionality reduction by random mapping: fast similarity computation for clustering",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kaski",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 1998 IEEE International Joint Conference on Neural Networks",
"volume": "1",
"issue": "",
"pages": "413--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kaski. 1998. Dimensionality reduction by random mapping: fast similarity computation for clustering. In Proceedings of the 1998 IEEE International Joint Conference on Neural Networks, volume 1, pages 413-418 vol.1, May.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st An- nual Meeting on Association for Computational Lin- guistics -Volume 1, ACL '03, pages 423-430, Strouds- burg, PA, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distributed Representations and Nested Compositional Structure",
"authors": [
{
"first": "T",
"middle": [
"A"
],
"last": "Plate",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. A. Plate. 1994. Distributed Representations and Nested Compositional Structure. Ph.D. thesis, Univer- sity of Toronto.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Protein interaction sentence detection using multiple semantic kernels",
"authors": [
{
"first": "T",
"middle": [],
"last": "Polajnar",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Damoulas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Girolami",
"suffix": ""
}
],
"year": 2011,
"venue": "J Biomed Semantics",
"volume": "2",
"issue": "1",
"pages": "1--1",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T Polajnar, T Damoulas, and M Girolami. 2011. Protein interaction sentence detection using multiple semantic kernels. J Biomed Semantics, 2(1):1-1.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An algorithm for suffix stripping",
"authors": [
{
"first": "M",
"middle": [
"F"
],
"last": "Porter",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "14",
"issue": "",
"pages": "130--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. F. Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130-137, July.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Gaussian Processes for Machine Learning",
"authors": [
{
"first": "C",
"middle": [
"E"
],
"last": "Rasmussen",
"suffix": ""
},
{
"first": "C",
"middle": [
"K I"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. E. Rasmussen and C. K. I. Williams. 2006. Gaussian Processes for Machine Learning. MIT Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Simple BM25 extension to multiple weighted fields",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the thirteenth ACM international conference on Information and knowledge management, CIKM '04",
"volume": "",
"issue": "",
"pages": "42--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Robertson, Hugo Zaragoza, and Michael Taylor. 2004. Simple BM25 extension to multiple weighted fields. In Proceedings of the thirteenth ACM interna- tional conference on Information and knowledge man- agement, CIKM '04, pages 42-49, New York, NY, USA. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Kernel Methods for Pattern Analysis",
"authors": [
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributed structures and distributional meaning",
"authors": [
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Dell'arciprete",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Distributional Semantics and Compositionality, DiSCo '11",
"volume": "",
"issue": "",
"pages": "10--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Massimo Zanzotto and Lorenzo Dell'Arciprete. 2011. Distributed structures and distributional mean- ing. In Proceedings of the Workshop on Distributional Semantics and Compositionality, DiSCo '11, pages 10-15, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "FNWN was trained on MSRpar train and OnWN test with LK-LEK-DGK-TK-DK-TKND-DKND. Finally, the SMT model Score distributions of different runs on the OnWN dataset was trained on MSRpar train and Europarl test with LK-LEK-TK-DK-TKND-DKND-MDK (trained on MSRpar).",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null
}
}
}
}