{
"paper_id": "S17-2014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:28:45.031158Z"
},
"title": "DT Team at SemEval-2017 Task 1: Semantic Similarity Using Alignments, Sentence-Level Embeddings and Gaussian Mixture Model Output",
"authors": [
{
"first": "Nabin",
"middle": [],
"last": "Maharjan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Memphis Memphis",
"location": {
"region": "TN",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Rajendra",
"middle": [],
"last": "Banjade",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Memphis Memphis",
"location": {
"region": "TN",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Dipesh",
"middle": [],
"last": "Gautam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Memphis Memphis",
"location": {
"region": "TN",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Lasang",
"middle": [
"J"
],
"last": "Tamang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Memphis Memphis",
"location": {
"region": "TN",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Vasile",
"middle": [],
"last": "Rus",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Memphis Memphis",
"location": {
"region": "TN",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe our system (DT Team) submitted at SemEval-2017 Task 1, Semantic Textual Similarity (STS) challenge for English (Track 5). We developed three different models with various features including similarity scores calculated using word and chunk alignments, word/sentence embeddings, and Gaussian Mixture Model (GMM). The correlation between our system's output and the human judgments were up to 0.8536, which is more than 10% above baseline, and almost as good as the best performing system which was at 0.8547 correlation (the difference is just about 0.1%). Also, our system produced leading results when evaluated with a separate STS benchmark dataset. The word alignment and sentence embeddings based features were found to be very effective.",
"pdf_parse": {
"paper_id": "S17-2014",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe our system (DT Team) submitted at SemEval-2017 Task 1, Semantic Textual Similarity (STS) challenge for English (Track 5). We developed three different models with various features including similarity scores calculated using word and chunk alignments, word/sentence embeddings, and Gaussian Mixture Model (GMM). The correlation between our system's output and the human judgments were up to 0.8536, which is more than 10% above baseline, and almost as good as the best performing system which was at 0.8547 correlation (the difference is just about 0.1%). Also, our system produced leading results when evaluated with a separate STS benchmark dataset. The word alignment and sentence embeddings based features were found to be very effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Measuring the Semantic Textual Similarity (STS) is to quantify the semantic equivalence between given pair of texts (Banjade et al., 2015; Agirre et al., 2015) . For example, a similarity score of 0 means that the texts are not similar at all while a score of 5 means that they have same meaning. In this paper, we describe our system DT Team and the three different runs that we submitted to this year's SemEval shared task on STS English track (Track 5; Agirre et al. (2017) ). We applied Support Vector Regression (SVR), Linear Regression (LR) and Gradient Boosting Regressor (GBR) with various features (see \u00a7 3.4) in order to predict the semantic similarity of texts in a given pair. We also report the results of our models when evaluated with a separate STS benchmark dataset created recently by the STS task organizers.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Banjade et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 139,
"end": 159,
"text": "Agirre et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 456,
"end": 476,
"text": "Agirre et al. (2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The preprocessing step involved tokenization, lemmatization, POS-tagging, name-entity recognition and normalization (e.g. pc, pct, % are normalized to pc). The preprocessing steps were same as our DTSim system .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2"
},
{
"text": "We generated various features including similarity scores generated using different methods. We describe next the word-to-word and sentence-tosentence similarity methods used in our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Generation",
"sec_num": "3"
},
{
"text": "We used the word2vec (Mikolov et al., 2013) 1 vectorial word representation, PPDB database (Pavlick et al., 2015) 2 , and WordNet (Miller, 1995) to compute similarity between words. Please see DTSim system description for additional details.",
"cite_spans": [
{
"start": 21,
"end": 43,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 91,
"end": 121,
"text": "(Pavlick et al., 2015) 2 , and",
"ref_id": null
},
{
"start": 122,
"end": 144,
"text": "WordNet (Miller, 1995)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-to-Word Similarity",
"sec_num": "3.1"
},
{
"text": "We lemmatized all content words and aligned them optimally using the Hungarian algorithm (Kuhn, 1955) implemented in the SEMILAR Toolkit (Rus et al., 2013) . The process is the same as finding the maximum weight matching in a weighted bi-partite graph. The nodes are words and the weights are the similarity scores between the word pairs computed as described in \u00a7 3.1. In order to avoid noisy alignments, we reset the similarity score below 0.5 (empirically set threshold) to 0. The similarity score was computed as the sum of the scores for all aligned word-pairs divided by the total length of the given sentence pair.",
"cite_spans": [
{
"start": 89,
"end": 101,
"text": "(Kuhn, 1955)",
"ref_id": "BIBREF8"
},
{
"start": 137,
"end": 155,
"text": "(Rus et al., 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Method",
"sec_num": "3.2.1"
},
{
"text": "In some cases, we also applied a penalty for unaligned words which we describe in \u00a7 3.3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Method",
"sec_num": "3.2.1"
},
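{
"text": "The following is a minimal sketch of the word-alignment similarity described above. It is not the SEMILAR implementation used by the authors; the lemmatized token lists and the word-vector lookup vec are assumed to come from the preprocessing and \u00a7 3.1 steps:

import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def word_alignment_score(words_a, words_b, vec, threshold=0.5):
    # Word-to-word similarity matrix between the two lemmatized content-word lists.
    sim = np.array([[cosine(vec[a], vec[b]) for b in words_b] for a in words_a])
    sim[sim < threshold] = 0.0                  # discard noisy alignments
    rows, cols = linear_sum_assignment(-sim)    # maximum-weight bipartite matching (Hungarian)
    total = sim[rows, cols].sum()
    # Sum of aligned-pair scores over the combined sentence length, as described above.
    return total / (len(words_a) + len(words_b))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Alignment Method",
"sec_num": "3.2.1"
},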
{
"text": "We aligned chunks across sentence-pairs and labeled the alignments, such as Equivalent or Specific as described in . Then, we computed the interpretable semantic score as in the DTSim system .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpretable Similarity Method",
"sec_num": "3.2.2"
},
{
"text": "Similar to the GMM model we have proposed for assessing open-ended student answers (Maharjan et al., 2017), we represented the sentence pair as a feature vector consisting of feature sets {7, 8, 9, 10, 14} from \u00a7 3.4 and modeled the semantic equivalence levels [0 5] as multivariate Gaussian densities of feature vectors. We then used GMM to compute membership weights to each of these semantic levels for a given sentence pair. Finally, the GMM score is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Mixture Model Method",
"sec_num": "3.2.3"
},
{
"text": "mem wt i = w i N (x|\u00b5 i , i ), i \u2208 [0, 5] gmm score = 5 i=0 mem wt i * i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Mixture Model Method",
"sec_num": "3.2.3"
},
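{
"text": "A hedged sketch of how such a score can be computed, assuming one Gaussian per level whose mean, covariance, and weight have been fitted on training pairs (illustrative only, not the authors' exact code):

import numpy as np
from scipy.stats import multivariate_normal

def gmm_score(x, means, covs, weights):
    # means[i], covs[i], weights[i] parameterize the Gaussian fitted to level i (i = 0..5).
    dens = np.array([w * multivariate_normal.pdf(x, mean=m, cov=c)
                     for m, c, w in zip(means, covs, weights)])
    mem_wt = dens / dens.sum()                            # membership weight of each level
    return float(np.dot(mem_wt, np.arange(len(means))))   # weighted sum of levels = GMM score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Mixture Model Method",
"sec_num": "3.2.3"
},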
{
"text": "We used both Deep Structured Semantic Model (DSSM; Huang et al. (2013) ) and DSSM with convolutional-pooling (CDSSM; Shen et al. (2014); Gao et al. (2014)) in the Sent2vec tool 3 to generate the continuous vector representations for given texts. We then computed the similarity score as the cosine similarity of their representations.",
"cite_spans": [
{
"start": 51,
"end": 70,
"text": "Huang et al. (2013)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Sentence Vector Method",
"sec_num": "3.2.4"
},
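{
"text": "As a minimal sketch, the score is simply the cosine between two pre-computed sentence vectors; the vectors themselves come from the external Sent2vec DSSM/CDSSM models:

import numpy as np

def cosine_similarity(v_a, v_b):
    # v_a, v_b: sentence vectors produced by the DSSM or CDSSM model.
    return float(np.dot(v_a, v_b) / (np.linalg.norm(v_a) * np.linalg.norm(v_b) + 1e-12))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compositional Sentence Vector Method",
"sec_num": "3.2.4"
},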
{
"text": "We first obtained the continuous vector representations V A and V B for sentence pair A and B using the Sent2Vec DSSM or CDSSM models or skip-thought model 4 . Inspired by Tai et al. (2015) , we then represented the sentence pairs by the features formed by concatenating element-wise dot product",
"cite_spans": [
{
"start": 172,
"end": 189,
"text": "Tai et al. (2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tuned Sentence Representation Based Method",
"sec_num": "3.2.5"
},
{
"text": "We used these features in our logistic regression model which produces the outputp \u03b8 . Then, we predicted the similarity between the texts in the target pair as = r Tp \u03b8 , where r T = {1, 2, 3, 4, 5} is the ordinal scale of similarity. To enforce that\u0177 is close to the gold rating y, we encoded y as a sparse target distribution p such that y = r T p as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuned Sentence Representation Based Method",
"sec_num": "3.2.5"
},
{
"text": "p i = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 y \u2212 y , i = y + 1 y \u2212 y + 1, i = y 0, otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuned Sentence Representation Based Method",
"sec_num": "3.2.5"
},
{
"text": "where 1 \u2264 i \u2264 5 and, y is f loor operation. For instance, given y = 3.2, it would give sparse p = [0 0 0.8 0.2 0]. For building logistic model, we used training data set from our previous DTSim system and used image test data from STS-2014 and STS-2015 as validation data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuned Sentence Representation Based Method",
"sec_num": "3.2.5"
},
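{
"text": "A sketch of the pair features, the sparse target encoding, and the expected-rating prediction described in this section; the logistic (softmax) regressor itself is assumed to be trained separately, and the encoding assumes ratings in [1, 5]:

import numpy as np

def pair_features(v_a, v_b):
    # Concatenate the element-wise product and the absolute difference of the sentence vectors.
    return np.concatenate([v_a * v_b, np.abs(v_a - v_b)])

def encode_target(y):
    # Sparse distribution p with r^T p = y, r = [1..5]; e.g. y = 3.2 -> [0, 0, 0.8, 0.2, 0].
    p = np.zeros(5)
    f = int(np.floor(y))
    p[f - 1] = f - y + 1
    if f < 5:
        p[f] = y - f
    return p

def predict_similarity(p_hat):
    # Expected rating under the predicted distribution p_hat.
    return float(np.dot(np.arange(1, 6), p_hat))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tuned Sentence Representation Based Method",
"sec_num": "3.2.5"
},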
{
"text": "We generated a vocabulary V of unique words from the given sentence pair (A, B). Then, we generated sentence vectors as in the followings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Vector Method",
"sec_num": "3.2.6"
},
{
"text": "V A = (w 1a , w 2a , ..w na ) and V B = (w 1b , w 2b , ...w nb )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Vector Method",
"sec_num": "3.2.6"
},
{
"text": ", where n = |V | and w ia = 1, if word i at position i in V has a synonym in sentence A. Otherwise, w ia is the maximum similarity between word i and any of the words in A, computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Vector Method",
"sec_num": "3.2.6"
},
{
"text": "w ia = max j=|A| j=1 sim(w j , word i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Vector Method",
"sec_num": "3.2.6"
},
{
"text": "The sim(w j , word i ) is cosine similarity score computed using the word2vec model. Similarly, we compute V B from sentence B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Vector Method",
"sec_num": "3.2.6"
},
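{
"text": "A rough sketch of building one such vector; the similarity helper w2v_sim and the synonym lookup are assumed to come from the word2vec model and WordNet, respectively:

def similarity_vector(vocab, sentence_words, w2v_sim, synonyms):
    vec = []
    for word in vocab:
        present = word in sentence_words or any(s in sentence_words for s in synonyms.get(word, ()))
        if present:
            vec.append(1.0)   # the word itself or one of its synonyms appears in the sentence
        else:
            vec.append(max(w2v_sim(word, w) for w in sentence_words))   # best soft match
    return vec",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Vector Method",
"sec_num": "3.2.6"
},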
{
"text": "We combined word2vec word representations to obtain sentence level representations through vector algebra. We weighted the word vectors corresponding to content words. We generated resultant vector for A as R A = i=|A| i=1 \u03b8 i * word i , where the weight \u03b8 i for word i was chosen as word i \u2208 {noun = 1.0, verb = 1.0, adj = 0.2, adv = 0.4, others (e.g. number) = 1.0}. Similarly, we computed resultant vector R B for text B. The weights were set empirically from training data. We then computed a similarity score as the cosine of R A and R B . Finally, we penalized the similarity score by the unalignment score (see \u00a7 3.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Resultant Vector Method",
"sec_num": "3.2.7"
},
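{
"text": "A sketch of this method under the POS weights above; POS tags and word vectors are assumed to come from the preprocessing step, and the unalignment score is defined in \u00a7 3.3.2:

import numpy as np

POS_WEIGHT = {'noun': 1.0, 'verb': 1.0, 'adj': 0.2, 'adv': 0.4}   # other types default to 1.0

def resultant_vector(words, pos_tags, vec):
    # Weighted sum of the word vectors of a sentence.
    return sum(POS_WEIGHT.get(tag, 1.0) * vec[w] for w, tag in zip(words, pos_tags))

def weighted_resultant_similarity(r_a, r_b, ua_score):
    cos = float(np.dot(r_a, r_b) / (np.linalg.norm(r_a) * np.linalg.norm(r_b) + 1e-12))
    return (1 - 0.4 * ua_score) * cos   # penalized by the unalignment score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Resultant Vector Method",
"sec_num": "3.2.7"
},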
{
"text": "We applied the following two penalization strategies to adjust the sentence-to-sentence similarity score. It should be noted that only certain similarity scores used as features of our regression models were penalized but we did not penalize the scores obtained from our final models. Unless specified, similarity scores were not penalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Penalty",
"sec_num": "3.3"
},
{
"text": "Crossing measures the spread of the distance between the aligned words in a given sentence pair. In most cases, sentence pairs with higher degree of similarity have aligned words in same position or its neighborhood. We define crossing crs as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crossing Score",
"sec_num": "3.3.1"
},
{
"text": "crs = w i \u2208A, w j \u2208B, aligned(w i ,w j ) |i \u2212 j| max(|A|, |B|) * (#alignments)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crossing Score",
"sec_num": "3.3.1"
},
{
"text": "where aligned(w i , w j ) refers to word w i at index i in A and w j at index j in B are aligned. Then, the similarity score was reset to 0.3 if crs > 0.7. The threshold 0.7 was empirically set based on evaluations using the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crossing Score",
"sec_num": "3.3.1"
},
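{
"text": "A small sketch of the crossing score and the reset rule; alignments is assumed to be a list of (i, j) index pairs of aligned words:

def crossing(alignments, len_a, len_b):
    if not alignments:
        return 0.0
    spread = sum(abs(i - j) for i, j in alignments)
    return spread / (max(len_a, len_b) * len(alignments))

def apply_crossing_penalty(score, crs, threshold=0.7):
    # Reset the similarity of pairs with widely scattered alignments, as described above.
    return 0.3 if crs > threshold else score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Crossing Score",
"sec_num": "3.3.1"
},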
{
"text": "We define unalignment score similar to alignment score (see \u00a7 3.2.1) but this time the score is calculated using unaligned words in both A and B as: unalign score = |A|+|B|\u22122 * (#alignments)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unalignment Score",
"sec_num": "3.3.2"
},
{
"text": ". Then, the similarity score was penalized as in the followings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|A|+|B|",
"sec_num": null
},
{
"text": "score * = (1 \u2212 0.4 * unalign score) * score",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|A|+|B|",
"sec_num": null
},
{
"text": "where the weight 0.4 was empirically chosen.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|A|+|B|",
"sec_num": null
},
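{
"text": "A corresponding sketch of the unalignment score and the penalty it induces:

def unalign_score(len_a, len_b, n_alignments):
    # Fraction of the words in the pair that remain unaligned.
    return (len_a + len_b - 2 * n_alignments) / (len_a + len_b)

def penalize_by_unalignment(score, ua):
    return (1 - 0.4 * ua) * score   # the weight 0.4 was chosen empirically by the authors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unalignment Score",
"sec_num": "3.3.2"
},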
{
"text": "We generated and experimented with many features. We describe here only those features used directly or indirectly by our three submitted runs which we describe in \u00a7 4. We used word2vec representation and WordNet antonym and synonym for word similarity unless anything else is mentioned specifically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "1. {w2v wa, ppdb wa, ppdb wa pen ua}: similarity scores generated using word alignment based methods (pen ua for scores penalized by unalignment score).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "2. {gmm}: output of Gaussian Mixture Model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "3. {dssm, cdssm}: similarity scores using DSSM and CDSSM models (see \u00a7 3.2.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "4. {dssm lr, skipthought lr}: similarity scores using logistic model with sentence representations from DSSM and skip-thought models (see \u00a7 3.2.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "5. {sim vec}: score using similarity vector method (see \u00a7 3.2.6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "6. {res vec}: score using the weighted resultant vector method (see \u00a7 3.2.7).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "7. {interpretable}: score calculated using interpretable similarity method ( \u00a7 3.2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "8. {noun wa, verb wa, adj wa, adv wa}: Noun-Noun, Adjective-Adjective, Adverb-Adverb, and Verb-Verb alignment scores using word2vec for word similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "9. {noun verb mult}: multiplication of Noun-Noun similarity scores and Verb-Verb similarity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "10. {abs dif f t}: absolute difference as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "|Cta\u2212C tb |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "Cta+C tb where C ta and C ta are the counts of tokens of type t \u2208 {all tokens, adjectives, adverbs, nouns, and verbs} in sentence A and B respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": "11. {overlap pen}: unigram overlap between text A and B with synonym check given by: score = 2 * overlap count",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},
{
"text": ". Then penalized by crossing followed by unalignment score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|A|+|B|",
"sec_num": null
},
{
"text": "12. {noali}: number of NOALI relations in aligning chunks between texts relative to the total number of alignments (see \u00a7 3.2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|A|+|B|",
"sec_num": null
},
{
"text": "13. {align, unalign}: fraction of aligned/nonaligned words in the sentence pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|A|+|B|",
"sec_num": null
},
{
"text": "14. {mmr t}: min to max ratio as C t1 C t2 where C t1 and C t2 are the counts of type t \u2208 {all, adjectives, adverbs, nouns, and verbs} for shorter text 1 and longer text 2 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|A|+|B|",
"sec_num": null
},
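{
"text": "The count-based features above (10, 11, and 14) reduce to simple ratios; a small illustrative sketch:

def abs_diff(count_a, count_b):
    # Feature 10: absolute difference of counts, normalized by their sum.
    total = count_a + count_b
    return abs(count_a - count_b) / total if total else 0.0

def overlap_pen(overlap_count, len_a, len_b):
    # Feature 11 before the crossing and unalignment penalties are applied.
    return 2.0 * overlap_count / (len_a + len_b)

def min_max_ratio(count_shorter, count_longer):
    # Feature 14: min-to-max ratio of the counts for the shorter and longer text.
    return count_shorter / count_longer if count_longer else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection",
"sec_num": "3.4"
},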
{
"text": "Training Data. We used data released in previous shared tasks (see Table 1 ) for the model development (see \u00a7 5 for STS benchmarking). Models and Runs. Using the combination of features described in \u00a7 3.4, we built three different models corresponding to the three runs (R1-3) submitted.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Development",
"sec_num": "4"
},
{
"text": "Linear SVM Regression model (SVR; = 0.1, C = 1.0) with a set of 7 features: overlap pen, ppdb wa pen ua, dssm, dssm lr, noali, abs dif f all tkns, mmr all tkns. R2. Linear regression model (LR; default weka settings) with a set of 8 features: dssm, cdssm, gmm, res vec, skipthought lr, sim vec, aligned, noun wa. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R1.",
"sec_num": null
},
{
"text": ". Gradient boosted regression model (GBR; estimators = 1000, max depth = 3) which includes 3 additional features: w2v wa, ppdb wa, overlap to feature set used in Run 2. We used SVR and and LR models in Weka 3.6.8. We used GBR model using sklearn python library. We evaluated our models on training data using 10-fold cross validation. The correlation scores in the training data were 0.797, 0.816 and 0.845 for R1, R2, and R3, respectively. Table 2 presents the correlation (r) of our system outputs with human ratings in the evaluation data (250 sentence pairs from Stanford Natural Language Inference data (Bowman et al., 2015)). The correlation scores of all three runs are 0.83 or above, on par with top performing systems. All of our systems outperform the baseline by a large margin of above 10%. Interestingly, R1 system is at par with the 1 st ranked system differing by a very small margin of 0.009 (<0.2%). Figure 1 presents the graph showing R1 system output against human judgments (gold scores). It shows that our system predicts relatively better for similarity scores between 3 to 5 while the system slightly overshoots the prediction for the gold ratings in the range of 0 to 2. In general, it can be seen that our system works well across all similarity levels.",
"cite_spans": [],
"ref_spans": [
{
"start": 441,
"end": 448,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 917,
"end": 923,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "R3",
"sec_num": null
},
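{
"text": "As an illustration of the R3 setup, a hedged sketch using scikit-learn and scipy (the 11-feature matrices are assumed to be extracted beforehand; this mirrors the stated settings but is not the authors' exact pipeline):

from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor

def train_and_evaluate_r3(X_train, y_train, X_test, y_test):
    # X_*: matrices with the 11 features of Run 3; y_*: gold similarity scores in [0, 5].
    gbr = GradientBoostingRegressor(n_estimators=1000, max_depth=3)
    gbr.fit(X_train, y_train)
    predictions = gbr.predict(X_test)
    r, _ = pearsonr(predictions, y_test)   # Pearson correlation, the evaluation metric of the task
    return gbr, r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "R3",
"sec_num": null
},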
{
"text": "Our 11 features had a correlation of 0.75 or dssm (0.8254), ppdb wa pen ua (0.8273), ppdb wa (0.8139), cdssm (0.8013), dssm lr (0.8135), overlap (0.8048) Table 3 : A set of highly correlated features with gold scores in test data. above when compared with gold scores in test data. In Table 3 , we list only those features having correlations of 0.8 or above. Similarity scores computed using word alignment and compositional sentence vector methods were the best predictive features. STS Benchmark (Agirre et al., 2017) . We also evaluated our models on a benchmark dataset which consists of 1379 pairs and was created by the task organizers. We trained our three runs with the benchmark training data under identical settings. We used benchmark development data only for generating features from \u00a7 3.2.5 (as validation dataset). The correlation scores for R1, R2 and R3 systems were:",
"cite_spans": [
{
"start": 499,
"end": 520,
"text": "(Agirre et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 154,
"end": 161,
"text": "Table 3",
"ref_id": null
},
{
"start": 285,
"end": 292,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In Dev: 0.800, 0.822, 0.830 and In Test: 0.755, 0.787, 0.792 All of our systems outperformed best baseline benchmark system (Dev = 0.77, Test = 0.72). Interestingly, R3 was the best performing while R1 was the least performing among the three. As such, generalization was found to improve with increasing number of features (#features: 7, 8 and 11 for R1, R2 and R3 respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We presented our DT Team system submitted in SemEval-2017 Task 1. We developed three different models using SVM regression, Linear regression and Gradient Boosted regression for predicting textual semantic similarity. Overall, the outputs of our models highly correlate (correlation up to 0.85 in STS 2017 test data and up to 0.792 on benchmark data) with human ratings. Indeed, our methods yielded highly competitive results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://code.google.com/p/word2vec/ 2 http://www.cis.upenn.edu/ ccb/ppdb/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.microsoft.com/enus/download/details.aspx?id=52365 4 https://github.com/ryankiros/skip-thoughts\u0177",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, , Daniel Cer, Mona Diabe, , Inigo Lopez-Gazpioa, and Specia Lucia. 2017. Semeval- 2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Baneab",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardiec",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diabe",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirrea",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guof",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpioa",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Maritxalara",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalceab",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "252--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Baneab, Claire Cardiec, Daniel Cer, Mona Diabe, Aitor Gonzalez-Agirrea, Weiwei Guof, Inigo Lopez-Gazpioa, Montse Maritxalara, Rada Mihalceab, et al. 2015. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th international workshop on semantic evaluation (Se- mEval 2015). pages 252-263.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dtsim at semeval-2016 task 1: Semantic similarity model including multi-level alignment and vector-based compositional semantics",
"authors": [
{
"first": "Rajendra",
"middle": [],
"last": "Banjade",
"suffix": ""
},
{
"first": "Nabin",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Dipesh",
"middle": [],
"last": "Gautam",
"suffix": ""
},
{
"first": "Vasile",
"middle": [],
"last": "Rus",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of SemEval pages",
"volume": "",
"issue": "",
"pages": "640--644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajendra Banjade, Nabin Maharjan, Dipesh Gautam, and Vasile Rus. 2016. Dtsim at semeval-2016 task 1: Semantic similarity model including multi-level alignment and vector-based compositional seman- tics. Proceedings of SemEval pages 640-644.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Nerosim: A system for measuring and interpreting semantic textual similarity",
"authors": [
{
"first": "Rajendra",
"middle": [],
"last": "Banjade",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Nobal",
"suffix": ""
},
{
"first": "Nabin",
"middle": [],
"last": "Niraula",
"suffix": ""
},
{
"first": "Vasile",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Rus",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Stefanescu",
"suffix": ""
},
{
"first": "Dipesh",
"middle": [],
"last": "Lintean",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gautam",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "164--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajendra Banjade, Nobal B Niraula, Nabin Mahar- jan, Vasile Rus, Dan Stefanescu, Mihai Lintean, and Dipesh Gautam. 2015. Nerosim: A system for mea- suring and interpreting semantic textual similarity. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). pages 164- 171.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Samuel R Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.05326"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large anno- tated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326 .",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Modeling interestingness with deep neural networks",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao, Li Deng, Michael Gamon, Xiaodong He, and Patrick Pantel. 2014. Modeling interest- ingness with deep neural networks. US Patent App. 14/304,863.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning deep structured semantic models for web search using clickthrough data",
"authors": [
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Acero",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd ACM international conference on Conference on information & knowledge management",
"volume": "",
"issue": "",
"pages": "2333--2338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Conference on informa- tion & knowledge management. ACM, pages 2333- 2338.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Raquel Urtasun, and Sanja Fidler",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Richard",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Torralba",
"suffix": ""
}
],
"year": 2015,
"venue": "Skip-thought vectors",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.06726"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S Zemel, Antonio Torralba, Raquel Urta- sun, and Sanja Fidler. 2015. Skip-thought vectors. arXiv preprint arXiv:1506.06726 .",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The hungarian method for the assignment problem",
"authors": [
{
"first": "",
"middle": [],
"last": "Harold W Kuhn",
"suffix": ""
}
],
"year": 1955,
"venue": "Naval research logistics quarterly",
"volume": "2",
"issue": "1-2",
"pages": "83--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harold W Kuhn. 1955. The hungarian method for the assignment problem. Naval research logistics quar- terly 2(1-2):83-97.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semaligner: A method and tool for aligning chunks with semantic relation types and semantic similarity scores",
"authors": [
{
"first": "Nabin",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Rajendra",
"middle": [],
"last": "Banjade",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Nobal",
"suffix": ""
},
{
"first": "Vasile",
"middle": [],
"last": "Niraula",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rus",
"suffix": ""
}
],
"year": 2016,
"venue": "CRF",
"volume": "82",
"issue": "",
"pages": "62--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabin Maharjan, Rajendra Banjade, Nobal B Niraula, and Vasile Rus. 2016. Semaligner: A method and tool for aligning chunks with semantic relation types and semantic similarity scores. CRF 82:62-56.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Automated assessment of open-ended student answers in tutorial dialogues using gaussian mixture models",
"authors": [
{
"first": "Nabin",
"middle": [],
"last": "Maharjan",
"suffix": ""
},
{
"first": "Rajendra",
"middle": [],
"last": "Banjade",
"suffix": ""
},
{
"first": "Vasile",
"middle": [],
"last": "Rus",
"suffix": ""
}
],
"year": 2017,
"venue": "FLAIRS Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nabin Maharjan, Rajendra Banjade, and Vasile Rus. 2017. Automated assessment of open-ended student answers in tutorial dialogues using gaussian mixture models (in press). In FLAIRS Conference.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems. pages 3111-3119.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM 38(11):39- 41.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Ppdb 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. Ppdb 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification .",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semilar: The semantic similarity toolkit",
"authors": [
{
"first": "",
"middle": [],
"last": "Vasile Rus",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Mihai",
"suffix": ""
},
{
"first": "Rajendra",
"middle": [],
"last": "Lintean",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Banjade",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Nobal",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Niraula",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stefanescu",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (Conference System Demonstrations). Citeseer",
"volume": "",
"issue": "",
"pages": "163--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasile Rus, Mihai C Lintean, Rajendra Banjade, Nobal B Niraula, and Dan Stefanescu. 2013. Semi- lar: The semantic similarity toolkit. In ACL (Confer- ence System Demonstrations). Citeseer, pages 163- 168.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A latent semantic model with convolutional-pooling structure for information retrieval",
"authors": [
{
"first": "Yelong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "101--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr\u00e9goire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM Inter- national Conference on Conference on Information and Knowledge Management. ACM, pages 101- 110.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improved semantic representations from tree-structured long short-term memory networks",
"authors": [
{
"first": "Kai Sheng",
"middle": [],
"last": "Tai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.00075"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. arXiv preprint arXiv:1503.00075 .",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1506.06724"
]
},
"num": null,
"urls": [],
"raw_text": "Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watch- ing movies and reading books. arXiv preprint arXiv:1506.06724 .",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "R1 system output in evaluation data plotted against human judgments (in ascending order).",
"num": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Results of our submitted runs on test data (1 st is the best result among the participants)."
}
}
}
}