{
"paper_id": "S15-2007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:37:55.546482Z"
},
"title": "ROB: Using Semantic Meaning to Recognize Paraphrases",
"authors": [
{
"first": "Rob",
"middle": [],
"last": "Van Der Goot",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Gertjan",
"middle": [],
"last": "Van Noord",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Groningen",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Paraphrase recognition is the task of identifying whether two pieces of natural language represent similar meanings. This paper describes a system participating in the shared task 1 of SemEval 2015, which is about paraphrase detection and semantic similarity in twitter. Our approach is to exploit semantically meaningful features to detect paraphrases. An existing state-of-the-art model for predicting semantic similarity is adapted to this task. A wide variety of features is used, ranging from different types of models, to lexical overlap and synset overlap. A maximum entropy classifier is then trained on these features. In addition to the detection of paraphrases, a similarity score is also predicted, using the probabilities of the classifier. To improve the results, normalization is used as preprocessing step. Our final system achieves a F1 score of 0.620 (10th out of 18 teams), and a Pearson correlation of 0.515 (6th out of 13 teams).",
"pdf_parse": {
"paper_id": "S15-2007",
"_pdf_hash": "",
"abstract": [
{
"text": "Paraphrase recognition is the task of identifying whether two pieces of natural language represent similar meanings. This paper describes a system participating in the shared task 1 of SemEval 2015, which is about paraphrase detection and semantic similarity in twitter. Our approach is to exploit semantically meaningful features to detect paraphrases. An existing state-of-the-art model for predicting semantic similarity is adapted to this task. A wide variety of features is used, ranging from different types of models, to lexical overlap and synset overlap. A maximum entropy classifier is then trained on these features. In addition to the detection of paraphrases, a similarity score is also predicted, using the probabilities of the classifier. To improve the results, normalization is used as preprocessing step. Our final system achieves a F1 score of 0.620 (10th out of 18 teams), and a Pearson correlation of 0.515 (6th out of 13 teams).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A good paraphrase detection system can be useful in many natural language processing tasks, like searching, translating or summarization. For clean texts, F1 scores as high as 0.84 have been reported on paraphrase detection (Madnani et al., 2012) .",
"cite_spans": [
{
"start": 224,
"end": 246,
"text": "(Madnani et al., 2012)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, previous research focused almost solely on clean text. Thanks to the Twitter Paraphrase Corpus (Xu et al., 2014) , this has now changed. Carrying out this task on noisy texts is a new challenge. The abundant availability of social media data and the high redundancy that naturally exists in this data makes this task highly relevant (Zanzotto et al., 2011) .",
"cite_spans": [
{
"start": 104,
"end": 121,
"text": "(Xu et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 342,
"end": 365,
"text": "(Zanzotto et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is based on the model described by Bjerva et al. (2014) . This model has proved to achieve state-of-the-art results at predicting semantic similarity (Marelli et al., 2014) . It is based on overlaps of semantically meaningful properties of sentences. A random forest regression model (Breiman, 2001 ) combines these features to predict a semantic similarity score. We rely heavily on the assumption that semantically meaningful features can also be used to identify paraphrases.",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "Bjerva et al. (2014)",
"ref_id": "BIBREF0"
},
{
"start": 163,
"end": 185,
"text": "(Marelli et al., 2014)",
"ref_id": "BIBREF10"
},
{
"start": 297,
"end": 311,
"text": "(Breiman, 2001",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The features of the existing system are also used in the new system. However, the old system used a regression model, while the new task demands classbased output. Hence, the machine learning model model is changed to a maximum entropy model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Twitter Paraphrase Corpus consists of two distinct parts, the training data differs significantly from the test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "The 17,790 tweet pairs for training are collected between April 24th and May 3rd, 2014. These tweets are selected based on the trending topics of that period. Annotation of the training data is done by human annotators from Amazon Mechanical Turk. Every sentence pair is annotated by 5 different annotators, resulting in a score of 0-5. Based on this score we create a binary paraphrase judgement. If 0, 1 or 2 annotators judged positively, we treat the sentence pair as not being a paraphrase, for 3, 4 or 5 positive judgements we treat the sentence pair as a paraphrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
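{
"text": "As a minimal illustration (not part of the released data; the function name is our own), this labelling step amounts to:\ndef is_paraphrase(positive_votes):\n    # at least 3 of the 5 annotators judged the pair to be a paraphrase\n    return positive_votes >= 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},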
{
"text": "The test data is collected between May 13th and June 10th, and is thus based on different trending topics. This assures the integrity of the evaluation. In contrast to the training data, this data is annotated by an expert similarity rating on a 5-point Likert scale (Likert, 1932) , to mimic the training data. Sentence pairs with a similarity score of 0, 1 and 2 are considered non-paraphrases, and sentence pairs with scores of 4 and 5 are considered paraphrases. The one uncertain category (similarity score of 3) is discarded in the evaluation.",
"cite_spans": [
{
"start": 267,
"end": 281,
"text": "(Likert, 1932)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "Using this data, we end up with two different types of gold data per sentence pair. Firstly, we have the binary gold data that indicates if a sentence pair is a paraphrase. Secondly, we have the raw annotations that can be used as a similarity score. These annotations are normalized by dividing them by their maximum score (5), so we end up with 0.0, 0.2, 0.4, 0.6, 0.8, 1.0 as possible similarity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
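{
"text": "A minimal sketch of this second conversion (the function name is our own):\ndef similarity_score(positive_votes):\n    # divide the number of positive judgements (0-5) by the maximum score of 5\n    return positive_votes / 5.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},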
{
"text": "The tweets in the corpus are already tokenized using TweetMotif (O'Connor et al., 2010) . Additionally, Part Of Speech (POS) tags are provided by a tagger that is adapted to twitter (Derczynski et al., 2013) . Named entity tags are also obtained from an adapted tagger (Ritter et al., 2011) .",
"cite_spans": [
{
"start": 64,
"end": 87,
"text": "(O'Connor et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 182,
"end": 207,
"text": "(Derczynski et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 269,
"end": 290,
"text": "(Ritter et al., 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "The model is based on a state-of-the-art semantic similarity prediction model (Bjerva et al., 2014) . It is mainly based on overlap features extracted from different parsers, but also includes synset overlap, and a Compositional Distributional Semantic Model (CDSM). The parsers used in this model are a constituency parser (Steedman, 2001) , logical parser Paradox (Claessen and S\u00f6rensson, 2003) and the DRS parser Boxer (Bos, 2008) .",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "(Bjerva et al., 2014)",
"ref_id": "BIBREF0"
},
{
"start": 324,
"end": 340,
"text": "(Steedman, 2001)",
"ref_id": "BIBREF14"
},
{
"start": 366,
"end": 396,
"text": "(Claessen and S\u00f6rensson, 2003)",
"ref_id": "BIBREF4"
},
{
"start": 422,
"end": 433,
"text": "(Bos, 2008)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "Our model uses 25 features in total. Due to space constraints we cannot describe them all in detail here. Instead we group the features as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "\u2022 Lexical features: word overlap, proportional sentence length difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "\u2022 POS: noun overlap, verb overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "\u2022 Logical model: instance overlap, relation overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "\u2022 DRS: agent overlap, patient overlap, DRS complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "\u2022 Entailments: binary features for: neutral, entailment and contradiction predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "\u2022 CDSM: The cosine distance between the element wise addition of the vectors in each sentence is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "\u2022 Synsets (WordNet): The distance of the closest synsets of each word in both sentences, and the distance between the noun synsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "\u2022 Named entity: overlap between named entities 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
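{
"text": "To make the CDSM and synset features concrete, the following is a minimal sketch (not the authors' implementation; the function names, the vectors dictionary and the dim parameter are our own, and it assumes NLTK's WordNet interface and a set of pre-trained word vectors):\nimport numpy as np\nfrom nltk.corpus import wordnet as wn\n\ndef cdsm_feature(tokens1, tokens2, vectors, dim=300):\n    # element-wise sum of the word vectors of each sentence, then the cosine\n    # between the two sum vectors (the paper phrases this as a distance)\n    def sum_vec(tokens):\n        vecs = [vectors[t] for t in tokens if t in vectors]\n        return np.sum(vecs, axis=0) if vecs else np.zeros(dim)\n    v1, v2 = sum_vec(tokens1), sum_vec(tokens2)\n    denom = np.linalg.norm(v1) * np.linalg.norm(v2)\n    return float(np.dot(v1, v2) / denom) if denom else 0.0\n\ndef closest_synset_similarity(word1, word2):\n    # path similarity of the closest pair of synsets of the two words\n    best = 0.0\n    for s1 in wn.synsets(word1):\n        for s2 in wn.synsets(word2):\n            sim = s1.path_similarity(s2)\n            if sim is not None and sim > best:\n                best = sim\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},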
{
"text": "For a complete detailed overview we refer to the paper describing the semantic similarity system (Bjerva et al., 2014) , or for even more detail (van der Goot, 2014).",
"cite_spans": [
{
"start": 97,
"end": 118,
"text": "(Bjerva et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.1"
},
{
"text": "We will compare two different maximum entropy models. The maximum entropy implementation of Scikit-Learn (Pedregosa et al., 2011) is used.",
"cite_spans": [
{
"start": 105,
"end": 129,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Models",
"sec_num": "3.2"
},
{
"text": "The first maximum entropy model is a binary model that also outputs a probability. From this model, the normal binary output is not used, instead we use the estimated probability that something is a paraphrase. Using this value, we can set our own threshold to have more control on the final output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Models",
"sec_num": "3.2"
},
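{
"text": "A minimal sketch of this setup (not the authors' code; it assumes scikit-learn's LogisticRegression as the maximum entropy implementation, and the variable names are placeholders):\nfrom sklearn.linear_model import LogisticRegression\n\ndef binary_paraphrase_probs(X_train, y_train, X_test):\n    # y_train holds 1 for paraphrase pairs and 0 for non-paraphrases\n    clf = LogisticRegression()\n    clf.fit(X_train, y_train)\n    # ignore the default 0.5 decision and return the probability of the\n    # positive class, so that a custom threshold can be applied later\n    return clf.predict_proba(X_test)[:, 1]\n\n# example: predictions for a threshold chosen on held-out data\n# preds = binary_paraphrase_probs(X_tr, y_tr, X_te) >= 0.4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Models",
"sec_num": "3.2"
},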
{
"text": "The second maximum entropy model is a multiclass model. This classifier is based on the 6 different classes in our data, and thus outputs 6 probabilities. We use the similarity score of each class as weight to convert all probabilities to one probability. For each class we multiply the similarity score with the probability that our model predicts for this class. The results of the 6 classes are then summed to get a single probability. This classification model uses more specific training data, thus it should have a more precise output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Models",
"sec_num": "3.2"
},
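{
"text": "A sketch of this weighted conversion (our own names; the class scores follow the normalized similarity scores from Section 2):\nimport numpy as np\n\nCLASS_SCORES = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])\n\ndef weighted_probability(class_probs):\n    # class_probs: the 6 probabilities output by the multi-class classifier;\n    # the result is their expectation under the class similarity scores\n    return float(np.dot(class_probs, CLASS_SCORES))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximum Entropy Models",
"sec_num": "3.2"
},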
{
"text": "A normalization approach very similar to that described by Han et al. (2013) is used to try to improve the parses. This normalization consists of three steps. The first step is to decide which tokens might need a correction, this is decided by a dictionary lookup in the Aspell dictionary 2 .",
"cite_spans": [
{
"start": 59,
"end": 76,
"text": "Han et al. (2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3.3"
},
{
"text": "The second step is the generation of possible corrections for every misspelled word. For this, the Aspell source code is adapted to lower the costs of deletion in its algorithm, because we assume words are often typed in an abbreviated form in this domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3.3"
},
{
"text": "The last step is the ranking of the candidates. Here we use a different approach than the traditional approach. Instead of using a static formula to predict the probability of each candidate, we want to use a more flexible approach. Google N-gram probabilities (Brants and Franz, 2006) , Aspell scores and dictionary lookups are combined using logistic regression. To adjusts the weights of the regression model, 200 sentences are normalized manually. The resulting model is then applied to all the other sentences.",
"cite_spans": [
{
"start": 261,
"end": 285,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3.3"
},
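{
"text": "A rough sketch of this ranking step (not the authors' code; the feature set and names are our own guesses, and the ranker is assumed to be a scikit-learn LogisticRegression fitted on the 200 manually normalized sentences):\ndef best_candidate(candidates, ranker):\n    # candidates: list of (word, [ngram_logprob, aspell_score, in_dictionary])\n    scored = [(ranker.predict_proba([feats])[0][1], word)\n              for word, feats in candidates]\n    return max(scored)[1]  # candidate with the highest predicted probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3.3"
},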
{
"text": "This normalization approach does not reach a perfect accuracy, and normalizing a sentence might remove meaningful information. So instead of using the normalization as straightforward pre processing of the data, we use the raw and the normalized sentence in the model. For each feature, scores are calculated for both versions of the sentence. The highest of these scores be used as input for our maximum entropy model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3.3"
},
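{
"text": "A minimal sketch of this combination (our own names):\ndef combined_features(raw_pair, norm_pair, feature_fns):\n    # compute every feature on the raw and the normalized sentence pair\n    # and keep the higher of the two scores\n    return [max(f(*raw_pair), f(*norm_pair)) for f in feature_fns]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "3.3"
},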
{
"text": "This chapter is divided in the two sub tasks of paraphrase detection and similarity prediction. A strong 2 www.aspell.net baseline is used, namely a state-of-the art model for clean text: a logistic regression model that uses simple lexical overlap features (Das and Smith, 2009) .",
"cite_spans": [
{
"start": 258,
"end": 279,
"text": "(Das and Smith, 2009)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "The evaluation is done on expert annotations, which are only available for the test set. The binary and multi-class classifiers are evaluated separately. Additionally, we also tried to improve the system by using normalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Detection",
"sec_num": "4.1"
},
{
"text": "The precision and recall of both classifiers is plotted in Figure 1 . In this graph the differences are barely visible, therefore it looks like both models are approximately equal.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Paraphrase Detection",
"sec_num": "4.1"
},
{
"text": "If we look at the F-scores of Figure 2 , the differences are bigger. The highest F-scores of both classifiers are 0.604 and 0.610 for respectively the binary and the multi-class classifier. Both classifiers outperform the baseline F-score of 0.583.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Paraphrase Detection",
"sec_num": "4.1"
},
{
"text": "These graphs also show that the default output of the binary deos not perform well, so it is really necessary to use the probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrase Detection",
"sec_num": "4.1"
},
{
"text": "We use the same grouping for features as in 3.1. The absolute weights of all features within each group are summed. For the multi-class classifier the weights are averaged over all 6 classes. Also an ablation experiment is done. An overview this evaluation is shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 276,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Comparisons",
"sec_num": "4.1.1"
},
{
"text": "In the ablation experiments we see that it is not always better to use more features. Especially the logical model should be left out in the multi-class entropy model. The models differ in some aspects, whereas some features are important for both. More specifically, we can see that the parsers outputs and lexical features are more important for the multiclass model, while the other features are more important for the binary model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Comparisons",
"sec_num": "4.1.1"
},
{
"text": "After the normalization of the sentences, we run the systems again. These runs are not plotted in the graphs, because the differences are small. Despite the small differences, there is one little performance boost on the top-runs of the multi-class classifiers, resulting in the highest F-score of 0.62.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalization",
"sec_num": "4.1.2"
},
{
"text": "Even though we do not have real semantic similarity training data, we simulate semantic similarity using the amount of the positive judgements per sentence pair. Our system is evolved from a semantic similarity prediction system, so this model should work well for this task. The Pearson correlation between the different annotations of experts (test) and crowdsourcing (training) is 0.735.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Prediction",
"sec_num": "4.2"
},
{
"text": "For this sub task we will also try different heuristics using both our classifiers. We start with the multi-class classifier, because it is trained to give back a similarity score. The model produces probabilities for each class, the class with the highest probability is used as output. We call this the Highest P method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Prediction",
"sec_num": "4.2"
},
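{
"text": "A one-line sketch of Highest P (reusing the CLASS_SCORES array from the sketch in Section 3.2; the function name is our own):\nimport numpy as np\n\nCLASS_SCORES = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])\n\ndef highest_p(class_probs):\n    # the similarity score of the most probable class is used as output\n    return float(CLASS_SCORES[int(np.argmax(class_probs))])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Prediction",
"sec_num": "4.2"
},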
{
"text": "Another model can be built using the predicted Baseline Highest P Binary P Weighted R 0.511 0.416 0.508 0.515 Table 2 : Pearson correlation (R) for the different similarity prediction approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Similarity Prediction",
"sec_num": "4.2"
},
{
"text": "weights, similar to section 3.2. We refer to this as the Weighted method. Besides the multi-class classifier, we also trained a binary classifier. The only way for this classifier to output a degree score, is using the probability. This is called Binary P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Similarity Prediction",
"sec_num": "4.2"
},
{
"text": "Only the weighted method beats the baseline. Results of all three approaches and the baseline can be found in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Similarity Prediction",
"sec_num": "4.2"
},
{
"text": "The main conclusion to draw from these experiments is that by using deep semantic features, we can achieve a maximum F-score of 0.61 on the paraphrase detection task. By using normalization we can improve this F-score to 0.62.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Following from this, it is safe to conclude that a semantic similarity prediction system can be used in paraphrase detection reasonably well. Our system had an average result on this shared task (10th out of 18 teams) 3 . The advantage of this system is that it can be created easily from existing tools.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Unsurprisingly, the results on the semantic similarity task were better (6th out of 13 teams). Even though the gold data does not represent a real semantic similarity, but a scale of positive annotations of the paraphrase detection task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "The source code of our system has been made publicly available 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "This is the only feature not present in the original semantic similarity system",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://alt.qcri.org/semeval2015/task1 4 https://bitbucket.org/robvanderg/sem15",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This paper is part of the 'Parsing Algorithms for Uncertain Input' project, supported by the Nuance Foundation.We would like to thank the organizers of the shared task (Xu et al., 2015) . Additionally, we would also like to thank the anonymous reviewers and Johannes Bjerva for the valuable feedback on this paper.",
"cite_spans": [
{
"start": 168,
"end": 185,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Meaning Factory: Formal semantics for recognizing textual entailment and determining semantic similarity",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Van Der Goot",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2014,
"venue": "ternational Workshop on Semantic Evaluation",
"volume": "2014",
"issue": "",
"pages": "642--646",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Bjerva, Johan Bos, Rob van der Goot, and Malvina Nissim. 2014. The Meaning Factory: For- mal semantics for recognizing textual entailment and determining semantic similarity. SemEval 2014: In- ternational Workshop on Semantic Evaluation, pages 642-646.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Wide-coverage semantic analysis with Boxer",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Semantics in Text Processing",
"volume": "",
"issue": "",
"pages": "277--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Bos. 2008. Wide-coverage semantic analysis with Boxer. In Proceedings of the 2008 Conference on Se- mantics in Text Processing, pages 277-286.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Web 1T5-gram corpus version 1.1",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T5-gram corpus version 1.1.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Random forests. Machine learning",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "45",
"issue": "",
"pages": "5--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 2001. Random forests. Machine learning, 45(1):5-32.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "New techniques that improve MACE-style finite model finding",
"authors": [
{
"first": "Koen",
"middle": [],
"last": "Claessen",
"suffix": ""
},
{
"first": "Niklas",
"middle": [],
"last": "S\u00f6rensson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the CADE-19 Workshop: Model Computation-Principles, Algorithms, Applications",
"volume": "",
"issue": "",
"pages": "11--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koen Claessen and Niklas S\u00f6rensson. 2003. New tech- niques that improve MACE-style finite model find- ing. In Proceedings of the CADE-19 Workshop: Model Computation-Principles, Algorithms, Applica- tions, pages 11-27.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Paraphrase identification as probabilistic quasi-synchronous recognition",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "468--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das and Noah A Smith. 2009. Paraphrase iden- tification as probabilistic quasi-synchronous recogni- tion. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th Inter- national Joint Conference on Natural Language Pro- cessing of the AFNLP: Volume 1-Volume 1, pages 468- 476.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Twitter part-of-speech tagging for all: Overcoming sparse and noisy data",
"authors": [
{
"first": "Leon",
"middle": [],
"last": "Derczynski",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kalina",
"middle": [],
"last": "Bontcheva",
"suffix": ""
}
],
"year": 2013,
"venue": "RANLP",
"volume": "",
"issue": "",
"pages": "198--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leon Derczynski, Alan Ritter, Sam Clark, and Kalina Bontcheva. 2013. Twitter part-of-speech tagging for all: Overcoming sparse and noisy data. In RANLP, pages 198-206.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Lexical normalization for social media text",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2013,
"venue": "ACM Transactions on Intelligent Systems and Technology",
"volume": "4",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2013. Lex- ical normalization for social media text. ACM Trans- actions on Intelligent Systems and Technology (TIST), 4(1):5.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A technique for the measurement of attitudes. Archives of psychology",
"authors": [
{
"first": "Rensis",
"middle": [],
"last": "Likert",
"suffix": ""
}
],
"year": 1932,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of psychology.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Re-examining machine translation metrics for paraphrase identification",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Tetreault",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "182--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the As- sociation for Computational Linguistics: Human Lan- guage Technologies, pages 182-190.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raf- faella Bernardi, Stefano Menini, and Roberto Zampar- elli. 2014. Semeval-2014 task 1: Evaluation of com- positional distributional semantic models on full sen- tences through semantic relatedness and textual entail- ment. SemEval-2014.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Tweetmotif: Exploratory search and topic summarization for Twitter",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Krieger",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Ahn",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan O'Connor, Michel Krieger, and David Ahn. 2010. Tweetmotif: Exploratory search and topic sum- marization for Twitter.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Scikit-learn: Machine learning in Python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vin- cent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Named entity recognition in tweets: an experimental study",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1524--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Sam Clark, Oren Etzioni, et al. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1524-1534.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic estimation of semantic relatedness for sentences using machine learning",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2001. The Syntactic Process. Rob van der Goot. 2014. Automatic estimation of se- mantic relatedness for sentences using machine learn- ing. Master's thesis, University of Groningen.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Extracting lexically divergent paraphrases from Twitter",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "435--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Alan Ritter, Chris Callison-Burch, William B Dolan, and Yangfeng Ji. 2014. Extracting lexically divergent paraphrases from Twitter. Transactions of the Association for Computational Linguistics, 2:435- 448.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "SemEval-2015 Task 1: Paraphrase and semantic similarity in Twitter (PIT)",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Chris Callison-Burch, and William B. Dolan. 2015. SemEval-2015 Task 1: Paraphrase and semantic similarity in Twitter (PIT). In Proceedings of the 9th International Workshop on Semantic Evaluation (Se- mEval).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Linguistic redundancy in Twitter",
"authors": [
{
"first": "Fabio",
"middle": [
"Massimo"
],
"last": "Zanzotto",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Kostas",
"middle": [],
"last": "Tsioutsiouliklis",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "659--669",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Massimo Zanzotto, Marco Pennacchiotti, and Kostas Tsioutsiouliklis. 2011. Linguistic redun- dancy in Twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 659-669.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Precision and recall for the different classifiers.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "F-Score for the different classifiers. P is the threshold that decides if a sentence pair is a paraphrase.",
"uris": null
}
}
}
}