{
"paper_id": "S14-2011",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:32:35.736435Z"
},
"title": "AI-KU: Using Co-Occurrence Modeling for Semantic Similarity",
"authors": [
{
"first": "Osman",
"middle": [],
"last": "Ba\u015fkaya",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Laboratory",
"institution": "Ko\u00e7 University",
"location": {
"settlement": "Istanbul",
"country": "Turkey"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe our unsupervised method submitted to the Cross-Level Semantic Similarity task in SemEval 2014, which computes semantic similarity between two text fragments of different sizes. Our method models each text fragment by using the co-occurrence statistics of either the observed words or their substitutes. The co-occurrence modeling step provides a dense, low-dimensional embedding for each fragment, which allows us to calculate semantic similarity using various similarity metrics. Although our current model ignores syntactic information, we achieved promising results and outperformed all baselines.",
"pdf_parse": {
"paper_id": "S14-2011",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe our unsupervised method submitted to the Cross-Level Semantic Similarity task in SemEval 2014, which computes semantic similarity between two text fragments of different sizes. Our method models each text fragment by using the co-occurrence statistics of either the observed words or their substitutes. The co-occurrence modeling step provides a dense, low-dimensional embedding for each fragment, which allows us to calculate semantic similarity using various similarity metrics. Although our current model ignores syntactic information, we achieved promising results and outperformed all baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semantic similarity is a measure that specifies how similar one text's meaning is to another's. It plays an important role in various Natural Language Processing (NLP) tasks such as textual entailment (Berant et al., 2012), summarization (Lin and Hovy, 2003), question answering (Surdeanu et al., 2011), text classification (Sebastiani, 2002), word sense disambiguation (Sch\u00fctze, 1998), and information retrieval (Park et al., 2005).",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Berant et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 259,
"end": 279,
"text": "(Lin and Hovy, 2003)",
"ref_id": "BIBREF8"
},
{
"start": 301,
"end": 323,
"text": "(Surdeanu et al., 2011",
"ref_id": "BIBREF18"
},
{
"start": 347,
"end": 365,
"text": "(Sebastiani, 2002)",
"ref_id": "BIBREF17"
},
{
"start": 394,
"end": 409,
"text": "(Sch\u00fctze, 1998)",
"ref_id": "BIBREF16"
},
{
"start": 436,
"end": 455,
"text": "(Park et al., 2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are three main approaches to computing the semantic similarity between two text fragments. The first approach uses Vector Space Models (see Turney & Pantel (2010) for an overview), where each text is represented as a bag-of-words model. The similarity between two text fragments can then be computed with various metrics such as cosine similarity. Sparsity of the input is the key problem for these models. Therefore, later works such as Latent Semantic Indexing (?) and",
"cite_spans": [
{
"start": 146,
"end": 168,
"text": "Turney & Pantel (2010)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International Licence. Page numbers and proceedings footer are added by the organisers. Licence details: http://creativecommons.org/licenses/by/4.0/ Topic Models (Blei et al., 2003) overcome sparsity problems via reducing the dimensionality of the model by introducing latent variables. The second approach blends various lexical and syntactic features and attacks the problem through machine learning models. The third approach is based on word-to-word similarity alignment (Pilehvar et al., 2013; Islam and Inkpen, 2008) .",
"cite_spans": [
{
"start": 225,
"end": 244,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 538,
"end": 561,
"text": "(Pilehvar et al., 2013;",
"ref_id": "BIBREF14"
},
{
"start": 562,
"end": 585,
"text": "Islam and Inkpen, 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Cross-Level Semantic Similarity (CLSS) task in SemEval 2014 1 (Jurgens et al., 2014) provides an evaluation framework to assess similarity methods for texts in different volumes (i.e., lexical levels). Unlike previous SemEval and *SEM tasks that were interested in comparing texts with similar volume, this task consists of four subtasks (paragraph2sentence, sentence2phrase, phrase2word and word2sense) that investigate the performance of systems based on pairs of texts of different sizes. A system should report the similarity score of a given pair, ranging from 4 (two items have very similar meanings and the most important ideas, concepts, or actions in the larger text are represented in the smaller text) to 0 (two items do not mean the same thing and are not on the same topic).",
"cite_spans": [
{
"start": 66,
"end": 88,
"text": "(Jurgens et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe our two unsupervised systems that are based on co-occurrence statistics of words. The only difference between the systems is the input they use. The first system uses the words in the text directly (after lemmatization, stop-word removal, and exclusion of non-alphanumeric characters), while the second system uses the most likely substitutes suggested by a 4-gram language model for each observed word position (i.e., context). Note that we participated in two subtasks: paragraph2sentence and sentence2phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper proceeds as follows. Section 2 explains the preprocessing part, the difference between the systems, co-occurrence modeling, and how we calculate the similarity between two texts after co-occurrence modeling has been done. Section 3 discusses the results of our systems and compares them to other participants'. Section 4 discusses the findings and concludes with plans for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section explains the preprocessing steps applied to the data and the details of our two systems 2 . Both systems rely on co-occurrence statistics. The slight difference between the two is that the first uses the words that occur in the given text fragment (e.g., paragraph, sentence), whereas the second employs co-occurrence statistics on 100 substitute samples for each word within the given text fragment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "2"
},
{
"text": "Two AI-KU systems can be distinguished by their inputs. One uses the raw input words, whereas the other uses words' likely substitutes according to a language model. AI-KU 1 : This system uses the words that were in the text. All words are transformed into lowercase equivalents. Lemmatization 3 and stop-word removal were performed, and non-alphanumeric characters were excluded. Table 1 displays the pairs for the following sentence, which is an instance from the paragraph2sentence test set:",
"cite_spans": [],
"ref_spans": [
{
"start": 381,
"end": 388,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "\"Choosing what to buy with a $35 gift card is a hard decision.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "Note that the input that we used to model co-occurrence statistics consists of all such pairs for each fragment in a given subtask. 2 The code to replicate our work can be found at https://github.com/osmanbaskaya/semeval14-task3.",
"cite_spans": [
{
"start": 131,
"end": 132,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "3 Lemmatization is carried out with Stanford CoreNLP and transforms a word into its canonical or base form. AI-KU 2 : Previously, the utilization of high-probability substitutes and their co-occurrence statistics achieved notable performance on Word Sense Induction (WSI) (Baskaya et al., 2013) and Part-of-Speech Induction (Yatbaz et al., 2012) problems. AI-KU 2 represents each context of a word by finding the most likely 100 substitutes suggested by the 4-gram language model we built from ukWaC 4 (Ferraresi et al., 2008), a 2-billion word web-gathered corpus. Since the S-CODE algorithm works with discrete input, for each context we sample 100 substitute words with replacement using their probabilities. Table 2 illustrates the contexts and the substitutes of each context using a bigram language model. No lemmatization, stop-word removal, or lowercase transformation was performed.",
"cite_spans": [
{
"start": 272,
"end": 294,
"text": "(Baskaya et al., 2013)",
"ref_id": "BIBREF0"
},
{
"start": 501,
"end": 525,
"text": "(Ferraresi et al., 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 708,
"end": 715,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "2.1"
},
{
"text": "This subsection explains the unsupervised method we employed to model co-occurrence statistics: the Co-occurrence data Embedding (CODE) method (Globerson et al., 2007) and its spherical extension (S-CODE) proposed by Maron et al. (2010). Unlike in our WSI work, where we ended up with an embedding for each word in the co-occurrence modeling step, in this task we model each text unit, such as a paragraph, a sentence, or a phrase, to obtain embeddings for each instance.",
"cite_spans": [
{
"start": 147,
"end": 171,
"text": "(Globerson et al., 2007)",
"ref_id": "BIBREF4"
},
{
"start": 221,
"end": 240,
"text": "Maron et al. (2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Co-Occurrence Modeling",
"sec_num": "2.2"
},
{
"text": "Input data for the S-CODE algorithm consist of instance-id and word pairs for each word in the text unit for the first system (Table 1 illustrates the pairs for only one text fragment), and of instance-id and substitute pairs for the 100 substitute samples of each word in the text for the second system. In the initial step, S-CODE places all instance-ids and words (or substitutes, depending on the system) randomly on an n-dimensional sphere. If two different instances have the same word or substitute, these two instances attract one another; otherwise, they repel each other. When S-CODE converges, instances that have similar words or substitutes will be located close together; otherwise, they will be distant from each other.",
"cite_spans": [],
"ref_spans": [
{
"start": 106,
"end": 114,
"text": "(Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Co-Occurrence Modeling",
"sec_num": "2.2"
},
{
"text": "AI-KU 1 : Based on the training set performance for various n (i.e., the number of dimensions for the S-CODE algorithm), we picked 100 for both subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Co-Occurrence Modeling",
"sec_num": "2.2"
},
{
"text": "We picked n to be 200 and 100 for paragraph2sentence and sentence2phrase subtasks, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AI-KU 2 :",
"sec_num": null
},
{
"text": "Since this step is unsupervised, we tried to enrich the data with ukWaC; however, enrichment with ukWaC did not work well on the training data. Therefore, the reported scores were obtained using only the training and the test data provided by the organizers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "When S-CODE converges, there is an n-dimensional embedding for each textual level (e.g., paragraph, sentence, phrase) instance. We can use a similarity metric to calculate the similarity between these embeddings. For this task, systems should report only the similarity between two specific cross-level instances. We used cosine similarity to calculate the similarity between two textual units. This is the final similarity for the two instances; no further processing (e.g., scaling) is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Calculation",
"sec_num": "2.3"
},
{
"text": "In this task, two correlation metrics were used to evaluate the systems: Pearson correlation and Spearman's rank correlation. Pearson correlation tests the degree of similarity between the system's similarity ratings and the gold standard ratings. Spearman's rank correlation measures the degree of similarity between two rankings: the similarity ratings provided by a system and the gold standard ratings. Tables 3 and 4 show the scores for the Paragraph-2-Sentence and Sentence-2-Phrase subtasks on the training data, respectively. These tables contain the best individual scores for the performance metrics, the Normalized Longest Common Substring (LCS) baseline, which was given by the task organizers, and three additional baselines: lin (Lin, 1998), lch (Leacock and Chodorow, 1998), and the Jaccard Index (JI) baseline. lin uses the information content (Resnik, 1995) of the least common subsumer of concepts A and B. Information content (IC) indicates the specificity of a concept; the least common subsumer of concepts A and B is the most specific concept from which A and B are inherited. lin similarity 5 returns the ratio of twice the IC of the least common subsumer of A and B to the sum of the ICs of both concepts. On the other hand, lch is a score denoting how similar two concepts are, calculated using the shortest path that connects the concepts and the maximum depth of the taxonomy in which the concepts occur 6 (please see Pedersen et al. (2004) for further details of these measures). These two baselines were calculated as follows. Using the Stanford Part-of-Speech Tagger (Toutanova and Manning, 2000), we tagged words across all textual levels. After tagging, we found the synsets of each word matched with its part-of-speech using WordNet 3.0 (Miller and Fellbaum, 1998). For each synset of a word in the shorter textual unit (e.g., a sentence is shorter than a paragraph), we calculated the lin/lch measure against each synset of all words in the longer textual unit and picked the highest score. When we had found the scores for all words, we calculated their mean to obtain the similarity for one pair in the test set. Finally, the Jaccard Index baseline simply calculates the number of words in common (intersection) between the two textual levels, normalized by the total number of words (union). Tables 5 and 6 demonstrate the AI-KU runs on the test data. Next, we present our results pertaining to the test data.",
"cite_spans": [
{
"start": 727,
"end": 738,
"text": "(Lin, 1998)",
"ref_id": "BIBREF9"
},
{
"start": 745,
"end": 773,
"text": "(Leacock and Chodorow, 1998)",
"ref_id": "BIBREF7"
},
{
"start": 846,
"end": 860,
"text": "(Resnik, 1995)",
"ref_id": "BIBREF15"
},
{
"start": 1448,
"end": 1470,
"text": "Pedersen et al. (2004)",
"ref_id": "BIBREF13"
},
{
"start": 1586,
"end": 1615,
"text": "(Toutanova and Manning, 2000)",
"ref_id": "BIBREF19"
},
{
"start": 1759,
"end": 1786,
"text": "(Miller and Fellbaum, 1998)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 403,
"end": 417,
"text": "Tables 3 and 4",
"ref_id": "TABREF3"
},
{
"start": 2316,
"end": 2323,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Similarity Calculation",
"sec_num": "2.3"
},
{
"text": "Paragraph2Sentence: Both systems outperformed all the baselines on both metrics. The best score for this subtask was .837; our systems achieved .732 and .698 on Pearson and performed similarly on the Spearman metric. These scores are promising since our current unsupervised systems are based on a bag-of-words approach; they do not utilize any syntactic information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "3"
},
{
"text": "Sentence2Phrase: In this subtask, the AI-KU systems outperformed all baselines, with the exception of the AI-KU 2 system, which performed slightly worse than LCS on the Spearman metric. Performances of systems and baselines were lower than in the Paragraph2Sentence subtask, since smaller textual units (such as phrases) make the problem more difficult.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "3"
},
{
"text": "In this work, we introduced two unsupervised systems that utilize co-occurrence statistics and represent textual units as dense, low-dimensional embeddings. Although the current systems are based on a bag-of-words approach and discard syntactic information, they achieved promising results in both the paragraph2sentence and sentence2phrase subtasks. For future work, we will extend our algorithm by adding syntactic information (e.g., dependency parsing output) into the co-occurrence modeling step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "http://alt.qcri.org/semeval2014/task3/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available here: http://wacky.sslmit.unibo.it",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "5 lin similarity = 2 * IC(lcs) / (IC(A) + IC(B)), where lcs indicates the least common subsumer of concepts A and B. 6 The exact formulation is \u2212log(L/2d), where L is the shortest path length and d is the taxonomy depth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "AI-KU: Using substitute vectors and co-occurrence modeling for word sense induction and disambiguation",
"authors": [
{
"first": "Osman",
"middle": [],
"last": "Baskaya",
"suffix": ""
},
{
"first": "Enis",
"middle": [],
"last": "Sert",
"suffix": ""
},
{
"first": "Volkan",
"middle": [],
"last": "Cirik",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics (*SEM)",
"volume": "2",
"issue": "",
"pages": "300--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Osman Baskaya, Enis Sert, Volkan Cirik, and Deniz Yuret. 2013. AI-KU: Using substitute vectors and co-occurrence modeling for word sense induction and disambiguation. In Proceedings of the Second Joint Conference on Lexical and Computational Se- mantics (*SEM), Volume 2: Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 300-306.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning entailment relations by global graph structure optimization",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "1",
"pages": "73--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2012. Learning entailment relations by global graph structure optimization. Computational Linguistics, 38(1):73-111.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "The Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Introducing and evaluating ukwac, a very large web-derived corpus of english",
"authors": [
{
"first": "Adriano",
"middle": [],
"last": "Ferraresi",
"suffix": ""
},
{
"first": "Eros",
"middle": [],
"last": "Zanchetta",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Bernardini",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 4th Web as Corpus Workshop (WAC-4)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukwac, a very large web-derived corpus of english. In In Proceedings of the 4th Web as Corpus Work- shop (WAC-4).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Euclidean embedding of cooccurrence data",
"authors": [
{
"first": "Gal",
"middle": [],
"last": "Amir Globerson",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Chechik",
"suffix": ""
},
{
"first": "Naftali",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tishby",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Machine Learning Research",
"volume": "8",
"issue": "10",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Globerson, Gal Chechik, Fernando Pereira, and Naftali Tishby. 2007. Euclidean embedding of co- occurrence data. Journal of Machine Learning Re- search, 8(10).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Semantic text similarity using corpus-based word similarity and string similarity",
"authors": [
{
"first": "Aminul",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2008,
"venue": "ACM Transactions on Knowledge Discovery from Data (TKDD)",
"volume": "2",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aminul Islam and Diana Inkpen. 2008. Semantic text similarity using corpus-based word similarity and string similarity. ACM Transactions on Knowledge Discovery from Data (TKDD), 2(2):10.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2014 task 3: Cross-level semantic similarity",
"authors": [
{
"first": "David",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Jurgens, Mohammed Taher Pilehvar, and Roberto Navigli. 2014. Semeval-2014 task 3: Cross-level semantic similarity. In Proceedings of the 8th International Workshop on Semantic Evalu- ation (SemEval-2014). August 23-24, 2014, Dublin, Ireland.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Combining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database",
"authors": [
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Chodorow",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "49",
"issue": "",
"pages": "265--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudia Leacock and Martin Chodorow. 1998. Com- bining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database, 49(2):265-283.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic evaluation of summaries using n-gram cooccurrence statistics",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "71--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin and Eduard Hovy. 2003. Auto- matic evaluation of summaries using n-gram co- occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Hu- man Language Technology-Volume 1, pages 71-78.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An information-theoretic definition of similarity",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "ICML",
"volume": "98",
"issue": "",
"pages": "296--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. An information-theoretic defini- tion of similarity. In ICML, volume 98, pages 296- 304.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sphere Embedding: An Application to Partof-Speech Induction",
"authors": [
{
"first": "Yariv",
"middle": [],
"last": "Maron",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lamar",
"suffix": ""
},
{
"first": "Elie",
"middle": [],
"last": "Bienenstock",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Neural Information Processing Systems 23",
"volume": "",
"issue": "",
"pages": "1567--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yariv Maron, Michael Lamar, and Elie Bienenstock. 2010. Sphere Embedding: An Application to Part- of-Speech Induction. In J Lafferty, C K I Williams, J Shawe-Taylor, R S Zemel, and A Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1567-1575.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Wordnet: An electronic lexical database",
"authors": [
{
"first": "George",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Miller and Christiane Fellbaum. 1998. Word- net: An electronic lexical database.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Techniques for improving web retrieval effectiveness. Information processing & management",
"authors": [
{
"first": "Eui-Kyu",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Dong-Yul",
"middle": [],
"last": "Ra",
"suffix": ""
},
{
"first": "Myung-Gil",
"middle": [],
"last": "Jang",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "41",
"issue": "",
"pages": "1207--1223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eui-Kyu Park, Dong-Yul Ra, and Myung-Gil Jang. 2005. Techniques for improving web retrieval ef- fectiveness. Information processing & management, 41(5):1207-1223.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Wordnet:: Similarity: measuring the relatedness of concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "Demonstration Papers at HLT-NAACL 2004",
"volume": "",
"issue": "",
"pages": "38--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Miche- lizzi. 2004. Wordnet:: Similarity: measuring the re- latedness of concepts. In Demonstration Papers at HLT-NAACL 2004, pages 38-41.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Align, disambiguate and walk: A unified approach for measuring semantic similarity",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mohammad Taher Pilehvar",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Jurgens",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, disambiguate and walk: A unified approach for measuring semantic similarity. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguistics (ACL 2013).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using information content to evaluate semantic similarity in a taxonomy",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. arXiv preprint cmp-lg/9511007.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic word sense discrimination",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "1",
"pages": "97--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1998. Automatic word sense dis- crimination. Computational Linguistics, 24(1):97- 123.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Machine learning in automated text categorization",
"authors": [
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM computing surveys (CSUR)",
"volume": "34",
"issue": "1",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabrizio Sebastiani. 2002. Machine learning in auto- mated text categorization. ACM computing surveys (CSUR), 34(1):1-47.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to rank answers to nonfactoid questions from web collections",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Zaragoza",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics",
"volume": "37",
"issue": "2",
"pages": "351--383",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2011. Learning to rank answers to non- factoid questions from web collections. Computa- tional Linguistics, 37(2):351-383.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Enriching the knowledge sources used in a maximum entropy part-of-speech tagger",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "13",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Christopher D Manning. 2000. Enriching the knowledge sources used in a maxi- mum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th An- nual Meeting of the Association for Computational Linguistics-Volume 13, pages 63-70.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "From Frequency to Meaning: Vector Space Models of Semantics",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney and Patrick Pantel. 2010. From Fre- quency to Meaning: Vector Space Models of Se- mantics. Journal of Artificial Intelligence Research, 37:141-188.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning syntactic categories using paradigmatic representations of word context",
"authors": [
{
"first": "Enis",
"middle": [],
"last": "Mehmet Ali Yatbaz",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Sert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "940--951",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehmet Ali Yatbaz, Enis Sert, and Deniz Yuret. 2012. Learning syntactic categories using paradigmatic representations of word context. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 940-951.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"text": "Instance id-word pairs for a given sentence.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF2": {
"html": null,
"text": "Contexts and substitute distributions when a bigram language model is used. w and n denote an arbitrary word in the vocabulary and the vocabulary size, respectively.",
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">System Pearson Spearman</td></tr><tr><td>Paragraph-2-Sentence</td><td>AI-KU 1 AI-KU 2 LCS lch lin JI</td><td>0.671 0.542 0.499 0.584 0.568 0.613</td><td>0.676 0.531 0.602 0.596 0.562 0.644</td></tr></table>"
},
"TABREF3": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF5": {
"html": null,
"text": "Sentence2phrase subtask scores for the training data.",
"num": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF7": {
"html": null,
"text": "",
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Paragraph-2-Sentence subtask scores for</td></tr><tr><td>the test data. Best indicates the best correlation</td></tr><tr><td>score for the subtask. LCS stands for Normalized</td></tr><tr><td>Longest Common Substring. Subscripts in AI-KU</td></tr><tr><td>systems specify the run number.</td></tr></table>"
},
"TABREF9": {
"html": null,
"text": "Sentence2phrase subtask scores for the test data.",
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}