{
"paper_id": "S13-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:42:23.986156Z"
},
"title": "IBM_EG-CORE: Comparing multiple Lexical and NE matching features in measuring Semantic Textual similarity",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Noeman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Cairo Technology",
"location": {
"postBox": "P.O. Box 166",
"settlement": "Al-Ahram",
"country": "Egypt"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present in this paper the systems we participated with in the Semantic Textual Similarity task at SEM 2013. The Semantic Textual Similarity Core task (STS) computes the degree of semantic equivalence between two sentences where the participant systems will be compared to the manual scores, which range from 5 (semantic equivalence) to 0 (no relation). We combined multiple text similarity measures of varying complexity. The experiments illustrate the different effect of four feature types including direct lexical matching, idf-weighted lexical matching, modified BLEU N-gram matching and named entities matching. Our team submitted three runs during the task evaluation period and they ranked number 11, 15 and 19 among the 90 participating systems according to the official Mean Pearson correlation metric for the task. We also report an unofficial run with mean Pearson correlation of 0.59221 on STS2013 test dataset, ranking as the 3 rd best system among the 90 participating systems.",
"pdf_parse": {
"paper_id": "S13-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We present in this paper the systems we participated with in the Semantic Textual Similarity task at SEM 2013. The Semantic Textual Similarity Core task (STS) computes the degree of semantic equivalence between two sentences where the participant systems will be compared to the manual scores, which range from 5 (semantic equivalence) to 0 (no relation). We combined multiple text similarity measures of varying complexity. The experiments illustrate the different effect of four feature types including direct lexical matching, idf-weighted lexical matching, modified BLEU N-gram matching and named entities matching. Our team submitted three runs during the task evaluation period and they ranked number 11, 15 and 19 among the 90 participating systems according to the official Mean Pearson correlation metric for the task. We also report an unofficial run with mean Pearson correlation of 0.59221 on STS2013 test dataset, ranking as the 3 rd best system among the 90 participating systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Semantic Textual Similarity (STS) task at SEM 2013 is to measure the degree of semantic equivalence between pairs of sentences as a graded notion of similarity. Text Similarity is very important to many Natural Language Processing applications, like extractive summarization (Salton et al., 1997) , methods for automatic evaluation of machine translation (Papineni et al., 2002) , as well as text summarization (Lin and Hovy, 2003) . In Text Coherence Detection (Lapata and Barzilay, 2005) , sentences are linked together by similar or related words. For Word Sense Disambiguation, researchers (Banerjee and Pedersen, 2003; Guo and Diab, 2012a) introduced a sense similarity measure using the sentence similarity of the sense definitions. In this paper we illustrate the different effect of four feature types including direct lexical matching, idf-weighted lexical matching, modified BLEU N-gram matching and named entities matching. The rest of this paper will proceed as follows, Section 2 describes the four text similarity features used. Section 3 illustrates the system description, data resources as well as Feature combination. Experiments and Results are illustrated in section 4. then we report our conclusion and future work.",
"cite_spans": [
{
"start": 279,
"end": 300,
"text": "(Salton et al., 1997)",
"ref_id": "BIBREF5"
},
{
"start": 359,
"end": 382,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF7"
},
{
"start": 415,
"end": 435,
"text": "(Lin and Hovy, 2003)",
"ref_id": "BIBREF2"
},
{
"start": 466,
"end": 493,
"text": "(Lapata and Barzilay, 2005)",
"ref_id": "BIBREF8"
},
{
"start": 598,
"end": 627,
"text": "(Banerjee and Pedersen, 2003;",
"ref_id": "BIBREF12"
},
{
"start": 628,
"end": 648,
"text": "Guo and Diab, 2012a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our system measures the semantic textual similarity between two sentences through a number of matching features which should cover four main dimensions: i) Lexical Matching ii) IDF-weighted Lexical Matching iii) Contextual sequence Matching (Modified BLEU Score), and iv) Named Entities Matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Similarity Features",
"sec_num": "2"
},
{
"text": "First we introduce the alignment technique used. For a sentence pair {s1, s2} matching is done in each direction separately to detect the sub-sentence of s1 matched to s2 and then detect the subsentence of s2 matched to s1. For each word wi in s1 we search for its match wj in s2 according to matching features. S1: w0 w1 w2 w3 w4 \u2026... wi \u2026... wn S2: w0 w1 w2 w3 w4 \u2026.......wj \u2026......... wm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Similarity Features",
"sec_num": "2"
},
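{
"text": "A minimal sketch of the directional matching just described, assuming a generic match(wi, wj) predicate that tests exact, stemmed, or synonym equality; the function and variable names are illustrative, not the actual implementation.\ndef align_directional(s1_tokens, s2_tokens, match):\n    # For each word in s1, greedily pick the first unused matching word in s2.\n    used = set()\n    alignment = {}\n    for i, wi in enumerate(s1_tokens):\n        for j, wj in enumerate(s2_tokens):\n            if j not in used and match(wi, wj):\n                alignment[i] = j\n                used.add(j)\n                break\n    return alignment\nRunning this in both directions gives the sub-sentence of s1 matched to s2 and the sub-sentence of s2 matched to s1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Similarity Features",
"sec_num": "2"
},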
{
"text": "In this feature we handle the two sentences as bags of words to be matched using three types of matching, given that all stop words are cleaned out before matching: I) Exact word matching. II) Stemmed word matching: I used Porter Stemming algorithm (M.F. Porter, 1980) in matching, where it is a process for removing the commoner morphological and inflectional endings from words in English. Stemming will render inflections like \"requires, required, requirements, ...\" to \"requir\" so they can be easily matched III) Synonyms matching: we used a corpus based dictionary of 58,921 entries and their equivalent synonyms. The next section describes how we automatically generated this language resource.",
"cite_spans": [
{
"start": 249,
"end": 268,
"text": "(M.F. Porter, 1980)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Matching:",
"sec_num": "2.1"
},
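{
"text": "A minimal sketch of the three matching criteria, assuming NLTK's PorterStemmer and a corpus-derived synonyms dictionary held in a plain dict; names and data layout are illustrative assumptions.\nfrom nltk.stem import PorterStemmer\n\nstemmer = PorterStemmer()\n\ndef lexical_match(w1, w2, synonyms):\n    # synonyms: dict mapping a word to the set of its corpus-derived synonyms.\n    if w1 == w2:                              # I) exact word matching\n        return True\n    if stemmer.stem(w1) == stemmer.stem(w2):  # II) stemmed word matching\n        return True\n    return w2 in synonyms.get(w1, ())         # III) synonyms matching",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Matching:",
"sec_num": "2.1"
},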
{
"text": "We used the three matching criteria used in Lexical Matching after weighting them with Inverse-Document-Frequency. we applied the aggregation strategy by Mihalcea et al. (2006) : The sum of the idf-weighted similarity scores of each word with the best-matching counterpart in the other text is computed in both directions. For a sentence pair s1, s2, if s1 consists of m words {w0, w1, \u2026., w(m-1)} and s2 consists of n words {w0, w1, \u2026., w(n-1)} ,after cleaning stop words from both, and the matched words are \"@Matched_word_List\" of \"k\" words, then",
"cite_spans": [
{
"start": 154,
"end": 176,
"text": "Mihalcea et al. (2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IDF-weighted Lexical Matching",
"sec_num": "2.2"
},
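{
"text": "A plausible form of the directional idf-weighted score, sketched here under the assumption of binary word matching in the spirit of the Mihalcea et al. (2006) aggregation; the original equation may instead weight each word by a graded similarity: $$Score_{idf}(s1 \\rightarrow s2) = \\frac{\\sum_{w \\in Matched\\_word\\_List} idf(w)}{\\sum_{w \\in s1} idf(w)}$$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IDF-weighted Lexical Matching",
"sec_num": "2.2"
},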
{
"text": "We used a modified version of Bleu score to measure n-gram sequences matching, where for sentence pair s1, s2 we align the matched words between them (through exact, stem, synonyms match respectively). Bleu score as presented by (K. Papineni et al., 2002) is an automated method for evaluating Machine Translation. It compares ngrams of the candidate translation with the n-grams of the reference human translation and counts the number of matches. These matches are position independent, where candidate translations with unmatched length to reference translations are penalized with Sentence brevity penalty. This helps in measuring n-gram similarity in sentences structure. We define \"matched sequence\" of a sentence S1 as the sequence of words {wi, wi+1, wi+2, \u2026.. wj}, where wi, and wj are the first and last words in sentence S1 that are matched with words in S2. For example in sentence pair S1, S2 We measure the Bleu score such that: Bleu{S1, S2} = &BLEU(S1_stemmed,\"Matched sequence of S2\"); Bleu{S2, S1} = &BLEU(S2_stemmed,\"Matched sequence of S1\"); The objective of trimming the excess words outside the \"Matched Sequence\" range, before matching is to make use of the Sentence brevity penalty in case sentence pair S1, S2 may be not similar but having matched lengths.",
"cite_spans": [
{
"start": 233,
"end": 255,
"text": "Papineni et al., 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Sequence Matching (Modified BLEU score)",
"sec_num": "2.3"
},
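{
"text": "A sketch of the modified BLEU computation, using NLTK's sentence_bleu as a stand-in for the BLEU implementation actually used; the trimming step and all names are illustrative assumptions.\nfrom nltk.translate.bleu_score import sentence_bleu, SmoothingFunction\n\ndef modified_bleu(s1_stemmed, s2_stemmed, matched_span_s2):\n    # matched_span_s2: (first, last) indices of the first and last word of S2 matched to S1.\n    first, last = matched_span_s2\n    trimmed_s2 = s2_stemmed[first:last + 1]  # keep only the matched sequence of S2\n    smooth = SmoothingFunction().method1\n    # Score S1 against the trimmed S2 so the brevity penalty reflects the matched region only.\n    return sentence_bleu([trimmed_s2], s1_stemmed, smoothing_function=smooth)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Sequence Matching (Modified BLEU score)",
"sec_num": "2.3"
},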
{
"text": "Named entities carry an important portion of sentence semantics. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entities Matching",
"sec_num": "2.4"
},
{
"text": "Sentence1: In Nigeria , Chevron has been accused by the All -Ijaw indigenous people of instigating violence against them and actually paying Nigerian soldiers to shoot protesters at the Warri naval base . Sentence2: In Nigeria , the whole ijaw indigenous showed Chevron to encourage the violence against them and of up to pay Nigerian soldiers to shoot the demonstrators at the naval base from Warri .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entities Matching",
"sec_num": "2.4"
},
{
"text": "The underlined words are Named entities of different types \"COUNTRY, ORG, PEOPLE, LOC, EVENT_VIOLENCE\" which capture the most important information in each sentence. Thus named entities matching is a measure of semantic matching between the sentence pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entities Matching",
"sec_num": "2.4"
},
{
"text": "3 System Description",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entities Matching",
"sec_num": "2.4"
},
{
"text": "All data is tokenized, stemmed, and stop words are cleaned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Resources and Processing",
"sec_num": "3.1"
},
{
"text": "Inverse Document Frequency (IDF) language resource: The document frequency df(t) of a term t is defined as the number of documents in a large collection of documents that contain a term \"t\". Terms that are likely to appear in most of the corpus documents reflect less importance than words that appear in specific documents only. That's why the Inverse Document Frequency is used as a measure of term importance in information retrieval and text mining tasks. We used the LDC English Gigaword Fifth Edition (LDC2011T07) to generate our idf dictionary. LDC Gigaword contains a huge collection of newswire from (afp, apw, cna, ltw, nyt, wpb, and xin). The generated idf resource contains 5,043,905 unique lower cased entries, and then we generated a stemmed version of the idf dictionary contains 4,677,125 entries. The equation below represents the idf of term t where N is the total number of documents in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus based resources:",
"sec_num": null
},
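{
"text": "In its standard form (the paper's exact variant, e.g. any smoothing of df(t), is not stated, so this is an assumption) the equation is: $$idf(t) = \\log \\frac{N}{df(t)}$$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus based resources:",
"sec_num": null
},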
{
"text": "ii. English Synonyms Dictionary: Using the Phrase In our system we used the phrase table of the Direct Translation Model 2 (DTM2) (Ittycheriah and Roukos, 2007) SMT system, where each sentence pair in the training corpus was wordaligned, e.g. using a MaxEnt aligner (Ittycheriah and Roukos, 2005) or an HMM aligner (Ge, 2004) . then Block Extraction step is done. The generated phrase table contains candidate phrase to phrase translation pairs with source-to-target and target-to source translation probabilities. However the open source Moses SMT system (Koehn et al., 2007) For each English Phrase \"e1\" { @ar_phrases = list of Arabic Phrases aligned to \"e\" in the phrase table;",
"cite_spans": [
{
"start": 130,
"end": 160,
"text": "(Ittycheriah and Roukos, 2007)",
"ref_id": "BIBREF6"
},
{
"start": 266,
"end": 296,
"text": "(Ittycheriah and Roukos, 2005)",
"ref_id": null
},
{
"start": 315,
"end": 325,
"text": "(Ge, 2004)",
"ref_id": null
},
{
"start": 556,
"end": 576,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus based resources:",
"sec_num": null
},
{
"text": "For each a (@ar_phrases) { @en_phrases = list of English phrases aligned to \"a\" in the phrase table;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus based resources:",
"sec_num": null
},
{
"text": "For each e2 (@en_phrases) { $Prob(e2\\e1) = Prob(a\\e1)*Prob(e2\\a); } } } can be used in the same way to generate a synonyms dictionary from phrase table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus based resources:",
"sec_num": null
},
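{
"text": "A compact sketch of the pivoting loop above, assuming the two directions of the phrase table are available as nested probability dictionaries; the data layout and names are illustrative, not the DTM2 phrase-table format.\nfrom collections import defaultdict\n\ndef pivot_synonyms(e2a, a2e):\n    # e2a: {e1: {a: P(a|e1)}}, a2e: {a: {e2: P(e2|a)}}\n    syn = defaultdict(lambda: defaultdict(float))\n    for e1, ar_probs in e2a.items():\n        for a, p_a_given_e1 in ar_probs.items():\n            for e2, p_e2_given_a in a2e.get(a, {}).items():\n                syn[e1][e2] += p_a_given_e1 * p_e2_given_a  # marginalise over the Arabic pivot\n    for e1, cands in syn.items():\n        total = sum(cands.values())\n        if total > 0:  # normalise over all generated English synonyms of e1\n            for e2 in cands:\n                cands[e2] /= total\n    return syn",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus based resources:",
"sec_num": null
},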
{
"text": "By applying the steps in figure (1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 34,
"text": "figure (1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus based resources:",
"sec_num": null
},
{
"text": "\u2022 WordNet (Miller, 1995): is a large lexical database of English. Nouns, verbs, adjectives and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept. Synsets are interlinked by means of conceptual-semantic and lexical relations. WordNet groups words together based on their meanings and interlinks not just word forms-strings of letters-but specific senses of words. As a result, words that are found in close proximity to one another in the network are semantically disambiguated. Second, WordNet labels the semantic relations among words. Using WordNet, we can measure the semantic similarity or relatedness between a pair of concepts (or word senses), and by extension, between a pair of sentences. We use the similarity measure described in (Wu and Palmer, 1994) which finds the path length to the root node from the least common subsumer (LCS) of the two word senses which is the most specific word sense they share as an ancestor.",
"cite_spans": [
{
"start": 789,
"end": 810,
"text": "(Wu and Palmer, 1994)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary based resources:",
"sec_num": null
},
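{
"text": "A small sketch of the Wu-Palmer word-level similarity using NLTK's WordNet interface (assuming the WordNet corpus is installed); taking the maximum over sense pairs is our illustrative choice, since the sense-selection strategy is not stated.\nfrom nltk.corpus import wordnet as wn\n\ndef wup_word_similarity(w1, w2):\n    # Maximum Wu-Palmer similarity over all sense pairs of the two words.\n    best = 0.0\n    for syn1 in wn.synsets(w1):\n        for syn2 in wn.synsets(w2):\n            sim = syn1.wup_similarity(syn2)\n            if sim is not None and sim > best:\n                best = sim\n    return best",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dictionary based resources:",
"sec_num": null
},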
{
"text": "The feature combination step uses the precomputed similarity scores. Each of the text similarity features can be given a weight that sets its importance. Mathematically, the text similarity score between two sentences can be formulated using a cost function weighting the similarity features as follows: N.B.: The similarity score according to the features above is considered as a directional score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Combination",
"sec_num": "3.2"
},
{
"text": "Similarity(s1, s2) = [w1*Lexical_Score(s1, s2) + w2*IDF_Lexical_Score(s1, s2) + w3*Modified_BLEU(s1, s2) + w4*NE_Score(s1, s2)] / (w1+w2+w3+w4) Similarity(s2, s1) = [w1*Lexical_Score(s2, s1) + w2*IDF_Lexical_Score(s2, s1) + w3*Modified_BLEU(s2, s1) + w4*NE_Score(s2, s1)] / (w1+w2+w3+w4) Overall_Score = 5/2*[Similarity(s1, s2)+Similarity(s2, s1)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Combination",
"sec_num": "3.2"
},
{
"text": "where w1, w2, w3, w4 are the weights assigned to the similarity features (lexical, idf-weighted, modified_BLEU, and NE_Match features respectively). The similarity score will be normalized over (w1+w2+w3+w4). In our experiments, the weights are tuned manually without applying machine learning techniques. We used both *SEM 2012 training and testing data sets for tuning these weights to get the best feature weighting combination to get highest Pearson Correlation score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Combination",
"sec_num": "3.2"
},
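{
"text": "A direct transcription of the combination above as a sketch; the feature names and dictionary layout are illustrative.\ndef combine(scores, weights):\n    # scores, weights: dicts keyed by feature name ('lexical', 'idf', 'bleu', 'ne').\n    return sum(weights[f] * scores[f] for f in weights) / sum(weights.values())\n\ndef overall_score(dir12, dir21, weights):\n    # dir12, dir21: per-feature directional scores for (s1, s2) and (s2, s1).\n    # 5/2 * [Similarity(s1, s2) + Similarity(s2, s1)] maps the result to the 0-5 scale.\n    return 2.5 * (combine(dir12, weights) + combine(dir21, weights))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Combination",
"sec_num": "3.2"
},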
{
"text": "Our experiments showed that some features are more dominant in affecting the similarity scoring than others. We performed a separate experiment for each of the four feature types to illustrate their effect on textual semantic similarity measurement using direct lexical matching, stemming matching, synonyms matching, as well as (stem+synonyms) matching. Table ( The submitted runs IBM_EG-run2, IBM_EG-run5, IBM_EG-run6 are the three runs with feature weighting and experiment set up that performed best on STS 2012 training and testing data sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 355,
"end": 362,
"text": "Table (",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Submitted Runs",
"sec_num": null
},
{
"text": "Run 2: In this run the word matching was done on exact, and synonyms match only. Stemmed word matching was not introduced in this experiment. we tried the following weighting between similarity feature scores, where we decreased the weight of BLEU scoring feature to 0.5, and increased the idf_Lexical match weight of 3.5. this is because our initial tuning experiments showed that increasing the idf lexical weight compared to BLEU weight gives improved results. The NE matching feature weight was as follows: NE_weight = 1.5* percent of NE word to sentence word count = 1.5* (NE_words_count/Sentence_word_count)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Runs",
"sec_num": null
},
{
"text": "Run 5: In this experiment we introduced Porter stemming word matching, as well as stemmed synonyms matching (after generating a stemmed version of the synonyms dictionary). BLEU score feature was removed from this experiment, while keeping the idf-weight= 3, lexical-weight = 1, and NE-matching feature weight = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Runs",
"sec_num": null
},
{
"text": "Run 6: For this run we kept only IDF-weighted lexical matching feature which proved to be the dominant feature in the previous runs, in addition to Porter stemming word matching, and stemmed synonyms matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Runs",
"sec_num": null
},
{
"text": "Data: the training data of STS 2013 Core task consist of the STS 2012 train and test data. This data covers 5 datasets: paraphrase sentence pairs (MSRpar), sentence pairs from video descriptions (MSRvid), MT evaluation sentence pairs (SMTnews and SMTeuroparl) and gloss pairs (OnWN).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Submitted Runs",
"sec_num": null
},
{
"text": "System outputs will be evaluated according to the official scorer which computes weighted Mean Pearson Correlation across the evaluation datasets, where the weight depends on the number of pairs in each dataset. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Training Data",
"sec_num": null
},
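{
"text": "A sketch of the weighted mean Pearson metric as described above, using scipy.stats.pearsonr; the input layout is an assumption, not the official scorer's interface.\nfrom scipy.stats import pearsonr\n\ndef weighted_mean_pearson(dataset_outputs):\n    # dataset_outputs: list of (system_scores, gold_scores) pairs, one per evaluation dataset.\n    total_pairs = sum(len(gold) for _, gold in dataset_outputs)\n    weighted = sum(pearsonr(sys_scores, gold)[0] * len(gold) for sys_scores, gold in dataset_outputs)\n    return weighted / total_pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Training Data",
"sec_num": null
},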
{
"text": "The best configuration of our system was IBM_EG-run6 which was ranked #11 for the evaluation metric Mean (r = 0.5502) when submitted during the task evaluation period . Run6 as illustrated before was planned to measure idfweighted lexical matching feature only, over Porter stemmed, and stemmed synonyms words. However when revising this experiment set up during preparing the paper, after the evaluation period, we found that the English-to-English synonyms table was not correctly loaded during matching, thus skipping synonyms matching feature from this run. So the official result IBM_EG-run6 reports only idf-weighted matching over Porter stemmed bag of words. By fixing this and replicating the experiment IBM_EG-run6-UnOfficial as planned to be, the mean Pearson correlation jumps 4 points (r = 0.59221) which ranks this system as the 3 rd system among 90 submitted systems very slightly below the 2 nd system (only 0.0006 difference on the mean correlation metric). In ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Test Data:",
"sec_num": null
},
{
"text": "One unofficial run was performed after the evaluation submission deadline due to the tight schedule of the evaluation. This experiment introduces the effect of WordNet Wu and Palmer similarity measure on the configuration of Run5 (Porter stemming word matching, with synonyms matching, zero weight for BLEU score feature, while keeping the idf-weight= 3, lexical-weight = 1, and NE-matching feature weight = 1) From the results in Table (6) it is clear that Corpus based synonyms matching outperforms dictionarybased WordNet matching over SEM2013 testset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results of un-official run:",
"sec_num": null
},
{
"text": "We proposed an unsupervised approach for measuring semantic textual similarity based on Lexical matching features (with porter stemming matching and synonyms matching), idf-Lexical matching features, Ngram Frquency (Modified BLEU) matching feature, as well as Named Entities matching feature combined together with a weighted cost function. Our experiments proved that idf-weighted Lexical matching in addition to porter stemming and synonyms-matching features perform best on most released evaluation datasets. Our best system officially ranked number 11 among 90 participating system reporting a Pearson Mean correlation score of 0.5502. However our best experimental set up \"idf-weighted Lexical matching in addition to porter stemming and synonyms-matching\" reported in an unofficial run a mean correlation score of 0.59221 which ranks the system as number 3 among the 90 participating systems. In our future work we intend to try some machine learning algorithms (like AdaBoost for example) for weighting our similarity matching feature scores. Also we plan to extend the usage of synonyms matching from the word level to the ngram phrase matching level, by modifying the BLEU Score N-gram matching function to handle synonym phrases matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We would like to thank the reviewers for their constructive criticism and helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Theory of Parsing, Translation and Compiling",
"authors": [
{
"first": "Alfred",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1972,
"venue": "Computing Reviews",
"volume": "1",
"issue": "11",
"pages": "503--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfred. V. Aho and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation and Compiling, volume 1. Prentice-Hall, Englewood Cliffs, NJ. American Psychological Association. 1983. Publications Manual. American Psychological Association, Washington, DC. Association for Computing Machinery. 1983. Computing Reviews, 24(11):503-512.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic evaluation of summaries using n-gram cooccurrence statistics",
"authors": [
{
"first": "C",
"middle": [
"Y"
],
"last": "Lin",
"suffix": ""
},
{
"first": "E",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Human Language Technology Conference (HLT-NAACL 2003)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Y. Lin and E. H. Hovy. 2003. Automatic evaluation of summaries using n-gram co- occurrence statistics. In Proceedings of Human Language Technology Conference (HLT-NAACL 2003), Edmonton, Canada, May.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Improved statistical machine translation using paraphrases",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of HLT-NAACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Algorithms on Strings, Trees and Sequences",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Gusfield",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Gusfield. 1997. Algorithms on Strings, Trees and Sequences. Cambridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Term weighting approaches in automatic text retrieval",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 1997,
"venue": "Readings in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Salton and C. Buckley. 1997. Term weighting approaches in automatic text retrieval. In Readings in Information Retrieval. Morgan Kaufmann Publishers, San Francisco, CA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Direct translation model 2",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ittycheriah, A. and Roukos, S. (2007). Direct translation model 2. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pp.57-64, Rochester, NY.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Cambridge, UK.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic evaluation of text coherence: Models and representations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 19th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Lapata and R. Barzilay. 2005. Automatic evaluation of text coherence: Models and representations. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edinburgh.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Statistical Phrase-Based Translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. Of the Human Language Technology Conference, HLTNAACL' 2003",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Koehn, F.J. Och, and D. Marcu. 2003. Statistical Phrase-Based Translation. Proc. Of the Human Language Technology Conference, HLTNAACL' 2003, May.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u02c7rej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL 2007 Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u02c7rej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL 2007 Demo and Poster Sessions, pages 177-180.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Corpus-based and knowledge-based measures of text semantic similarity",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the American Association for Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mihalcea , C. Corley, and C. Strapparava 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the American Association for Artificial Intelligence. (Boston, MA).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Extended gloss overlaps as a measure of semantic relatedness",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 18th International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "805--810",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Ted Pedersen. 2003. Extended gloss overlaps as a measure of semantic relatedness. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, pages 805-810.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "WordNet::Similarity -Measuring the Relatedness of Concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Fifth Annual Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi, 2004, WordNet::Similarity -Measuring the Relatedness of Concepts. Proceedings of Fifth Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-2004).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Verb semantics and lexical selection",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "32nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, Z., and Palmer, M. 1994. Verb semantics and lexical selection. In 32nd Annual Meeting of the Association for Computational Linguistics, 133-138.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning the latent semantics of a concept from its definition",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Guo and Mona Diab. 2012a. Learning the latent semantics of a concept from its definition. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Figure (1) shows the steps:Figure(1) English phrase-to-phrase synonyms generation from E2A phrase table.",
"num": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>an Arabic-to-English Direct</td></tr><tr><td>Translation Model, we generated English-to-</td></tr><tr><td>English phrase table using the double-link of</td></tr><tr><td>English-to-Arabic and Arabic-to-English</td></tr><tr><td>phrase translation probabilities over all pivot</td></tr><tr><td>Arabic phrases. Then English-to-English</td></tr><tr><td>translation probabilities are normalized over</td></tr><tr><td>all generated English synonyms. (Chris</td></tr><tr><td>Callison-Burch et al, 2006) used a similar</td></tr><tr><td>technique to generate paraphrases to improve</td></tr><tr><td>their SMT system.</td></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"text": "): a) English phrase-to-phrase synonymstable (or English-to-English phrase table), by applying the steps in a generic way. b) English word-to-word synonyms table, by limiting the generation over English single word phrases. For example, to get all possible synonyms of the English word \"bike\", we used all the Arabic phrases that are aligned to \"bike\" in the phrase table { \u202b\u0627\u0644\u0628\u0633\u0643\u0644\u062a\u202c | bike | then we get all the English words in the phrase table aligned to these Arabic translations { \u202b\u062f\u0631\u0627\u062c\u0629\u202c",
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">\u202b\u0627\u0644\u0628\u0633\u0643\u0644\u062a\u202c \u202b\u0627\u0644\u0628\u0633\u0643\u0644\u064a\u062a,\u202c</td><td>,</td><td colspan=\"2\">\u202b\u0627\u0644\u062f\u0631\u0627\u062c\u0627\u062a\u202c</td><td>,</td><td>\u202b\u062f\u0631\u0627\u062c\u0629\u202c</td><td>},</td></tr><tr><td colspan=\"7\">P: 1905645 14 0.0142582 0.170507 | \u202b\u062f\u0631\u0627\u062c\u0629\u202c | bike |</td></tr><tr><td colspan=\"7\">P: 1910841 25 0.0262152 0.221198 | \u202b\u0627\u0644\u062f\u0631\u0627\u062c\u0627\u062a\u202c | bike |</td></tr><tr><td colspan=\"7\">P: 2127826 4 0.0818182 0.0414747 | \u202b\u0627\u0644\u0628\u0633\u0643\u0644\u064a\u062a\u202c | bike |</td></tr><tr><td colspan=\"7\">P: 2396796 2 0.375 0.0138249 | ,</td></tr><tr><td colspan=\"2\">\u202b\u0627\u0644\u0628\u0633\u0643\u0644\u062a\u202c \u202b\u0627\u0644\u0628\u0633\u0643\u0644\u064a\u062a,\u202c</td><td>,</td><td colspan=\"3\">\u202b\u0627\u0644\u062f\u0631\u0627\u062c\u0627\u062a\u202c</td><td>}</td></tr><tr><td colspan=\"7\">This results in an English word-to-word synonyms</td></tr><tr><td colspan=\"7\">list for the word \"bike\" like this:</td></tr><tr><td>bike:</td><td/><td/><td/><td/><td/></tr><tr><td>motorcycle</td><td colspan=\"6\">0.365253185010659</td></tr><tr><td colspan=\"4\">bicycle 0.198195663512781</td><td/><td/></tr><tr><td colspan=\"4\">cycling 0.143290354808692</td><td/><td/></tr><tr><td colspan=\"7\">motorcycles 0.0871686646772204</td></tr><tr><td>bicycles</td><td colspan=\"6\">0.0480779974950311</td></tr><tr><td>cyclists</td><td colspan=\"6\">0.0317670845504069</td></tr><tr><td colspan=\"7\">motorcyclists 0.0304152910853553</td></tr><tr><td colspan=\"4\">cyclist 0.0278451740161998</td><td/><td/></tr><tr><td colspan=\"4\">riding 0.0215366691148431</td><td/><td/></tr><tr><td>motorbikes</td><td colspan=\"6\">0.0148697281155676</td></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"text": "1) reports the mean Pearson correlation results of these experiments on STS2012-test dataset",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Direct Stem</td><td>Synonyms</td><td>Synonyms +</td></tr><tr><td/><td/><td>only</td><td>only</td><td>Stem</td></tr><tr><td>NE</td><td colspan=\"3\">0.303 0.297 0.306</td><td>0.304</td></tr><tr><td colspan=\"2\">BLEU 0.439</td><td colspan=\"2\">0.446 0.469</td><td>0.453</td></tr><tr><td colspan=\"2\">Lexical 0.59</td><td colspan=\"2\">0.622 0.611</td><td>0.624</td></tr><tr><td>IDF</td><td>0.488</td><td colspan=\"2\">0.632 0.504</td><td>0.634</td></tr><tr><td colspan=\"5\">Table (1) reports the mean Pearson score for NE,</td></tr><tr><td colspan=\"5\">BLEU, Lexical, and idf-weighted matching features</td></tr><tr><td colspan=\"4\">respectively on STS2012-test dataset.</td></tr></table>",
"html": null,
"num": null
},
"TABREF5": {
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">4), we report the official results achieved on</td></tr><tr><td colspan=\"4\">STS 2013 test data. While table (5), reports the</td></tr><tr><td colspan=\"4\">unofficial results achieved after activating the</td></tr><tr><td colspan=\"4\">synonyms matching feature in IBM_EG-run6</td></tr><tr><td colspan=\"4\">(unofficial) and comparing this run to the best two</td></tr><tr><td colspan=\"2\">reported systems.</td><td/><td/></tr><tr><td/><td>IBM_EG-</td><td>IBM_EG-</td><td>IBM_EG-</td></tr><tr><td/><td>run2</td><td>run5</td><td>run6</td></tr><tr><td colspan=\"2\">headlines 0.7217</td><td>0.7410</td><td>0.7447</td></tr><tr><td>OnWN</td><td>0.6110</td><td>0.5987</td><td>0.6257</td></tr><tr><td>FNWN</td><td>0.3364</td><td>0.4133</td><td>0.4381</td></tr><tr><td>SMT</td><td>0.3460</td><td>0.3426</td><td>0.3275</td></tr><tr><td>Mean</td><td>0.5365</td><td>0.5452</td><td>0.5502</td></tr><tr><td>Rank</td><td>#19</td><td>#15</td><td>#11</td></tr></table>",
"html": null,
"num": null
},
"TABREF8": {
"text": "Table (6) Un-Official Result on STS 2013 test datasets.",
"type_str": "table",
"content": "<table><tr><td colspan=\"3\">Table (6) reports the unofficial result achieved on</td></tr><tr><td colspan=\"3\">STS 2013 test data, compared to the Official run</td></tr><tr><td colspan=\"2\">IBM_Eg-run5.</td><td/></tr><tr><td/><td>Unofficial-Run</td><td>IBM_EG-run5</td></tr><tr><td>Mean</td><td>0.52682</td><td>0.5452</td></tr><tr><td colspan=\"2\">headlines 0.70018</td><td>0.7410</td></tr><tr><td>OnWN</td><td>0.60371</td><td>0.5987</td></tr><tr><td>FNWN</td><td>0.35691</td><td>0.4133</td></tr><tr><td>SMT</td><td>0.33875</td><td>0.3426</td></tr></table>",
"html": null,
"num": null
}
}
}
}