{
"paper_id": "L18-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:37:02.202780Z"
},
"title": "A Large Resource for Patterns of Verbal Paraphrases",
"authors": [
{
"first": "Octavian",
"middle": [],
"last": "Popescu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yorktown Heights",
"location": {
"region": "US"
}
},
"email": "[email protected]"
},
{
"first": "Ngoc",
"middle": [],
"last": "Phuoc",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yorktown Heights",
"location": {
"region": "US"
}
},
"email": ""
},
{
"first": "An",
"middle": [],
"last": "Vo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yorktown Heights",
"location": {
"region": "US"
}
},
"email": ""
},
{
"first": "Vadim",
"middle": [],
"last": "Sheinin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yorktown Heights",
"location": {
"region": "US"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Paraphrases play an important role in natural language understanding, especially because there are fluent jumps between hidden paraphrases in a text. For example, even to get to the meaning of a simple dialog like I bought a computer. How much did the computer cost? involves quite a few steps. A computational system may actually have a huge problem in linking the two sentences as their connection is not overtly present in the text. However, it becomes easier if it has access to the following paraphrases: [HUMAN] buy [ARTIFACT] \u21d0\u21d2 [HUMAN] pay [PRICE] for [ARTIFACT] \u21d0\u21d2 [ARTIFACT] cost [HUMAN] [PRICE], and also to the information that I IsA [HUMAN] and computer IsA [ARTIFACT]. In this paper we introduce a resource of such paraphrases that was extracted by investigating large corpora in an unsupervised manner. The resource contains tens of thousands of such pairs and it is available for academic purposes.",
"pdf_parse": {
"paper_id": "L18-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Paraphrases play an important role in natural language understanding, especially because there are fluent jumps between hidden paraphrases in a text. For example, even to get to the meaning of a simple dialog like I bought a computer. How much did the computer cost? involves quite a few steps. A computational system may actually have a huge problem in linking the two sentences as their connection is not overtly present in the text. However, it becomes easier if it has access to the following paraphrases: [HUMAN] buy [ARTIFACT] \u21d0\u21d2 [HUMAN] pay [PRICE] for [ARTIFACT] \u21d0\u21d2 [ARTIFACT] cost [HUMAN] [PRICE], and also to the information that I IsA [HUMAN] and computer IsA [ARTIFACT]. In this paper we introduce a resource of such paraphrases that was extracted by investigating large corpora in an unsupervised manner. The resource contains tens of thousands of such pairs and it is available for academic purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When two phrases can be interchanged in a text without altering the meaning of the whole, we observe a paraphrasing relationship. Paraphrasing is a fundamental property of natural languages, and it is normally recognized as \"saying the same thing with different words\". Proposing a paraphrasing relation is a hard task for natural language processing (NLP) systems. The task of recognition of paraphrases was proposed in various SemEval competitions (Butnariu et al., 2009; Mihalcea et al., 2010; Specia et al., 2012; Xu et al., 2015) as an independent task or as a part of larger tasks like semantic similarity, textual entailment, etc. The resource we created is instrumental for all these tasks. Some paraphrases are made out of nominal phrases that contain only ostensible nouns and their adjectival determiners, like in a fourteen year old boy \u21d0\u21d2 a teenager. Another type of paraphrases are the ones that involve a verbal constituent, like abandon a kid \u21d0\u21d2 ignore parental obligations. This later type includes nominalizations, that is, even if the phrase has only noun constituents, at least one of them is a noun coming from a verb, like abandoning a kid. One major difference between nominal vs. verbal paraphrases is that the first ones are basically context independent, that is they can be substituted in a text directly, while the second are context dependent, their replacement in a sentence requires changes in the syntactic and semantic role of their complements and adjuncts. The resource we compiled contains a list of pairs, each member being centered on a verb and its arguments. A pair is a valid verbal paraphrase relation given a certain context that is represented via types associated with each argument. For a given sentence that contains only one verb phrase, we can extract a set of paraphrases. 
For example, in Figure 1 , for the sentence I pay 1,200 for a laptop from Bestbuy, we present a few valid verbal paraphrase extracted from the resource. In this example X, Y, Z, U are variable representing the head of syntactic components. [MONEY] , [HU-MAN] , [ORG] , [ARTIFACT] represent features that the variable must carry in order for a pair to be a valid paraphrase.",
"cite_spans": [
{
"start": 450,
"end": 473,
"text": "(Butnariu et al., 2009;",
"ref_id": "BIBREF0"
},
{
"start": 474,
"end": 496,
"text": "Mihalcea et al., 2010;",
"ref_id": null
},
{
"start": 497,
"end": 517,
"text": "Specia et al., 2012;",
"ref_id": null
},
{
"start": 518,
"end": 534,
"text": "Xu et al., 2015)",
"ref_id": null
},
{
"start": 2064,
"end": 2071,
"text": "[MONEY]",
"ref_id": null
},
{
"start": 2074,
"end": 2082,
"text": "[HU-MAN]",
"ref_id": null
},
{
"start": 2085,
"end": 2090,
"text": "[ORG]",
"ref_id": null
},
{
"start": 2093,
"end": 2103,
"text": "[ARTIFACT]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1839,
"end": 1848,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The verbal paraphrases occurring in this resource are patterns of verbal phrases, that is, they represent a generalization over various real instances in a text. The names of the features occurring on different syntactic slots in a paraphrase pattern are unimportant, but the class of the words that define the respective feature is important.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "One of the main ideas for the acquisition and recognition of verbal paraphrases was introduced in a seminal paper (Lin and Pantel, 2001) . At the core of this approach lies the fact that paraphrases occur in the same context. A statistical approach based on the mutual information measure can filter out pairs of paraphrases from a given corpora. However, this approach cannot solve two important problems: first, it is not the words by themselves that make two expressions paraphrases but, it is actually the role these words play inside the whole sentence; second it is not clear how the complements and adjuncts are aligned between the pairs. Even if a word is very frequent, like I or you, it is the feature [HUMAN] carried by both that is actually relevant for the meaning of the verbal phrases. The second problem is very challenging, as the same type of constituent can appear in different syntactic positions in the two expressions. For example, the adjunct [SHOP] [PRICE] \". In this paper we describe a technique able to cope with these problems which lead to the building of a resource of pattern paraphrases.",
"cite_spans": [
{
"start": 114,
"end": 136,
"text": "(Lin and Pantel, 2001)",
"ref_id": "BIBREF5"
},
{
"start": 966,
"end": 972,
"text": "[SHOP]",
"ref_id": null
},
{
"start": 973,
"end": 980,
"text": "[PRICE]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The technique to extract pattern paraphrases is driven by the idea behind chain clarifying relationships (see among others (Popescu and Magnini, 2007; Kawara et al., 2014; Popescu, 2013; Popescu et al., 2014) ). A chain clarifying relationship holds between the components of a verbal phrase if there is a unique combinations of senses that is legitimate. For example, in I saw the river's bank. vs. I saw a problem the verb see has two different meaning, perceive by sight vs. to understand. Also, bank has two meanings too, sloping land vs. financial institution, and problem has two meanings as well: state of difficulty, question raised. The combination of senses perceive by sight a state of difficulty is not legitimate, and neither understand a financial institutions is. In fact, in the sentence I saw the river bank, river imposes the sloping land reading to bank, which in turn imposes the perceive by light meaning on the verb see.",
"cite_spans": [
{
"start": 123,
"end": 150,
"text": "(Popescu and Magnini, 2007;",
"ref_id": "BIBREF12"
},
{
"start": 151,
"end": 171,
"text": "Kawara et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 172,
"end": 186,
"text": "Popescu, 2013;",
"ref_id": "BIBREF14"
},
{
"start": 187,
"end": 208,
"text": "Popescu et al., 2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Paraphrases",
"sec_num": "3."
},
{
"text": "That is why we talk about a chain clarifying relationshipwords trigger the sense of other words in a chain like relation, as long as the words are components of phrases that have only one combination of senses possible. The chain clarification relation is not defined by words, which are just instances of lexical units bearing certain features. In the example above, any word which is defined by the [PHYSICAL OBJECT] feature imposes the meaning perceive by light to the verb see. From this point of view, both apple and book have the same role, as both are carrying the [PHYSICAL OBJECT] feature. However, this similarity is restricted to the chain clarifying relationship for the verb see. While apple and book are antagonistic with respect to the verb devour as they impose two different chain clarifications for this verb, namely eat vs. read avidly. Pattern paraphrases are pairs of chain clarifying relationships. The meaning of the whole verbal phrase is preserved, thereby creating a paraphrase relationship, by the fact that the same meaning of the verb and the same features are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pattern Paraphrases",
"sec_num": "3."
},
{
"text": "Large Corpora",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Pattern Paraphrases from",
"sec_num": "4."
},
{
"text": "The first step in the unsupervised extraction of pattern paraphrases is to consider a large corpus that is already parsed. We used Gigaword, LDC2012T21 (Napoles et al., 2012) . For each verb, we extracted the verbal dependents. Due to parser errors, there are many such dependency paths that are noise. To filter them out we used COMLEX, http://nlp.cs.nyu.edu/comlex/, (Grishman et al., 1994) . In the case where the direct object was governing a prepositional phrase, this prepositional phrase was included in the dependency path. In Figure 2 we see an example of such dependencies: the nsubj, dobj, iobj, prep * is the head word of the nominal group having the respective role in the dependency path, v marks the verb. As can be seen in this example, we also considered the partial paths, so the same sentence may lead to several instances of paths. We further removed low frequency verbs, low frequency paths so that from an initial 1, 244, 793, 787 paths we filtered out a large number of them and we arrived to 391, 410, 259 paths that represent the closest approximation to a verb sub-categorization frame we could get. These paths contain 7, 922, 730 nouns in different syntactic positions and 25,812 verbs, which lead to 487,703 verbal phrases. These paths represent the input to a feed forward neural network that predicts the similarity of context. In a sense, we implemented a generalization of the original Lin algorithm that finds the dependencies paths that have the most similar context. From another point of view, we could think of the model created by this NN as dependency paths embedding. See Figure 3 . e focused primarily on verbal groups, where a verbal group is defined by the following regular expression over dependency paths:",
"cite_spans": [
{
"start": 152,
"end": 174,
"text": "(Napoles et al., 2012)",
"ref_id": "BIBREF10"
},
{
"start": 369,
"end": 392,
"text": "(Grishman et al., 1994)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 535,
"end": 543,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1613,
"end": 1621,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Extracting Sub-categorization Framework for Verbal Phrases",
"sec_num": "4.1."
},
{
"text": "[sbj] + [obj|objprep]+[iobj]+[prepP |] * [prt]+v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Sub-categorization Framework for Verbal Phrases",
"sec_num": "4.1."
},
{
"text": "where sbj is the subject, obj is the object, obj prep matches the object and its governed preposition, if any, iobj is the indirect object, prepP is the attached prepositional group with its head, prt matches particles. For example, the following fragments of the dependency paths are matched by the above regular expression: putprt upwith, putobj questionon, john sbjwalkto store, leaf sbjtouchhim objon f ace. The obtained model cannot be used directly to predict paraphrasing, but its output represents a large list of candidates. The number of candidate pairs is 193, 628, 633. However, most of these pairs do not make it after the next filtering step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Sub-categorization Framework for Verbal Phrases",
"sec_num": "4.1."
},
{
"text": "In order to find the pattern paraphrases we need to find classes of words that are common between two candidates. However, due to the noise, we cannot get an accurate system of classes. Rather, we implemented a bootstrapping approach. For this, we used Zipf's law: the ambiguity of words is inversely proportional to its frequency rank. We started from verbs that according to WordNet are non ambiguous, therefore they have just one sense. We also considered linking verbs, make, get, have, be etc. together with their direct object and the propositional group, like make way for in Figure 4 . These are mono-sense expressions. Then, we considered only the high probability paraphrases for these, which contain the more ambiguous verbs. We keep in separate classes the verbs from the later category, those multi-sense expressions, according to the mono-sense verb they paraphrase. If a multi-sense verbal expression occurs with two different mono-senses in a paraphrase relationship, then it is discarded. This bootstrapping process continues till we reach the most ambiguous verbs. In Figure 4 we show an example of the bootstrapping process. The make way for is a mono-sense verbal phrase, unlike create or pave. But the fact that at step 1 we determined that make way for and create, pave are valid candidates for paraphrasing leads to the creation of a cluster inside all the occurrences of create, and a cluster inside pave. The same happens for very ambiguous verbs like create or accommodate. All the occurrences inside this cluster can be paraphrased via MAKE WAY FOR. At the next step we will compute a precise contextual definition of these clusters.",
"cite_spans": [],
"ref_spans": [
{
"start": 583,
"end": 591,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 1086,
"end": 1092,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Boostrapping from Mono-sense Verbal Phrases",
"sec_num": "4.2."
},
{
"text": "The best way to find a set of features would be to have the agentive subject for each verb, like buyer for buy, with its preferred adjuncts in the set of paraphrases extracted from corpus. However, this kind of constructions are hardly present in a news corpus, as a sentence like buyer buys products is never used. We need to build the features for representing the pattern paraphrases in a bottom up approach, that is by finding the most general words that individualize that cluster vs all other clusters. In order to find the set of features for each verb separately we start from the clusters found at the previous step. Ideally, each cluster corresponds to a distinct sense of the verb. Inside each cluster, by considering the set of respective paraphrases, we compute the mutual information for each syntactic slot together with the word occurring in that syntactic slot, and rank them. At the top of this ranking we find the best representative words for that meaning, together with their syntactic functions. in the case of agentive verbs, we compute the Lin distance (Lin, 1998) on Wordnet (Miller, 1995) between it and the set of words occurring in that syntactic position, and we select the closest ones, for example nsubj customer , nsubj client, nsubj buyer are the winners for verb buy. So, like in Figure 1 , the paradigmatic set for variable X denoting the subject position is formed by these. Now, on the basis of the mutual information computed above, we find the most likely complements and their closest neighbors according to the Lin measure. The next step is to generalize the most likely fillers of verbal phrases as much as possible. This was carried out using the hypernym function from WordNet via SUMO ontology (Niles and Pease, 2003) . Each word is replaced by its direct hypernym as long as the newly created form is not found in two clusters. In Figure 5 we present schematically this generalization process for three classes for the verb move. 
The three cluster identified at the previous steps, C1, C2 and C3 have different fillers for subject and object position respectively. The process of feature generalization is carried out as long as the obtained form of the pattern stay within the original cluster, that is there is no form that exist in two clusters at any time. For example , the first cluster and the third cluster in Figure 5 collide on object position, so the generalization this syntactic position stops shortly, while for for subject position it can go on further.",
"cite_spans": [
{
"start": 1077,
"end": 1088,
"text": "(Lin, 1998)",
"ref_id": "BIBREF6"
},
{
"start": 1100,
"end": 1114,
"text": "(Miller, 1995)",
"ref_id": "BIBREF9"
},
{
"start": 1739,
"end": 1762,
"text": "(Niles and Pease, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 1314,
"end": 1322,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1877,
"end": 1885,
"text": "Figure 5",
"ref_id": "FIGREF4"
},
{
"start": 2364,
"end": 2372,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Finding the Set of Features",
"sec_num": "4.3."
},
{
"text": "The first observation is that the set of paraphrases generated by the Lin algorithm with class embedding is very accurate when the ambiguity of the target word is low and the number of occurrences is high. In this case the noise in classification is as low as it could be and thus the class context describes precisely the correct usage of the target word. The second observation is that class embedding preserves the meaning of the verbal group, so the semantic similarity between the set of correct paraphrases must be very high. The third observation is that the senses of the verbal group are paraphrased differently by using class embedding and thus a void intersection of class embedding indicates that the set of candidate paraphrases are indeed correct paraphrases. These observations suggest the following post filtering over the paraphrase candidate strategy:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seeds -Mono sense and frequent",
"sec_num": "4.4."
},
{
"text": "\u2022 S1 Identify low ambiguity, high frequency verbal groups",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seeds -Mono sense and frequent",
"sec_num": "4.4."
},
{
"text": "\u2022 S2 Consider their set of candidate paraphrase;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seeds -Mono sense and frequent",
"sec_num": "4.4."
},
{
"text": "\u2022 3 Find the subset that minimizes the semantic distance",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seeds -Mono sense and frequent",
"sec_num": "4.4."
},
{
"text": "\u2022 S4 Consider the candidate paraphrase for each verbal group in this subset",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seeds -Mono sense and frequent",
"sec_num": "4.4."
},
{
"text": "\u2022 S5 Retain only the paraphrases that are common in these candidate sets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seeds -Mono sense and frequent",
"sec_num": "4.4."
},
{
"text": "\u2022 S6 Repeat step 1 for the verbal groups in the retained paraphrase until the semantic distance is below a fixed threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seeds -Mono sense and frequent",
"sec_num": "4.4."
},
{
"text": "At S1 we used WordNet to decide on the ambiguity, and we used a linear combination of Lin distance with Roget similarity at step 3. The algorithm above produces a repository of paraphrases for each verbal groups. We obtained highly accurate paraphrases for 75,000 verbal groups, each verbal group being paraphrased in average with more than 250 paraphrases. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Seeds -Mono sense and frequent",
"sec_num": "4.4."
},
{
"text": "The slot alignment is carried out via a computation of the most probable combination of arguments. This computation takes place in two steps. At the first step we consider the maximally probable configuration for each pair of paraphrases and at the second step we choose from this set the one that is the most probable considering all possible paraphrases into a cluster. Let's consider again Figure 1 . After the generalizing the slots, we have the pairs of the verbal group only, that is, we know that buy, obtain, make a payment for, sell, get from, pay etc. can enter a paraphrasing relationship in the same class for the verb buy (step 1&2 above) and we also know that this class has [CUSTOMER], [ARTIFACT], [ORG] as features for subject, direct object and prepositional group for the verb buy (step 3). The verbs obtain, make payment for, sell, get from, pay have their own syntactic slots for slightly different features, as the generalization process does not necessarily lead to the same features, but to variants of them, for example client vs. customer, or the same feature occurs in several slots. First we employ a chain conditional formula for each pair of paraphrases in order to get the first one-to-one alignment. Given the form of one verbal phrase, we compute the probability that another verbal phrase has a certain realization. For example, we compute the probability that the verb sell has a certain configuration as p(nsubj = . In general, given two paraphrases in the same cluster, with t denoting the target slots, we compute",
"cite_spans": [
{
"start": 713,
"end": 718,
"text": "[ORG]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 393,
"end": 401,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Slot Alignment between Paraphrases",
"sec_num": "4.5."
},
{
"text": "argmax Xt,Yt,Zt p(X t , Y t , Z t ) given the distribution of X c , Y c , Z c , v c ) of source pattern and v t , V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slot Alignment between Paraphrases",
"sec_num": "4.5."
},
{
"text": "c the target and source verbs respectively. For this probability we use the chain formula and we calculate the necessary independent probabilities over the whole corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slot Alignment between Paraphrases",
"sec_num": "4.5."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(X t , Y t , Z t | v t , v c , X c , Y c , Z c ) \u2248 (1) p(v t | v c , X C , Y c , Z c ) * (2) p(X t | v c , v t , X C , Y C , Z C ) * (3) p(Y t | X t , v t , , v t , X C , Y C , Z C ) * (4) p(Z t | Y t , Z t , v c , , v t , X C , Y C , Z C )",
"eq_num": "(5)"
}
],
"section": "Slot Alignment between Paraphrases",
"sec_num": "4.5."
},
{
"text": "(2) is the probability of v t and v c being paraphrase relation when the words of \u2212c are used, (3),(4) and (5) represent the probability of each slot for v t for a given word, given that the v t and v c are in the same cluster of paraphrases. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slot Alignment between Paraphrases",
"sec_num": "4.5."
},
{
"text": "To evaluate paraphrases is a very difficult task, because there is not a gold standard. For limited data, human experts can verify manually the validity of some of them. However, our approach, PP, produces hundreds of different paraphrases for each verbal phrase. We selected a set of 100 verbal expressions among which abandon, be expert on, begin, buy, employ, expect, have address on, manage, plan , produce, solve work on, write etc. We implemented the original DIRT algorithm (Lin and Pantel, 2001) and ran it for these 100 verbs over Giga-Word, call it D100 G. We also considered word2vec with min max average D100 G 40 80 70 S w2V 30 90 76 G w2V 30 90 85 PPDB 60 90 79 PP 70 100 90 Table 1: Precision s-level min max average D100 G 24 33 27 S w2V 37 62 58 G w2V 37 68 59 PPDB 40 70 71 PP 60 100 82 Table 4 : annotator inter-agreement the standard Google news model, call it S w2v, and trained form GigaWord, call it G w2v. Finally, we considered the set of paraphrases from PPDB (Ganitkevitch et al., 2013) . The PPDB has a few levels of accuracy, s which very precise, small coverage, m, the medium precision and coverage, and l that is the large coverage, lower precision. As in PPDB, there are instances of paraphrases at the sentence level, we extracted 100 sentences from GigaWord for each verbal phrases, for a total of 10,000 sentences. We carried out two evaluation experiments. The first one focuses on pairs of verbal paraphrases, without considering any context. The second one considers the context around the verbal phrases in a given sentence and proposes a new paraphrase , if available. This second experiment cannot be carried out for DIRT, or w2v because these approach do not handle the context, so the systems evaluated here are ours, PP, and PPDB.",
"cite_spans": [
{
"start": 481,
"end": 503,
"text": "(Lin and Pantel, 2001)",
"ref_id": "BIBREF5"
},
{
"start": 1027,
"end": 1054,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 621,
"end": 845,
"text": "G 40 80 70 S w2V 30 90 76 G w2V 30 90 85 PPDB 60 90 79 PP 70 100 90 Table 1: Precision s-level min max average D100 G 24 33 27 S w2V 37 62 58 G w2V 37 68 59 PPDB 40 70 71 PP 60 100 82",
"ref_id": "TABREF2"
},
{
"start": 846,
"end": 853,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Experiments",
"sec_num": "5."
},
{
"text": "For the 100 chosen verbs, we put together all the paraphrases created by each approach. For the DIRT and word2vec approaches we have to set a threshold under which two pairs are not consider paraphrases, as these approaches compute a score for each possible pair. We consider the first 10, 40 and 400 pairs, which create three levels of precision which we roughly equate with the s,m,l levels from PPDB. These thresholds were not exactly a random choice, because 10 is the average number existing in VerbOcean (Chklovski and Pantel, 2004) , a paraphrase resource created with DIRT algorithm, while 40 is the standard number of similar phrases returned by word2vec. We also ranked the PP created by our approach based on the probability of occurrence of each pattern. In this way , we could have the same levels of 10, 40 and 400 paraphrases. So we create three distinct test corpora where each verb had the first 10, 40 and 400 returns from our approach, DIRT, word2vec and PPDB, respectively. There are not exactly 4000, 16 000, and 160 000 pairs of paraphrases, as some of the above resources may not have provided the required number of paraphrases. In the end we have three test corpora for the s,m,l level. Our experiment consists in extract-ing random samples from each of the test corpus and in evaluating their accuracy. We can now estimate how many pairs were correct on average for each approach, and how many correct paraphrases were contributed to the pool of correct paraphrases by each approach. We have three annotators, each one checking 2,500 pairs for correctness, out of which 250 were from the s level, 750 from the m level and 1,500 from the l level . Out of these 7, 500 pairs, 900 where common to all three annotators in order to compute their inter-agreement. The first three tables belows summarize the results of the evaluation for each of the s, m, l levels, and the fourth one shows the inter-agreement percentage.",
"cite_spans": [
{
"start": 510,
"end": 538,
"text": "(Chklovski and Pantel, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pair to Pair paraphrase evaluation",
"sec_num": "5.1."
},
{
"text": "There are 10,000 sentences that contain the chosen verbs that we extracted from Gigaword. For this sentences we can compare the accuracy of the whole text, not only of the verb. That is we can compare the effectiveness of paraphrase replacement in a specified context. Only our approach and PPDB can be compared in this experiment, as for DIRT and word2vec there is no immediate way to carry it out as this approaches do not contain contextual information. We considered the large level in order to maximize the chance that a given sentence matches an existing paraphrase in PPDB. From the 10,000 sentences we selected a random sample of 1,500 sentences and we gave them to the same three annotators, that is 500 for each. 90 sentences were common to all three annotators in order to observe their inter-agreement. The pp approach produced the correct replacement in the sentences in 46% of the sentences, while a suitable paraphrase was found in ppdb only in 19%. The inter agreement was 76%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual Paraphrasing",
"sec_num": "5.2."
},
{
"text": "We have compiled in a unsupervised way a large resource of pattern paraphrases that is available for academic purposes. A pattern is defined by a verb and a set of features that can occur on a specified syntactic position. A pattern matches some constituents in a given sentence by instantiating the features with corresponding words. The pattern is paraphrased by other patterns which do not necessarily assign the same syntactic roles to those constituents. Each pattern corresponds to a set of specific paraphrases which involve different other verbs. There are a few directions that could be exploited in order to increase the quality of this resource. For the moment, there are no paraphrases for noun phrases, including the ones that may contain adjectival determiners. This is a direction that we would like to exploit further. The adverbs were not taken into account when we extracted dependency paths, but they may play a role in the determination of certain pattern paraphrases. Another direction for improvement is to fill the gaps in the pattern set for certain verbs, that is, the algorithms acknowledges that some patterns have not been found yet, but their instances are present in text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Further Work",
"sec_num": "6."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2010 task 9: The interpretation of noun compounds using paraphrasing verbs and prepositions",
"authors": [
{
"first": "C",
"middle": [],
"last": "Butnariu",
"suffix": ""
},
{
"first": "S",
"middle": [
"N"
],
"last": "Kim",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions",
"volume": "",
"issue": "",
"pages": "100--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Butnariu, C., Kim, S. N., Nakov, P.,\u00d3 S\u00e9aghdha, D., Sz- pakowicz, S., and Veale, T. (2009). Semeval-2010 task 9: The interpretation of noun compounds using para- phrasing verbs and prepositions. In Proceedings of the Workshop on Semantic Evaluations: Recent Achieve- ments and Future Directions, pages 100-105. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Verbocean: Mining the web for fine-grained semantic verb relations",
"authors": [
{
"first": "T",
"middle": [],
"last": "Chklovski",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2004,
"venue": "EMNLP",
"volume": "4",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chklovski, T. and Pantel, P. (2004). Verbocean: Min- ing the web for fine-grained semantic verb relations. In EMNLP, volume 4, pages 33-40.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Ppdb: The paraphrase database",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganitkevitch, J., Van Durme, B., and Callison-Burch, C. (2013). Ppdb: The paraphrase database. In HLT- NAACL, pages 758-764.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Comlex syntax: Building a computational lexicon",
"authors": [
{
"first": "R",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Macleod",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Meyers",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "268--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grishman, R., Macleod, C., and Meyers, A. (1994). Comlex syntax: Building a computational lexicon. In Proceedings of the 15th conference on Computational linguistics-Volume 1, pages 268-272. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Inducing example-based semantic frames from a massive amount of verb uses",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawahara, D., Peterson, D., Popescu, O., and Palmer, M. (2014). Inducing example-based semantic frames from a massive amount of verb uses. In Proceedings of the EACL 2014. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dirt: Discovery of inference rules from text",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "323--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. and Pantel, P. (2001). Dirt: Discovery of inference rules from text. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 323-328. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An information-theoretic definition of similarity",
"authors": [
{
"first": "D",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "ICML",
"volume": "98",
"issue": "",
"pages": "296--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. (1998). An information-theoretic definition of sim- ilarity. In ICML, volume 98, pages 296-304.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "task 2: Cross-lingual lexical substitution",
"authors": [],
"year": 2010,
"venue": "Proceedings of the 5th international workshop on semantic evaluation",
"volume": "",
"issue": "",
"pages": "9--14",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semeval-2010 task 2: Cross-lingual lexical substitution. In Proceedings of the 5th international workshop on se- mantic evaluation, pages 9-14. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "G",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. A. (1995). Wordnet: a lexical database for en- glish. Communications of the ACM, 38(11):39-41.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Annotated gigaword",
"authors": [
{
"first": "C",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Webscale Knowledge Extraction",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Napoles, C., Gormley, M., and Van Durme, B. (2012). An- notated gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web- scale Knowledge Extraction, pages 95-100. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mapping wordnet to the sumo ontology",
"authors": [
{
"first": "I",
"middle": [],
"last": "Niles",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Pease",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ieee international knowledge engineering conference",
"volume": "",
"issue": "",
"pages": "23--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Niles, I. and Pease, A. (2003). Mapping wordnet to the sumo ontology. In Proceedings of the ieee international knowledge engineering conference, pages 23-26.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sense discriminative patterns for word sense disambiguation",
"authors": [
{
"first": "O",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2007,
"venue": "SCAR workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Popescu, O. and Magnini, B. (2007). Sense discriminative patterns for word sense disambiguation. In SCAR work- shop, NODALIDA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Mapping cpa onto ontonotes",
"authors": [
{
"first": "O",
"middle": [],
"last": "Popescu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 9th International Conference on Language Resources and Evaluation -LREC14",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Popescu, O., Palmer, M., and Hanks, P. (2014). Mapping cpa onto ontonotes. In Proceedings of the 9th International Conference on Language Resources and Evaluation -LREC14.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning corpus pattern with finite state automata",
"authors": [
{
"first": "O",
"middle": [],
"last": "Popescu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the ICSC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Popescu, O. (2013). Learning corpus pattern with finite state automata. In Proceedings of the ICSC 2013.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "347--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semeval-2012 task 1: English lexical simplification. In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Pro- ceedings of the Sixth International Workshop on Seman- tic Evaluation, pages 347-355. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semeval-2015 task 1: Paraphrase and semantic similarity in twitter (pit). In SemEval@ NAACL-HLT",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semeval-2015 task 1: Paraphrase and semantic similarity in twitter (pit). In SemEval@ NAACL-HLT, pages 1-11.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Examples of verbal paraphrases.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Example of dependency path extracted from GigaWord.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Dependency paths RNN.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Bootstrapping from mono-sense verbs towards ambiguous verbs.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Generalization of features.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "[human 1 ], dobj = [artif act], prepT O = [human 2 ] | v = buy, nsubj = [human 2 ], dobj = [artif act], prepF ROM = [human 1 ]) (we use indexes to distinguish same feature in different syntactic position)",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>: Precision m-level</td></tr></table>",
"text": "",
"num": null
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>inter-agreement</td></tr><tr><td>s level</td><td>97</td></tr><tr><td>m level</td><td>93</td></tr><tr><td>l level</td><td>85</td></tr><tr><td>: Precision l-level</td><td/></tr></table>",
"text": "",
"num": null
}
}
}
}