|
{ |
|
"paper_id": "C08-1029", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:24:25.222079Z" |
|
}, |
|
"title": "A Probabilistic Model for Measuring Grammaticality and Similarity of Automatically Generated Paraphrases of Predicate Phrases", |
|
"authors": [ |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Fujita", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Nagoya University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sato", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Nagoya University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The most critical issue in generating and recognizing paraphrases is development of wide-coverage paraphrase knowledge. Previous work on paraphrase acquisition has collected lexicalized pairs of expressions; however, the results do not ensure full coverage of the various paraphrase phenomena. This paper focuses on productive paraphrases realized by general transformation patterns, and addresses the issues in generating instances of phrasal paraphrases with those patterns. Our probabilistic model computes how two phrases are likely to be correct paraphrases. The model consists of two components: (i) a structured N-gram language model that ensures grammaticality and (ii) a distributional similarity measure for estimating semantic equivalence and substitutability.", |
|
"pdf_parse": { |
|
"paper_id": "C08-1029", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The most critical issue in generating and recognizing paraphrases is development of wide-coverage paraphrase knowledge. Previous work on paraphrase acquisition has collected lexicalized pairs of expressions; however, the results do not ensure full coverage of the various paraphrase phenomena. This paper focuses on productive paraphrases realized by general transformation patterns, and addresses the issues in generating instances of phrasal paraphrases with those patterns. Our probabilistic model computes how two phrases are likely to be correct paraphrases. The model consists of two components: (i) a structured N-gram language model that ensures grammaticality and (ii) a distributional similarity measure for estimating semantic equivalence and substitutability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In many languages, a concept can be expressed with several different linguistic expressions. Handling such synonymous expressions in a given language, i.e., paraphrases, is one of the key issues in a broad range of natural language processing tasks. For example, the technology for identifying paraphrases would play an important role in aggregating the wealth of uninhibited opinions about products and services that are available on the Web, from both the consumers and producers viewpoint. On the other hand, whenever we draw up a document, we always seek the most appropriate expression for conveying our ideas. In such a situation, a system that generates and proposes alternative expressions would be extremely beneficial.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "c Atsushi Fujita and Satoshi Sato, 2008 . Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license. Some rights reserved. http://creativecommons.org/licenses/by-nc-sa/3.0/ Most of previous work on generating and recognizing paraphrases has been dedicated to developing context-free paraphrase knowledge. It is typically represented with pairs of fragmentary expressions that satisfy the following conditions: Condition 1. Semantically equivalent Condition 2. Substitutable in some context", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 39, |
|
"text": "Fujita and Satoshi Sato, 2008", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The most critical issue in developing such knowledge is ensuring the coverage of the paraphrase phenomena. To attain this coverage, we have proposed a strategy for dividing paraphrase phenomena into the following two classes (Fujita et al., 2007) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 225, |
|
"end": 246, |
|
"text": "(Fujita et al., 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) Non-productive (idiosyncratic) paraphrases a. burst into tears \u21d4 cried b. comfort \u21d4 console (Barzilay and McKeown, 2001) (2) Productive paraphrases a. be in our favor \u21d4 be favorable to us b. show a sharp decrease \u21d4 decrease sharply (Fujita et al., 2007 ) Typical examples of non-productive paraphrases are lexical paraphrases such as those shown in (1) and idiomatic paraphrases of literal phrases (e.g., \"kick the bucket\" \u21d4 \"die\"). Knowledge of this class of paraphrases should be stored statically, because they cannot be represented with abstract patterns. On the other hand, a productive paraphrase is one having a degree of regularity, as exhibited by the examples in (2). It is therefore reasonable to represent them with a set of general patterns such as those shown in (3). This attains a higher coverage, while keeping the knowledge manageable.", |
|
"cite_spans": [ |
|
{ |
|
"start": 96, |
|
"end": 124, |
|
"text": "(Barzilay and McKeown, 2001)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 256, |
|
"text": "(Fujita et al., 2007", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(3) a. (Harris, 1957) Various methods have been proposed to acquire paraphrase knowledge (these are reviewed in Section 2.1) where pairs of existing expres-sions are collected from the given corpus, taking the above two conditions into account. On the other hand, another issue arises when paraphrase knowledge is generated from the patterns for productive paraphrases such as shown in (3) by instantiating variables with specific words, namely, Condition 3. Both expressions are grammatical This paper proposes a probabilistic model for computing how likely a given pair of expressions satisfy the aforementioned three conditions. In particular, we focus on the post-generation assessment of automatically generated productive paraphrases of predicate phrases in Japanese.", |
|
"cite_spans": [ |
|
{ |
|
"start": 7, |
|
"end": 21, |
|
"text": "(Harris, 1957)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
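{

"text": "To make pattern instantiation concrete, the following is a minimal Python sketch, not the authors' system, of instantiating pattern (3a) with specific words; the gerund() helper is a deliberate simplification. Note that nothing in the instantiation itself guarantees grammatical output, which is exactly why Condition 3 must be assessed afterwards:
def gerund(verb):
    # crude heuristic for the V-ing form; a real system would use a
    # morphological generator
    if verb.endswith('e') and not verb.endswith('ee'):
        return verb[:-1] + 'ing'
    return verb + 'ing'

def instantiate_3a(n1, v, n2):
    # pattern (3a): N1 V N2  =>  N1's V-ing of N2
    source = '{} {} {}'.format(n1, v, n2)
    target = \"{}'s {} of {}\".format(n1, gerund(v), n2)
    return source, target

print(instantiate_3a('the committee', 'approve', 'the plan'))
# (\"the committee approve the plan\", \"the committee's approving of the plan\")",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Introduction",

"sec_num": "1"

},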
|
{ |
|
"text": "In the next section, we review previous approaches and models. The proposed probabilistic model is then presented in Section 3, where the grammaticality factor and similarity factor are derived from a conditional probability. In Section 4, the settings for and results of an empirical experiment are detailed. Finally, Section 5 summarizes this paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The task of automatically acquiring paraphrase knowledge is drawing the attention of an increasing number of researchers. They are tackling the problem of how precisely paraphrase knowledge can be acquired, although they have tended to notice that it is hard to acquire paraphrase knowledge that ensures full coverage of the various paraphrase phenomena from existing text corpora alone. To date, two streams of research have evolved: one acquires paraphrase knowledge from parallel/comparable corpora, while the other uses the regular corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work 2.1 Acquiring paraphrase knowledge", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Several alignment techniques have been proposed to acquire paraphrase knowledge from parallel/comparable corpora, imitating the techniques devised for machine translation. Multiple translations of the same text (Barzilay and McKeown, 2001) , corresponding articles from multiple news sources ( Barzilay and Lee, 2003; Quirk et al., 2004; Dolan et al., 2004) , and bilingual corpus (Bannard and Callison-Burch, 2005 ) have been utilized. Unfortunately, this approach produces only a low coverage because the size of the parallel/comparable corpora is limited.", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 239, |
|
"text": "(Barzilay and McKeown, 2001)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 317, |
|
"text": "Barzilay and Lee, 2003;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 318, |
|
"end": 337, |
|
"text": "Quirk et al., 2004;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 338, |
|
"end": 357, |
|
"text": "Dolan et al., 2004)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 381, |
|
"end": 414, |
|
"text": "(Bannard and Callison-Burch, 2005", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work 2.1 Acquiring paraphrase knowledge", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the second stream, i.e., paraphrase acquisition from the regular corpus, the distributional hypothesis (Harris, 1968) has been adopted. The similarity of two expressions, computed from this hypothesis, is called distributional similarity. The essence of this measure is summarized as follows: Feature representation: to compute the similarity, given expressions are first mapped to certain feature representations. Expressions that co-occur with the given expression, such as adjacent words (Barzilay and McKeown, 2001; Lin and Pantel, 2001) , and modifiers/modifiees (Yamamoto, 2002; Weeds et al., 2005) , have so far been examined. Feature weighting: to precisely compute the similarity, the weight for each feature is adjusted. Point-wise mutual information (Lin, 1998) and Relative Feature Focus (Geffet and Dagan, 2004) are well-known examples. Feature comparison measures: to convert two feature sets into a scalar value, several measures have been proposed, such as cosine, Lin's measure (Lin, 1998) , Kullback-Leibler (KL) divergence and its variants. While most researchers extract fully-lexicalized pairs of words or word sequences only, two algorithms collect template-like knowledge using dependency parsers. DIRT (Lin and Pantel, 2001) collects pairs of paths in dependency parses that connect two nominal entities. TEASE (Szpektor et al., 2004) discovers dependency sub-parses from the Web, based on sets of representative entities for a given lexical item. The output of these systems contains the variable slots as shown in (4). (4) a. X wrote Y \u21d4 X is the author of Y b. X solves Y \u21d4 X deals with Y (Lin and Pantel, 2001 ) The knowledge in (4) falls between that in (1), which is fully lexicalized, and that in (3), which is almost fully abstracted. As a way of enriching such a template-like knowledge, Pantel et al. (2007) proposed the notion of inferential selectional preference and collected expressions that would fill those slots.", |
|
"cite_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 120, |
|
"text": "(Harris, 1968)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 522, |
|
"text": "(Barzilay and McKeown, 2001;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 544, |
|
"text": "Lin and Pantel, 2001)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 571, |
|
"end": 587, |
|
"text": "(Yamamoto, 2002;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 588, |
|
"end": 607, |
|
"text": "Weeds et al., 2005)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 775, |
|
"text": "(Lin, 1998)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 803, |
|
"end": 827, |
|
"text": "(Geffet and Dagan, 2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 998, |
|
"end": 1009, |
|
"text": "(Lin, 1998)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1229, |
|
"end": 1251, |
|
"text": "(Lin and Pantel, 2001)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1338, |
|
"end": 1361, |
|
"text": "(Szpektor et al., 2004)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 1619, |
|
"end": 1640, |
|
"text": "(Lin and Pantel, 2001", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1824, |
|
"end": 1844, |
|
"text": "Pantel et al. (2007)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work 2.1 Acquiring paraphrase knowledge", |
|
"sec_num": "2" |
|
}, |
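{

"text": "As a concrete illustration of the three components above, the following is a minimal Python sketch, with toy counts rather than any cited system's data, that weights co-occurrence features by point-wise mutual information and compares two weighted feature vectors with the cosine measure:
import math

def pmi_weights(cooc, expr_total, feat_totals, grand_total):
    # feature weighting: point-wise mutual information (Lin, 1998),
    # clipped at zero as is common practice
    w = {}
    for f, c in cooc.items():
        p_joint = c / grand_total
        p_expr = expr_total / grand_total
        p_feat = feat_totals[f] / grand_total
        w[f] = max(0.0, math.log(p_joint / (p_expr * p_feat)))
    return w

def cosine(w1, w2):
    # feature comparison: cosine of two weighted feature vectors
    dot = sum(w1[f] * w2[f] for f in set(w1) & set(w2))
    n1 = math.sqrt(sum(v * v for v in w1.values()))
    n2 = math.sqrt(sum(v * v for v in w2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Previous work 2.1 Acquiring paraphrase knowledge",

"sec_num": "2"

},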
|
{ |
|
"text": "As mentioned in Section 1, the aim of the studies reviewed here is to collect paraphrase knowledge. Thus, they need not to take the grammaticality of expressions into account.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous work 2.1 Acquiring paraphrase knowledge", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Representing productive paraphrases with a set of general patterns makes them maintainable and attains a higher coverage of the paraphrase phenomena. From the transformation grammar (Har-ris, 1957) , this approach has been adopted by many researchers (Mel'\u010duk and Polgu\u00e8re, 1987; Jacquemin, 1999; Fujita et al., 2007) . An important issue arises when such a pattern is used to generate instances of paraphrases by replacing its variables with specific words. This involves assessing the grammaticality of two expressions in addition to their semantic equivalence and substitutability.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 197, |
|
"text": "(Har-ris, 1957)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 279, |
|
"text": "(Mel'\u010duk and Polgu\u00e8re, 1987;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 296, |
|
"text": "Jacquemin, 1999;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 297, |
|
"end": 317, |
|
"text": "Fujita et al., 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating paraphrase instances", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "As a post-generation assessment of automatically generated productive paraphrases, we have applied distributional similarity measures (Fujita and Sato, 2008) . Our findings from a series of empirical experiments are summarized as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 157, |
|
"text": "(Fujita and Sato, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating paraphrase instances", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "\u2022 Search engines are useful for retrieving the contextual features of predicate phrases despite some limitations (Kilgarriff, 2007) . \u2022 Distributional similarity measures produce a tolerable level of performance. The grammaticality of a phrase, however, is merely assessed by issuing the phrase as a query to a commercial search engine. Although a more frequent expression is more grammatical, the length bias should also be considered in the assessment. Quirk et al. (2004) built a paraphrase generation model from a monolingual comparable corpus based on a statistical machine translation framework, where the language model assesses the grammaticality of the translations, i.e., generated expressions. The translation model, however, is not suitable for generating productive paraphrases, because it learns word alignments at the surface level. To cover all of the productive paraphrases, we require an non-real comparable corpus in which all instances of productive paraphrases have a chance of being aligned. Furthermore, as the translation model optimizes the word alignment at the sentence level, the substitutability of the aligned word sequences cannot be explicitly guaranteed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 131, |
|
"text": "(Kilgarriff, 2007)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 455, |
|
"end": 474, |
|
"text": "Quirk et al. (2004)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generating paraphrase instances", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "To date, no model has been established that takes into account all of the three aforementioned conditions. With the ultimate aim of building an ideal model, this section overviews the characteristics and drawbacks of the four existing measures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing measures for paraphrases", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Lin's measure Lin (1998) proposed a symmetrical measure:", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 24, |
|
"text": "Lin (1998)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing measures for paraphrases", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Par Lin (s \u21d4 t) = f \u2208Fs\u2229Ft (w(s, f ) + w(t, f )) f \u2208Fs w(s, f ) + f \u2208Ft w(t, f ) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing measures for paraphrases", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "where F s and F t denote sets of features with positive weights for words s and t, respectively. Although this measure has been widely cited and has so far exhibited good performance, its symmetry seems unnatural. Moreover, it may not work well for dealing with general predicate phrases because it is hard to enumerate all phrases to determine the weights of features w(\u2022, f). We thus simply adopted the co-occurrence frequency of the phrase and the feature as in (Fujita and Sato, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 488, |
|
"text": "(Fujita and Sato, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Existing measures for paraphrases", |
|
"sec_num": "2.3" |
|
}, |
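{

"text": "A minimal Python sketch of this measure, with raw co-occurrence frequencies standing in for the weights w(·, f) as described above; the dictionaries in the usage line are illustrative, not real data:
def par_lin(w_s, w_t):
    # Par_Lin(s <=> t): shared feature mass over total feature mass
    shared = set(w_s) & set(w_t)
    numer = sum(w_s[f] + w_t[f] for f in shared)
    denom = sum(w_s.values()) + sum(w_t.values())
    return numer / denom if denom else 0.0

print(par_lin({'f1': 3, 'f2': 1}, {'f1': 2, 'f3': 4}))  # (3+2)/(4+6) = 0.5",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Existing measures for paraphrases",

"sec_num": "2.3"

},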
|
{ |
|
"text": "The skew divergence, a variant of KL divergence, was proposed in (Lee, 1999) based on an insight: the substitutability of one word for another need not be symmetrical. The divergence is given by the following formula:", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 76, |
|
"text": "(Lee, 1999)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skew divergence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "d skew (t, s) = D (P s \u03b1P t + (1 \u2212 \u03b1)P s ) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skew divergence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where P s and P t are the probability distributions of features for the given original and substituted words s and t, respectively. 0 \u2264 \u03b1 \u2264 1 is a parameter for approximating KL divergence D. The score can be recast into a similarity score via, for example, the following function (Fujita and Sato, 2008) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 281, |
|
"end": 304, |
|
"text": "(Fujita and Sato, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skew divergence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Par skew (s\u21d2t) = exp(\u2212d skew (t, s)) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skew divergence", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This measure offers an advantage: the weight for each feature is determined theoretically. However, the optimization of \u03b1 is difficult because it varies according to the task and even the data size (confidence of probability distributions).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Skew divergence", |
|
"sec_num": null |
|
}, |
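{

"text": "The following is a minimal Python sketch of the skew divergence and its conversion into Par_skew; P_s and P_t are dictionaries mapping features to probabilities, and alpha = 0.99 is an illustrative choice, not a setting reported in this paper:
import math

def d_skew(p_t, p_s, alpha=0.99):
    # D(P_s || alpha*P_t + (1 - alpha)*P_s); the mixture is non-zero
    # wherever P_s is, so the divergence stays finite for alpha < 1
    div = 0.0
    for f, ps in p_s.items():
        if ps > 0.0:
            mix = alpha * p_t.get(f, 0.0) + (1.0 - alpha) * ps
            div += ps * math.log(ps / mix)
    return div

def par_skew(p_s, p_t, alpha=0.99):
    return math.exp(-d_skew(p_t, p_s, alpha))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Skew divergence",

"sec_num": null

},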
|
{ |
|
"text": "Bannard and Callison-Burch (2005) proposed a probabilistic model for acquiring phrasal paraphrases 1 . The likelihood of t as a paraphrase of the given phrase s is defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation-based conditional probability", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P (t|s) = f \u2208tr (s)\u2229tr (t) P (t|f )P (f |s),", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation-based conditional probability", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where tr (e) stands for a set of foreign language phrases that are aligned with e in the given parallel corpus. Parameters P (t|f ) and P (f |s) are also estimated using the given parallel corpus. A largescale parallel corpus may enable us to precisely acquire a large amount of paraphrase knowledge. It is not feasible, however, to build (or obtain) a parallel corpus in which all the instances of productive paraphrases are translated to the same expression in the other side of language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Translation-based conditional probability", |
|
"sec_num": null |
|
}, |
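{

"text": "A minimal Python sketch of this pivot computation, assuming the two alignment distributions have already been estimated from a parallel corpus; the dictionary arguments are placeholders for those estimates:
def pivot_prob(p_f_given_s, p_t_given_f):
    # P(t|s) = sum, over pivot phrases f aligned with both s and t,
    # of P(t|f) * P(f|s)
    shared = set(p_f_given_s) & set(p_t_given_f)
    return sum(p_t_given_f[f] * p_f_given_s[f] for f in shared)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Translation-based conditional probability",

"sec_num": null

},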
|
{ |
|
"text": "Recall that our aim is to establish a measure that computes the likelihood of a given pair of automatically generated predicate phrases satisfying the following three conditions: Condition 1. Semantically equivalent Condition 2. Substitutable in some context Condition 3. Both expressions are grammatical Based on the characteristics of the existing measures reviewed in Section 2.3, we propose a probabilistic model. Let s and t be the source and target predicate phrase, respectively. Assuming that s is grammatical, the degree to which the above conditions are satisfied is formalized as a conditional probability P (t|s), as in (Bannard and Callison-Burch, 2005) . Then, assuming that s and t are paradigmatic (i.e., paraphrases) and thus do not cooccur, the proposed model is derived as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 632, |
|
"end": 666, |
|
"text": "(Bannard and Callison-Burch, 2005)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation with conditional probability", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "P (t|s) = f \u2208F P (t|f )P (f |s) = f \u2208F P (f |t)P (t) P (f ) P (f |s) = P (t) f \u2208F P (f |t)P (f |s) P (f ) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation with conditional probability", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where F denotes a set of features. The first factor P (t) is called the grammaticality factor because it quantifies the degree to which condition 3 is satisfied, except that we assume that the given s is grammatical. The second factor", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation with conditional probability", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "f \u2208F P (f |t)P (f |s) P (f ) (Sim(s, t), hereafter)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation with conditional probability", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ", on the other hand, is called the similarity factor because it approximates the degree to which conditions 1 and 2 are satisfied by summing up the overlap of the features of two expressions s and t.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation with conditional probability", |
|
"sec_num": "3.1" |
|
}, |
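{

"text": "Putting the derivation together, the following is a minimal Python sketch of the proposed score; the probability estimates are assumed inputs here, and Sections 3.2 and 3.3 describe how they are actually obtained:
def sim(p_f_given_s, p_f_given_t, p_f):
    # similarity factor: sum_f P(f|t) * P(f|s) / P(f); assumes p_f
    # has a non-zero entry for every shared feature
    shared = set(p_f_given_s) & set(p_f_given_t)
    return sum(p_f_given_t[f] * p_f_given_s[f] / p_f[f] for f in shared)

def paraphrase_score(p_t, p_f_given_s, p_f_given_t, p_f):
    # P(t|s) = grammaticality factor P(t) times similarity factor
    return p_t * sim(p_f_given_s, p_f_given_t, p_f)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Formulation with conditional probability",

"sec_num": "3.1"

},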
|
{ |
|
"text": "The characteristics and advantages of the proposed model are summarized as follows: 1) Asymmetric. 2) Grammaticality is assessed by P (t).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation with conditional probability", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "3) No heuristic is introduced. As the skew divergence, the weight of the features can be simply estimated as conditional probabilities P (f |t) and P (f |s) and marginal probability P (f ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation with conditional probability", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "4) There is no need to enumerate all the phrases. s and t are merely the given conditions. The following subsections describe each factor.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Formulation with conditional probability", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The factor P (t) quantifies how the phrase t is grammatical using statistical language model. Unlike English, in Japanese, predicates such as verbs and adjectives do not necessarily determine the order of their arguments, although they have some preference. For example, both of the two sentences in (5) are grammatical. This motivates us to use structured N -gram language models (Habash, 2004) . Given a phrase t, its grammaticality P (t) is formulated as follows, assuming a (N \u2212 1)-th order Markov process for generating its dependency structure T (t):", |
|
"cite_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 395, |
|
"text": "(Habash, 2004)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammaticality factor", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "P (t) = i=1...|T (t)| P d c i |d 1 i , d 2 i , . . . , d N \u22121 i 1/|T (t)| ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammaticality factor", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where |T (t)| stands for the number of nodes in T (t). To ignore the length bias of the target phrase, a normalization factor 1/|T (t)| is introduced. Then, a concrete definition of the nodes in the dependency structure is given. Widely-used Japanese dependency parsers such as CaboCha 2 and KNP 3 consider a sequence of words as a node called a \"bunsetsu\" that consists of at least one content word followed by a sequence of function words if any. The hyphenated word sequences in (6) exemplify those nodes. meeting-DAT-TOP to come-NEG-must He will surely not come to today's meeting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammaticality factor", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "As bunsetsu can be quite long, involving more than ten words, regarding it as a node makes the model complex. Therefore, we compare the following two versions of dependency structures whose nodes are smaller than bunsetsu. MDS: Morpheme-based dependency structure (Takahashi et al., 2001 ) regards a morpheme as a node. MDS of sentence (6) is shown in Figure 1 . CFDS: The node of a content-function-based dependency structure is either a sequence of content words or of function words. CFDS of sentence (6) is shown Figure 2 . Structured N -gram language models were created from 15 years of Mainichi newspaper articles 4 using a dependency parser Cabocha, with N being varied from 1 to 3. Then, the 3-gram conditional probability", |
|
"cite_spans": [ |
|
{ |
|
"start": 264, |
|
"end": 287, |
|
"text": "(Takahashi et al., 2001", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 352, |
|
"end": 360, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF3" |
|
}, |
|
{ |
|
"start": 517, |
|
"end": 525, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Grammaticality factor", |
|
"sec_num": "3.2" |
|
}, |
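{

"text": "As an illustration of the CFDS node definition, the following is a minimal Python sketch that groups the morphemes of a bunsetsu into maximal runs of content words and of function words; the two-way POS split is a hypothetical simplification of a real tagset:
FUNCTION_POS = {'particle', 'auxiliary'}  # assumed inventory

def cfds_nodes(bunsetsu):
    # bunsetsu: list of (surface, pos) morphemes
    nodes, run, run_is_func = [], [], None
    for surface, pos in bunsetsu:
        is_func = pos in FUNCTION_POS
        if run and is_func != run_is_func:
            nodes.append(''.join(run))
            run = []
        run.append(surface)
        run_is_func = is_func
    if run:
        nodes.append(''.join(run))
    return nodes

print(cfds_nodes([('kaigi', 'noun'), ('ni', 'particle'), ('wa', 'particle')]))
# ['kaigi', 'niwa']: one content node followed by one function node",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Grammaticality factor",

"sec_num": "3.2"

},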
|
|
{ |
|
"text": "P d (c i |d 1 i , d 2 i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammaticality factor", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "is given by the linear interpolation of those three models as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammaticality factor", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "P d (c i |d 1 i , d 2 i ) = \u03bb 3 P ML (c i |d 1 i , d 2 i ) +\u03bb 2 P ML (c i |d 1 i ) +\u03bb 1 P ML (c i ), s.t. j \u03bb j = 1,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammaticality factor", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where mixture weights \u03bb j are selected via an EM algorithm using development data 5 that has not been used for estimating P ML .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grammaticality factor", |
|
"sec_num": "3.2" |
|
}, |
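{

"text": "A minimal Python sketch of the resulting grammaticality computation; nodes is assumed to list each node c_i of T(t) with its ancestors (d_i^1, d_i^2), p3, p2, and p1 stand for the maximum-likelihood estimates of the three models (assumed smoothed and non-zero), and the mixture weights are illustrative values only:
import math

def grammaticality(nodes, p3, p2, p1, lam=(0.1, 0.3, 0.6)):
    # lam = (lambda_1, lambda_2, lambda_3), summing to one
    l1, l2, l3 = lam
    log_p = 0.0
    for c, (d1, d2) in nodes:
        p = l3 * p3(c, d1, d2) + l2 * p2(c, d1) + l1 * p1(c)
        log_p += math.log(p)
    # geometric mean over the |T(t)| nodes removes the length bias
    return math.exp(log_p / len(nodes))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Grammaticality factor",

"sec_num": "3.2"

},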
|
{ |
|
"text": "The similarity factor Sim(s, t) quantifies how two phrases s and t are similar by comparing two sets of contextual features f \u2208 F for s and t. Figure 2 : CFDS of sentence (6).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 151, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Similarity factor", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We employ the following two types of feature sets, which we have examined in our previous work (Fujita and Sato, 2008) , where a feature f consists of an expression e and a relation r: BOW: A pair of phrases is likely to be semantically similar, if the distributions of the words surrounding the phrases is similar. The relation set R BOW contains only \"cooccur in the same sentence\". MOD: A pair of phrases is likely to be substitutable with each other, provided they share a number of instances of modifiers and modifiees: the set of the relation R MOD consists of two relations \"modifier\" and \"modifiee\". Conditional probability distributions P (f |s) and P (f |t) are estimated using a Web search engine as in (Fujita and Sato, 2008) . Given a phrase p, snippets of Web pages are firstly obtained via Yahoo API 6 by issuing p as a query. The maximum number of snippets is set to 1,000. Then, the features of the phrase are retrieved from those snippets using a morphological analyzer ChaSen 7 and CaboCha. Finally, the conditional probability distribution P (f |p) is estimated as follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 118, |
|
"text": "(Fujita and Sato, 2008)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 714, |
|
"end": 737, |
|
"text": "(Fujita and Sato, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity factor", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "P (f |p) = P ( r, e |p) = freq sni (p, r, e) r \u2208R e freq sni (p, r , e ) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity factor", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where freq sni (p, r, e) stands for the frequency of the expression e appealing with the phrase p in relation r within the snippets for p. The weight for features P (f ) is estimated using a static corpus based on the following equation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity factor", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "P (f ) = P ( r, e ) = freq cp (r, e) r \u2208R e freq cp (r , e ) ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity factor", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "where freq cp (r, e) indicates the frequency of the expression e appearing with something in relation r within the given corpus. Two different sorts of corpora are separately used to build two variations of P (f ). The one is Mainichi, which is used for building structured N -gram language models in Section 3.2, while the other is a huge corpus consisting of 470M sentences collected from the Web (Kawahara and Kurohashi, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 399, |
|
"end": 429, |
|
"text": "(Kawahara and Kurohashi, 2006)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Similarity factor", |
|
"sec_num": "3.3" |
|
}, |
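{

"text": "A minimal Python sketch of these maximum-likelihood estimates; freq_sni holds the ⟨relation, expression⟩ feature counts extracted from the snippets of a phrase p, freq_cp holds the corresponding counts from the static corpus, and the toy Counter below stands in for real counts:
from collections import Counter

def relative_freq(counts):
    # turn raw feature counts into a probability distribution
    total = sum(counts.values())
    return {f: c / total for f, c in counts.items()}

freq_sni = Counter({('modifier', 'kitto'): 3, ('modifiee', 'kuru'): 1})
print(relative_freq(freq_sni))  # estimates P(f|p): {...: 0.75, ...: 0.25}
# P(f) is estimated in the same way from freq_cp over the whole corpus",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Similarity factor",

"sec_num": "3.3"

},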
|
{ |
|
"text": "We conducted an empirical experiment to evaluate the proposed model using the test suite developed in (Fujita and Sato, 2008) . The test suite consists of 176,541 pairs of paraphrase candidates that are automatically generated using a pattern-based paraphrase generation system (Fujita et al., 2007) for 4,002 relatively high-frequency phrases sampled from a newspaper corpus 8 . To evaluate the system from a generation viewpoint, i.e., how well a system can rank a correct candidate first, we extracted paraphrase candidates for 200 randomly sampled source phrases from the test suite. Table 1 shows the statistics of the test data. The \"All-Yield\" column shows that the number of candidates for a source phrase varies considerably, which implies that the data contains cases that have various difficulties. While the average number of candidates for each source phrase was 48.3 (the maximum was 186), it was dramatically reduced through extracting features for each source and candidate paraphrase from Web snippets: to 5.2 with BOW and to 4.8 with MOD. This suggests that a large number of spurious phrases were generated but discarded by going to the Web, and the task was significantly simplified.", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 125, |
|
"text": "(Fujita and Sato, 2008)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 299, |
|
"text": "(Fujita et al., 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 588, |
|
"end": 595, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Through this experiment, we evaluated several versions of the proposed model to answer the following questions: Q1. Is the proposed model superior to existing measures in practice? Par Lin and Par skew are regarded as being the baseline. Q2. Which language model performs better at estimating P (t)? MDS and CFDS are compared. Q3. Which corpus performs better at estimating P (f )? The advantage of Kawahara's huge 8 The grammaticality of the source phrases are guaranteed. dition to BOW and MOD, the harmonic mean of the scores derived from BOW and MOD is examined (referred to as HAR). Q5. Can the quality of P (f |s) and P (f |t) be improved by using a larger number of snippets? As the maximum number of snippets (N S ), we compared 500 and 1,000.", |
|
"cite_spans": [ |
|
{ |
|
"start": 415, |
|
"end": 416, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Questions", |
|
"sec_num": "4.2" |
|
}, |
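{

"text": "HAR is taken here to be the standard harmonic mean of the two similarity scores, i.e., presumably $\\mathrm{HAR}(s, t) = \\frac{2 \\, \\mathrm{Sim}_{BOW}(s, t) \\, \\mathrm{Sim}_{MOD}(s, t)}{\\mathrm{Sim}_{BOW}(s, t) + \\mathrm{Sim}_{MOD}(s, t)}$.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Questions",

"sec_num": "4.2"

},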
|
{ |
|
"text": "Two assessors were asked to judge paraphrase candidates that are ranked first by either of the above models if each candidate satisfies each of the three conditions. The results for all the above options are summarized in Table 2 , where the strict precision is calculated based on those cases that gain two positive judgements, while the lenient precision is for at least one positive judgement. A1: Our greatest concern is the actual performance of our probabilistic model. However, no variation of the proposed model could outperform the existing models (Par Lin and Par skew ) that only assess similarity. Furthermore, McNemer's test with p < 0.05 revealed that the precisions of all the models, except the combination of CFDS for P (t) and Mainichi for P (f ), were significantly worse than those of the best models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 229, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
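{

"text": "A minimal Python sketch of the two precision figures used in Table 2; judgements lists one pair of binary assessor decisions per evaluated candidate, with toy values below:
def precisions(judgements):
    # strict: both assessors positive; lenient: at least one positive
    strict = sum(1 for a, b in judgements if a and b) / len(judgements)
    lenient = sum(1 for a, b in judgements if a or b) / len(judgements)
    return strict, lenient

print(precisions([(1, 1), (1, 0), (0, 0)]))  # (0.33..., 0.67...)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results",

"sec_num": "4.3"

},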
|
{ |
|
"text": "To clarify the cause of these disappointing results, we investigated the performance of each factor. Table 3 shows how well the grammaticality factors select a grammatical phrase, while Table 4 illustrates how well the similarity factors rank a correct paraphrase first. As shown in these tables, neither factor performed the task well, although combinations produced a slight improvement in performance. A detailed discussion is given below in A2 for the grammaticality factors, and in A3-A5 for the similarity factors.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 108, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 193, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "A2: Comparisons between MDS and CFDS revealed that CFDS always produced better results than MDS not only when used for measuring grammaticality (Table 3) , but also when used as a component of the entire model (Table 2) . This result is quite natural because MDS cannot verify the collocation between content words in those cases where a number of function words appear between them. On the other hand, CFDS with N = 3 could verify this as a result of treating the sequence of function words as a single node.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 153, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 210, |
|
"end": 219, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "As mentioned in A1, however, a more sophisticated language model must enhance the proposed model. One way of obtaining a suitable granularity of nodes is to introduce latent classes, such as the Semi-Markov class model (Okanohara and Tsujii, 2007) . The existence of many orthographic variants of both the content and function words may prevent us from accurately estimating the grammaticality. We plan to normalize these variations by using several existing resources such as the Japanese functional expression dictionary (Matsuyoshi, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 247, |
|
"text": "(Okanohara and Tsujii, 2007)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 541, |
|
"text": "(Matsuyoshi, 2008)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "A3: Contrary to our expectations, the huge Web corpus did not offer any advantage over the newspaper corpus: Mainichi always produced better results than WebCP when it was combined with the grammaticality factor or when MOD was used.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We can speculate that morphological and dependency parsers produce errors when features are extracted, because they are tuned to newspaper articles. Likewise, P (f |s) and P (f |t) may involve noise even though they are estimated using rela-tively clean parts of Web text that are retrieved by querying phrase candidates.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "A4: For Par Lin and Par skew , different sets of features led to consistent results with our previous experiments in (Fujita and Sato, 2008) , i.e., BOW < MOD HAR. On the other hand, for the proposed models, MOD and HAR led to only small or sometimes negative effects. When the similarity factor was used alone, however, these features beat BOW. Furthermore, the impact of combining BOW and MOD into HAR was significant.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 140, |
|
"text": "(Fujita and Sato, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Given this tendency, it is expected that the grammaticality factor might be excessively emphasized. Our probability model was derived straightforwardly from the conditional probability P (t|s); however, the combination of the two factors should be tuned according to their implementation. A5: Finally, the influence of the number of Web snippets was analyzed; no significant difference was observed. This is because we could retrieve more than 500 snippets for only 172 pairs of expressions among our test samples. As it is time-consuming to obtain a large number of Web snippets, the trade-off between the number of Web snippets and the performance should be investigated further, although the quality of the Web snippets and what appears at the top of the search results will vary according to several factors other than linguistic ones.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "A pair of expressions qualifies as paraphrases iff they are semantically equivalent, substitutable in some context, and grammatical. In cases where paraphrase knowledge is represented with abstract patterns to attain a high coverage of the paraphrase phenomena, we should assess not only the first and second conditions, but also the third condition.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper, we proposed a probabilistic model for computing how two phrases are likely to be paraphrases. The proposed model consists of two components: (i) a structured N -gram language model that ensures grammaticality and (ii) a distributional similarity measure for estimating semantic equivalence and substitutability between two phrases. Through an experiment, we empirically evaluated the performance of the proposed model and analyzed the characteristics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Future work includes building a more sophisticated structured language model to improve the performance of the proposed model and conducting an experiment on template-like paraphrase knowledge for other than productive paraphrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In their definition, the term \"phrase\" is a sequence of words, while in this paper it designates the subtrees governed by predicates(Fujita et al., 2007).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://chasen.org/\u02dctaku/software/cabocha/ 3 http://nlp.kuee.kyoto-u.ac.jp/nl-resource/knp.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://developer.yahoo.co.jp/search/ 7 http://chasen.naist.jp/hiki/ChaSen/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Paraphrasing with bilingual parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Bannard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "597--604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bannard, Colin and Chris Callison-Burch. 2005. Paraphras- ing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 597-604.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Extracting paraphrases from a parallel corpus", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "50--57", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barzilay, Regina and Kathleen R. McKeown. 2001. Extract- ing paraphrases from a parallel corpus. In Proceedings of the 39th Annual Meeting of the Association for Computa- tional Linguistics (ACL), pages 50-57.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Learning to paraphrase: an unsupervised approach using multiplesequence alignment", |
|
"authors": [ |
|
{ |
|
"first": "Regina", |
|
"middle": [], |
|
"last": "Barzilay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics (HLT-NAACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "16--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barzilay, Regina and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple- sequence alignment. In Proceedings of the 2003 Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 16-23.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources", |
|
"authors": [ |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 20th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "350--356", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dolan, Bill, Chris Quirk, and Chris Brockett. 2004. Unsu- pervised construction of large paraphrase corpora: exploit- ing massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguis- tics (COLING), pages 350-356.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A compositional approach toward dynamic phrasal thesaurus", |
|
"authors": [ |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Fujita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuhei", |
|
"middle": [], |
|
"last": "Kato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naoki", |
|
"middle": [], |
|
"last": "Kato", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing (WTEP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "151--158", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fujita, Atsushi, Shuhei Kato, Naoki Kato, and Satoshi Sato. 2007. A compositional approach toward dynamic phrasal thesaurus. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing (WTEP), pages 151-158.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Computing paraphrasability of syntactic variants using Web snippets", |
|
"authors": [ |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Fujita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Satoshi", |
|
"middle": [], |
|
"last": "Sato", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 3rd International Joint Conference on Natural Language Processing (IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "537--544", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fujita, Atsushi and Satoshi Sato. 2008. Computing para- phrasability of syntactic variants using Web snippets. In Proceedings of the 3rd International Joint Conference on Natural Language Processing (IJCNLP), pages 537-544.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Feature vector quality and distributional similarity", |
|
"authors": [ |
|
{ |
|
"first": "Maayan", |
|
"middle": [], |
|
"last": "Geffet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 20th International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "247--253", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geffet, Maayan and Ido Dagan. 2004. Feature vector qual- ity and distributional similarity. In Proceedings of the 20th International Conference on Computational Linguis- tics (COLING), pages 247-253.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "The use of a structural N-gram language model in generation-heavy hybrid machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Nizar", |
|
"middle": [], |
|
"last": "Habash", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 3rd International Natural Language Generation Conference (INLG)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "61--69", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Habash, Nizar. 2004. The use of a structural N-gram lan- guage model in generation-heavy hybrid machine transla- tion. In Proceedings of the 3rd International Natural Lan- guage Generation Conference (INLG), pages 61-69.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Co-occurrence and transformation in linguistic structure", |
|
"authors": [ |
|
{ |
|
"first": "Zellig", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1957, |
|
"venue": "Language", |
|
"volume": "33", |
|
"issue": "3", |
|
"pages": "283--340", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harris, Zellig. 1957. Co-occurrence and transformation in linguistic structure. Language, 33(3):283-340.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Mathematical structures of language", |
|
"authors": [ |
|
{ |
|
"first": "Zellig", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1968, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Harris, Zellig. 1968. Mathematical structures of language. John Wiley & Sons.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Syntagmatic and paradigmatic representations of term variation", |
|
"authors": [ |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Jacquemin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "341--348", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacquemin, Christian. 1999. Syntagmatic and paradigmatic representations of term variation. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL), pages 341-348.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Case frame compilation from the Web using high-performance computing", |
|
"authors": [ |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sadao", |
|
"middle": [], |
|
"last": "Kurohashi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kawahara, Daisuke and Sadao Kurohashi. 2006. Case frame compilation from the Web using high-performance com- puting. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC).", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Googleology is bad science", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "147--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilgarriff, Adam. 2007. Googleology is bad science. Com- putational Linguistics, 33(1):147-151.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Measures of distributional similarity", |
|
"authors": [ |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lee, Lillian. 1999. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL), pages 25-32.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Automatic retrieval and clustering of similar words", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (COLING-ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "768--774", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, Dekang. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguis- tics (COLING-ACL), pages 768-774.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Discovery of inference rules for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Natural Language Engineering", |
|
"volume": "7", |
|
"issue": "4", |
|
"pages": "343--360", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, Dekang and Patrick Pantel. 2001. Discovery of infer- ence rules for question answering. Natural Language En- gineering, 7(4):343-360.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Hierarchically organized dictionary of Japanese functional expressions: design, compilation and application", |
|
"authors": [ |
|
{ |
|
"first": "Suguru", |
|
"middle": [], |
|
"last": "Matsuyoshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matsuyoshi, Suguru. 2008. Hierarchically organized dictio- nary of Japanese functional expressions: design, compi- lation and application. Ph.D. thesis, Graduate School of Informatics, Kyoto University.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A formal lexicon in meaning-text theory (or how to do lexica with words)", |
|
"authors": [ |
|
{ |
|
"first": "Igor", |
|
"middle": [], |
|
"last": "Mel'\u010duk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alain", |
|
"middle": [], |
|
"last": "Polgu\u00e8re", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Computational Linguistics", |
|
"volume": "13", |
|
"issue": "3-4", |
|
"pages": "261--275", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mel'\u010duk, Igor and Alain Polgu\u00e8re. 1987. A formal lexicon in meaning-text theory (or how to do lexica with words). Computational Linguistics, 13(3-4):261-275.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A discriminative language model with pseudo-negative samples", |
|
"authors": [ |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Okanohara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "73--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Okanohara, Daisuke and Jun'ichi Tsujii. 2007. A discrimi- native language model with pseudo-negative samples. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 73-80.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "ISP: Learning inferential selectional preferences", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Bhagat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonaventura", |
|
"middle": [], |
|
"last": "Coppola", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Chklovski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "564--571", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pantel, Patrick, Rahul Bhagat, Bonaventura Coppola, Timo- thy Chklovski, and Eduard Hovy. 2007. ISP: Learning inferential selectional preferences. In Proceedings of Hu- man Language Technologies 2007: The Conference of the North American Chapter of the Association for Computa- tional Linguistics (NAACL-HLT), pages 564-571.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Monolingual machine translation for paraphrase generation", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "142--149", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Quirk, Chris, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase genera- tion. In Proceedings of the 2004 Conference on Empiri- cal Methods in Natural Language Processing (EMNLP), pages 142-149.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Scaling Web-based acquisition of entailment relations", |
|
"authors": [ |
|
{ |
|
"first": "Idan", |
|
"middle": [], |
|
"last": "Szpektor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hristo", |
|
"middle": [], |
|
"last": "Tanev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bonaventura", |
|
"middle": [], |
|
"last": "Coppola", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--48", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Szpektor, Idan, Hristo Tanev, Ido Dagan, and Bonaventura Coppola. 2004. Scaling Web-based acquisition of entail- ment relations. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 41-48.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "KURA: a transfer-based lexico-structural paraphrasing engine", |
|
"authors": [ |
|
{ |
|
"first": "Tetsuro", |
|
"middle": [], |
|
"last": "Takahashi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomoya", |
|
"middle": [], |
|
"last": "Iwakura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryu", |
|
"middle": [], |
|
"last": "Iida", |
|
"suffix": "" |
|
},

{

"first": "Atsushi",

"middle": [],

"last": "Fujita",

"suffix": ""

},

{

"first": "Kentaro",

"middle": [],

"last": "Inui",

"suffix": ""

}
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 6th Natural Language Processing Pacific Rim Symposium (NLPRS) Workshop on Automatic Paraphrasing: Theories and Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "37--46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takahashi, Tetsuro, Tomoya Iwakura, Ryu Iida, Atsushi Fu- jita, and Kentaro Inui. 2001. KURA: a transfer-based lexico-structural paraphrasing engine. In Proceedings of the 6th Natural Language Processing Pacific Rim Sym- posium (NLPRS) Workshop on Automatic Paraphrasing: Theories and Applications, pages 37-46.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "The distributional similarity of sub-parses", |
|
"authors": [ |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Weeds", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Weir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--12", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weeds, Julie, David Weir, and Bill Keller. 2005. The dis- tributional similarity of sub-parses. In Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equiv- alence and Entailment, pages 7-12.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Acquisition of lexical paraphrases from texts", |
|
"authors": [ |
|
{ |
|
"first": "Kazuhide", |
|
"middle": [], |
|
"last": "Yamamoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 2nd International Workshop on Computational Terminology (Com-puTerm)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "22--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yamamoto, Kazuhide. 2002. Acquisition of lexical para- phrases from texts. In Proceedings of the 2nd Interna- tional Workshop on Computational Terminology (Com- puTerm), pages 22-28.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "(5) a. kare-wa pasuta-o hashi-de taberu. he-TOP pasta-ACC chopsticks-IMP to eat He eats pasta with chopsticks. b. kare-wa hashi-de pasuta-o taberu.he-TOP chopsticks-IMP pasta-ACC to eat He eats pasta with chopsticks.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"text": "d j i denotes the direct ancestor node of the i-th node c i , where j is the distance from c i ; for example, d 1 i and d 2 i are the parent and grandparent nodes of c i , respectively.", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
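The ancestor notation defined for FIGREF1 maps directly onto a parent-pointer representation of a dependency tree. A minimal sketch in Python, assuming nodes are indexed integers with a `parent` array (the `ancestor` helper and the example tree below are illustrative, not the paper's implementation):

```python
# Minimal sketch of the d^j_i notation: a dependency tree stored as a
# parent-pointer array, where parent[i] is the index of the direct
# ancestor of node c_i (None for the root).
from typing import List, Optional

def ancestor(parent: List[Optional[int]], i: int, j: int) -> Optional[int]:
    """Return the index of d^j_i, the j-th ancestor of node c_i.

    j=1 gives the parent, j=2 the grandparent, and so on; returns
    None if the walk passes beyond the root.
    """
    node: Optional[int] = i
    for _ in range(j):
        if node is None:
            return None
        node = parent[node]
    return node

# Example: a 4-node chain c_0 <- c_1 <- c_2 <- c_3 (c_0 is the root).
parent = [None, 0, 1, 2]
assert ancestor(parent, 3, 1) == 2   # d^1_3: parent of c_3
assert ancestor(parent, 3, 2) == 1   # d^2_3: grandparent of c_3
```

Under this reading, a structured N-gram language model over the tree conditions each node c_i on its ancestors d^1_i, ..., d^{N-1}_i rather than on linear predecessors.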
|
"FIGREF3": { |
|
"num": null, |
|
"text": "MDS of sentence (6).", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td/><td>Source</td><td>All</td><td>BOW</td><td>MOD</td></tr><tr><td>Phrase type</td><td>Ph.</td><td colspan=\"3\">Ph. Yield Ph. Yield Ph. Yield</td></tr><tr><td>N :C:V</td><td>18</td><td>57 3.2</td><td colspan=\"2\">54 3.0 54 3.0</td></tr><tr><td>N1:N2:C:V</td><td colspan=\"4\">57 4,596 80.6 594 10.4 551 9.7</td></tr><tr><td>N :C:V1:V2</td><td colspan=\"4\">54 4,767 88.3 255 4.7 232 4.3</td></tr><tr><td>N :C:Adv:V</td><td>16</td><td>51 3.2</td><td colspan=\"2\">39 2.4 38 2.4</td></tr><tr><td>Adj:N :C:V</td><td>2</td><td>8 4.0</td><td>5 2.5</td><td>5 2.5</td></tr><tr><td>N :C:Adj</td><td colspan=\"2\">53 173 3.3</td><td colspan=\"2\">86 1.6 83 1.6</td></tr><tr><td>Total</td><td colspan=\"4\">200 9,652 48.3 1,033 5.2 963 4.8</td></tr><tr><td colspan=\"5\">corpus (WebCP) over Mainichi is evaluated.</td></tr><tr><td colspan=\"5\">Q4. Which set of features performs better? In ad-</td></tr></table>", |
|
"num": null, |
|
"text": "Statistics of test data (\"Ph.\": # of phrases).", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>NS = 500</td><td/><td>Strict</td><td/><td/><td>Lenient</td><td/></tr><tr><td>Model</td><td>BOW</td><td>MOD</td><td>HAR</td><td>BOW</td><td>MOD</td><td>HAR</td></tr><tr><td>ParLin</td><td colspan=\"6\">78 (39%) 88 (44%) 87 (44%) 116 (58%) 128 (64%) 127 (64%)</td></tr><tr><td>Par skew</td><td colspan=\"6\">81 (41%) 88 (44%) 88 (44%) 120 (60%) 127 (64%) 128 (64%)</td></tr><tr><td colspan=\"7\">MDS, Mainichi 72 (36%) 73 (37%) 76 (38%) 109 (55%) 112 (56%) 114 (57%)</td></tr><tr><td colspan=\"7\">MDS, WebCP 71 (36%) 73 (37%) 72 (36%) 108 (54%) 110 (55%) 113 (57%)</td></tr><tr><td colspan=\"7\">CFDS, Mainichi 79 (40%) 78 (39%) 83 (42%) 120 (60%) 119 (60%) 123 (62%)</td></tr><tr><td colspan=\"7\">CFDS, WebCP 79 (40%) 77 (39%) 80 (40%) 118 (59%) 116 (58%) 118 (59%)</td></tr><tr><td>NS = 1,000</td><td/><td>Strict</td><td/><td/><td>Lenient</td><td/></tr><tr><td>Model</td><td>BOW</td><td>MOD</td><td>HAR</td><td>BOW</td><td>MOD</td><td>HAR</td></tr><tr><td>ParLin</td><td colspan=\"6\">79 (40%) 88 (44%) 88 (44%) 116 (58%) 128 (64%) 129 (65%)</td></tr><tr><td>Par skew</td><td colspan=\"6\">84 (42%) 89 (45%) 89 (45%) 121 (61%) 128 (64%) 128 (64%)</td></tr><tr><td colspan=\"7\">MDS, Mainichi 72 (36%) 75 (38%) 76 (38%) 109 (55%) 114 (57%) 114 (57%)</td></tr><tr><td colspan=\"7\">MDS, WebCP 71 (36%) 74 (37%) 72 (36%) 109 (55%) 111 (56%) 113 (57%)</td></tr><tr><td colspan=\"7\">CFDS, Mainichi 79 (40%) 82 (41%) 83 (42%) 121 (61%) 121 (61%) 122 (61%)</td></tr><tr><td colspan=\"7\">CFDS, WebCP 79 (40%) 78 (39%) 79 (40%) 119 (60%) 116 (58%) 119 (60%)</td></tr></table>", |
|
"num": null, |
|
"text": "Precision for 200 test cases.", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>Model</td><td>Strict</td><td>Lenient</td></tr><tr><td colspan=\"3\">MDS 104 (52%) 141 (71%)</td></tr><tr><td colspan=\"3\">CFDS 108 (54%) 142 (71%)</td></tr></table>", |
|
"num": null, |
|
"text": "Precision of measuring grammaticality.", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"content": "<table><tr><td/><td/><td>Strict</td><td/><td/><td>Lenient</td><td/></tr><tr><td>NS Corpus</td><td>BOW</td><td>MOD</td><td>HAR</td><td>BOW</td><td>MOD</td><td>HAR</td></tr><tr><td colspan=\"7\">500 Mainichi 60 (30%) 68 (34%) 74 (37%) 98 (49%) 109 (55%) 114 (57%)</td></tr><tr><td colspan=\"7\">500 WebCP 57 (28%) 61 (31%) 74 (37%) 94 (47%) 99 (50%) 120 (60%)</td></tr><tr><td colspan=\"7\">1,000 Mainichi 57 (28%) 70 (35%) 74 (37%) 92 (46%) 113 (57%) 116 (58%)</td></tr><tr><td colspan=\"7\">1,000 WebCP 57 (28%) 60 (30%) 72 (36%) 93 (47%) 96 (48%) 116 (58%)</td></tr></table>", |
|
"num": null, |
|
"text": "Precision of similarity factors.", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |