{
"paper_id": "E14-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:40:32.327497Z"
},
"title": "PARADIGM: Paraphrase Diagnostics through Grammar Matching",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Paraphrase evaluation is typically done either manually or through indirect, taskbased evaluation. We introduce an intrinsic evaluation PARADIGM which measures the goodness of paraphrase collections that are represented using synchronous grammars. We formulate two measures that evaluate these paraphrase grammars using gold standard sentential paraphrases drawn from a monolingual parallel corpus. The first measure calculates how often a paraphrase grammar is able to synchronously parse the sentence pairs in the corpus. The second measure enumerates paraphrase rules from the monolingual parallel corpus and calculates the overlap between this reference paraphrase collection and the paraphrase resource being evaluated. We demonstrate the use of these evaluation metrics on paraphrase collections derived from three different data types: multiple translations of classic French novels, comparable sentence pairs drawn from different newspapers, and bilingual parallel corpora. We show that PARADIGM correlates with human judgments more strongly than BLEU on a task-based evaluation of paraphrase quality.",
"pdf_parse": {
"paper_id": "E14-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "Paraphrase evaluation is typically done either manually or through indirect, taskbased evaluation. We introduce an intrinsic evaluation PARADIGM which measures the goodness of paraphrase collections that are represented using synchronous grammars. We formulate two measures that evaluate these paraphrase grammars using gold standard sentential paraphrases drawn from a monolingual parallel corpus. The first measure calculates how often a paraphrase grammar is able to synchronously parse the sentence pairs in the corpus. The second measure enumerates paraphrase rules from the monolingual parallel corpus and calculates the overlap between this reference paraphrase collection and the paraphrase resource being evaluated. We demonstrate the use of these evaluation metrics on paraphrase collections derived from three different data types: multiple translations of classic French novels, comparable sentence pairs drawn from different newspapers, and bilingual parallel corpora. We show that PARADIGM correlates with human judgments more strongly than BLEU on a task-based evaluation of paraphrase quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Paraphrases are useful in a wide range of natural language processing applications. A variety of data-driven approaches have been proposed to generate paraphrase resources (see for a survey of these methods). Few objective metrics have been established to evaluate these resources. Instead, paraphrases are typically evaluated using subjective manual evaluation or through task-based evaluations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Different researchers have used different criteria for manual evaluations. For example, Barzilay and McKeown (2001) evaluated their paraphrases by asking judges whether paraphrases were \"approximately conceptually equivalent. \" Ibrahim et al. (2003) asked judges whether their paraphrases were \"roughly interchangeable given the genre.\" Bannard and Callison-Burch (2005) replaced phrases with paraphrases in a number of sentences and asked judges whether the substitutions \"preserved meaning and remained grammatical.\" The results of these subjective evaluations are not easily reusable.",
"cite_spans": [
{
"start": 88,
"end": 115,
"text": "Barzilay and McKeown (2001)",
"ref_id": "BIBREF6"
},
{
"start": 226,
"end": 249,
"text": "\" Ibrahim et al. (2003)",
"ref_id": "BIBREF24"
},
{
"start": 337,
"end": 370,
"text": "Bannard and Callison-Burch (2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Other researchers have evaluated their paraphrases through task-based evaluations. Lin and Pantel (2001) measured their potential impact on question-answering. Cohn and Lapata (2007) evaluate their applicability in the text-to-text generation task of sentence compression. Zhao et al. (2009) use them to perform sentence compression and simplification and to compute sentence similarity. Several researchers have demonstrated that paraphrases can improve machine translation evaluation (c.f. Kauchak and Barzilay (2006) , Zhou et al. (2006) , Madnani (2010) and Snover et al. (2010) ).",
"cite_spans": [
{
"start": 83,
"end": 104,
"text": "Lin and Pantel (2001)",
"ref_id": "BIBREF30"
},
{
"start": 160,
"end": 182,
"text": "Cohn and Lapata (2007)",
"ref_id": "BIBREF11"
},
{
"start": 273,
"end": 291,
"text": "Zhao et al. (2009)",
"ref_id": "BIBREF47"
},
{
"start": 492,
"end": 519,
"text": "Kauchak and Barzilay (2006)",
"ref_id": "BIBREF25"
},
{
"start": 522,
"end": 540,
"text": "Zhou et al. (2006)",
"ref_id": "BIBREF48"
},
{
"start": 543,
"end": 557,
"text": "Madnani (2010)",
"ref_id": "BIBREF34"
},
{
"start": 562,
"end": 582,
"text": "Snover et al. (2010)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce an automatic evaluation metric called PARADIGM, PARAphrase DIagnostics through Grammar Matching. This metric evaluates paraphrase collections that are represented using synchronous grammars. Synchronous treeadjoining grammars (STAGs), synchronous tree substitution grammars (STSGs), and synchronous context free grammars (SCFGs) are popular formalisms for representing paraphrase rules (Dras, 1997; Cohn and Lapata, 2007; Madnani, 2010; . We present two measures that evaluate these paraphrase grammars using gold standard sentential paraphrases drawn from a monolingual parallel corpus, which have been previously proposed as a good resource for paraphrase evaluation .",
"cite_spans": [
{
"start": 399,
"end": 411,
"text": "(Dras, 1997;",
"ref_id": "BIBREF15"
},
{
"start": 412,
"end": 434,
"text": "Cohn and Lapata, 2007;",
"ref_id": "BIBREF11"
},
{
"start": 435,
"end": 449,
"text": "Madnani, 2010;",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The first of our two proposed metrics calculates how often a paraphrase grammar is able to synchronously parse the sentence pairs in a test set. The second measure enumerates paraphrase rules from a monolingual parallel corpus and calculates the overlap between this reference paraphrase collection, and the paraphrase resource being evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most closely related work is ParaMetric , which is a set of objective measures for evaluating the quality of phrase-based paraphrases. ParaMetric extracts a set of gold-standard phrasal paraphrases from sentential paraphrases that have been manually wordaligned. The sentential paraphrases used in Para-Metric were drawn from a data set originally created to evaluate machine translation output using the BLEU metric. argue that these sorts of monolingual parallel corpora are appropriate for evaluating paraphrase systems, because they are naturally occurring sources of paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and background",
"sec_num": "2"
},
{
"text": "Callison-Burch et al. (2008) calculated three types of metrics in ParaMetric. The manual word alignments were used to calculate how well an automatic paraphrasing technique is able to align the paraphrases in a sentence pair. This measure is limited to a class of paraphrasing techniques that perform alignment (like MacCartney et al. (2008) ). Most methods produce a list of paraphrases for a given input phrase. So calculate two more generally applicable measures by comparing the paraphrases in an automatically extracted resource to gold standard paraphrases extracted via the alignments. These allow a lower-bound on precision and relative recall to be calculated. Liu et al. (2010) introduce the PEM metric as an alternative to BLEU, since BLEU prefers identical paraphrases. PEM uses a second language as a pivot to judge semantic equivalence. This requires use of some bilingual data. Chen and Dolan (2011) suggest using BLEU together with their metric PINC, which uses n-grams to measure lexical difference between paraphrases.",
"cite_spans": [
{
"start": 311,
"end": 341,
"text": "(like MacCartney et al. (2008)",
"ref_id": null
},
{
"start": 670,
"end": 687,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and background",
"sec_num": "2"
},
{
"text": "PARADIGM extends the ideas in ParaMetric from lexical and phrasal paraphrasing techniques to paraphrasing techniques that also generate syntactic templates, such as Zhao et al. (2008) , Cohn and Lapata (2009) , Madnani (2010) and . Instead of extracting gold standard paraphrases using techniques from phrasebased machine translation, we use grammar extraction techniques (Weese et al., 2011) to extract gold standard paraphrase grammar rules from ParaMetric's word-aligned sentential paraphrases. Using these rules, we calculate the overlap between a gold standard paraphrase grammar and an automatically generated paraphrase grammar.",
"cite_spans": [
{
"start": 165,
"end": 183,
"text": "Zhao et al. (2008)",
"ref_id": "BIBREF46"
},
{
"start": 186,
"end": 208,
"text": "Cohn and Lapata (2009)",
"ref_id": "BIBREF12"
},
{
"start": 211,
"end": 225,
"text": "Madnani (2010)",
"ref_id": "BIBREF34"
},
{
"start": 372,
"end": 392,
"text": "(Weese et al., 2011)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and background",
"sec_num": "2"
},
{
"text": "Moreover, like ParaMetric, PARADIGM is able to do further analysis on a restricted class of paraphrasing models. In this case, PARADIGM evaluates how well certain models are able to produce synchronous parses of sentence pairs drawn from monolingual parallel corpora. PARADIGM's different metrics are explained in Section 4, but first we give background on synchronous parsing and synchronous grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work and background",
"sec_num": "2"
},
{
"text": "Synchronous context-free grammars An SCFG (Lewis and Stearns, 1968; Aho and Ullman, 1972 ) is similar to a context-free grammar, except that it generates pairs of strings in correspondence. Each production rule in an SCFG rewrites a non-terminal symbol as a pair of phrases, which may have contain a mix of words and non-terminals symbols. The grammar is synchronous because both phrases in the pair must have an identical set of non-terminals (though they can come in different orders), and corresponding non-terminals must be rewritten using the same rule.",
"cite_spans": [
{
"start": 42,
"end": 67,
"text": "(Lewis and Stearns, 1968;",
"ref_id": "BIBREF27"
},
{
"start": 68,
"end": 88,
"text": "Aho and Ullman, 1972",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous parsing with SCFGs",
"sec_num": "2.1"
},
{
"text": "Much recent work in MT (and, by extension, paraphrasing approaches that use MT machinery) has been focused on choosing an appropriate set of non-terminal symbols. The Hiero model (Chiang, 2007) used a single non-terminal symbol X. Other approaches have read symbols from constituent parses of the training data (Galley et al., 2004; Galley et al., 2006; Zollmann and Venugopal, 2006) . Labels based combinatory categorial grammar (Steedman and Baldridge, 2011) have also been used (Almaghout et al., 2010; . Wu (1997) that the average parse time can be significantly improved by using a two-pass algorithm.",
"cite_spans": [
{
"start": 179,
"end": 193,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF10"
},
{
"start": 311,
"end": 332,
"text": "(Galley et al., 2004;",
"ref_id": "BIBREF17"
},
{
"start": 333,
"end": 353,
"text": "Galley et al., 2006;",
"ref_id": "BIBREF18"
},
{
"start": 354,
"end": 383,
"text": "Zollmann and Venugopal, 2006)",
"ref_id": "BIBREF49"
},
{
"start": 430,
"end": 460,
"text": "(Steedman and Baldridge, 2011)",
"ref_id": "BIBREF41"
},
{
"start": 481,
"end": 505,
"text": "(Almaghout et al., 2010;",
"ref_id": "BIBREF3"
},
{
"start": 508,
"end": 517,
"text": "Wu (1997)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous parsing with SCFGs",
"sec_num": "2.1"
},
{
"text": "The question of whether a source-reference pair is reachable under a model must be addressed in end-to-end discriminative training in MT (Liang et al., 2006a; Gimpel and Smith, 2012) . Auli et al. (2009) showed that only approximately 30% of training pairs are reachable under a phrase-based model. This result is confirmed by our results in paraphrasing.",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "(Liang et al., 2006a;",
"ref_id": "BIBREF28"
},
{
"start": 159,
"end": 182,
"text": "Gimpel and Smith, 2012)",
"ref_id": "BIBREF22"
},
{
"start": 185,
"end": 203,
"text": "Auli et al. (2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous parsing",
"sec_num": null
},
{
"text": "Like ParaMetric, PARADIGM extracts gold standard paraphrases from word-aligned sentential paraphrases. PARADIGM goes further by parsing one of the two input sentences, and uses the parse tree to extract syntactic paraphrase rules, following recent advances in syntactic approaches to machine translation (like Galley et al. (2004) , Zollmann and Venugopal (2006) , and others). Figure 1 shows an example of a parsed sentence pair. From that pair it is possible to extract a wide variety of non-identical paraphrases, which include lexical paraphrases (single word synonyms), phrasal paraphrases, and syntactic paraphrases that include a mix of words and syntactic non-terminal CC \u2192 and while VBP \u2192 want propose VBP \u2192 expect want DT \u2192 some some people S \u2192 him to step down him to resign VP \u2192 step down resign VP \u2192 to step down to resign VP \u2192 want to impeach him propose to impeach him VP \u2192 want VP propose VP VP \u2192 want to impeach PRP propose to impeach PRP VP \u2192 VBP him to step down VBP him to resign S \u2192 PRP to step down PRP to resign Figure 2 : Four examples each of lexical, phrasal, and syntactic paraphrases that can be extracted from the sentence pair in Figure 1 . symbols. Figure 2 shows a set of four examples for each type that can be extracted from Figure 1 . These rules are formulated as SCFG rules, with a syntactic left-hand nonterminal symbol and two English right-hand sides representing the paraphrase. The examples above include nonterminal symbols that represent whole syntactic constituents. It is also possible to create more complex non-terminal symbols that describe CCG-like non-constituent phrases. For example, we could extract a rule like S/VP \u2192 <NNS want him to, NNS expect him to> Using constituents only, we are able to extract 45 paraphrase rules from Figure 1 . Adding CCG-style slashed constituents yields 66 additional rules.",
"cite_spans": [
{
"start": 310,
"end": 330,
"text": "Galley et al. (2004)",
"ref_id": "BIBREF17"
},
{
"start": 333,
"end": 362,
"text": "Zollmann and Venugopal (2006)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [
{
"start": 378,
"end": 386,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1035,
"end": 1043,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1160,
"end": 1168,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1180,
"end": 1188,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1259,
"end": 1267,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1783,
"end": 1791,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Paraphrase grammar extraction",
"sec_num": "3"
},
{
"text": "By considering a paraphrase model as a synchronous context-free grammar, we propose to measure the model's goodness using the following criteria:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARADIGM: Evaluating paraphrase grammars",
"sec_num": "4"
},
{
"text": "1. What percentage of sentential paraphrases are reachable under the model? That is, given a collection of sentence pairs (a i , b i ) and an SCFG G, where each pair of a and b are sentential paraphrases, how many of the pairs are in the language of G? We evaluate this by producing a synchronous parse for the pairs, as shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 338,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "PARADIGM: Evaluating paraphrase grammars",
"sec_num": "4"
},
{
"text": "2. Given a collection of gold-standard paraphrase rules, how many of those paraphrases exist as rules in G? To calculate this, we look at the overlap of grammars (described in Figure 3 : We measure the goodness of paraphrase grammars by determine how often they can be used to synchronously parse gold-standard sentential paraphrases. Note we do not require the synchronous derivation to match a gold-standard parse tree. Section 4.2 below), examining different categories of rules and thresholding based on how frequently the rule was used in the gold standard data.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "PARADIGM: Evaluating paraphrase grammars",
"sec_num": "4"
},
{
"text": "These criteria correspond to properties that we think are desirable in paraphrase models. They also have the advantage that they do not depend on human judgments and so can be calculated automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PARADIGM: Evaluating paraphrase grammars",
"sec_num": "4"
},
{
"text": "Paraphrase grammars should be able to explain sentential paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous parse coverage",
"sec_num": "4.1"
},
{
"text": "For example, Figure 3 shows a sentence pair that is synchronously parseable by one paraphrase grammar. In general, we say that the more such sentence pairs that a paraphrase grammar can synchronously parse, the better it is.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 22,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synchronous parse coverage",
"sec_num": "4.1"
},
{
"text": "The synchronous derivation allows us to draw inferences about parts of the sentence pair that are in correspondence; for instance, in Figure 3 , violent unrest corresponds to riots and mohammad corresponds to the islamic prophet.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synchronous parse coverage",
"sec_num": "4.1"
},
{
"text": "We measure grammar overlap by comparing the sets of production rules for two different grammars. If the grammars contain rules that are equivalent, the equivalent rules are in the grammars' overlap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar overlap defined",
"sec_num": "4.2"
},
{
"text": "We consider two types of overlapping, which we will call strict and non-strict overlap. For strict overlap, we say that two rules are equivalent if they are identical, that is, if they have the same left-hand side non-terminal symbol, their source sides are identical strings, and their target sides are identical strings. (This includes identical indexing on non-terminal symbols on the right hand sides of the rule.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar overlap defined",
"sec_num": "4.2"
},
{
"text": "To calculate non-strict overlap, we ignore the identities of non-terminal symbols in the left-hand and right-hand sides of the rules. That is, two rules are considered equivalent if they are identical after all the non-terminal symbols have been replaced by one equivalent symbol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar overlap defined",
"sec_num": "4.2"
},
{
"text": "For example, in non-strict overlap, the syntactic rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar overlap defined",
"sec_num": "4.2"
},
{
"text": "N P \u2192 N 1 's N 2 ; the N 2 of N 1 would match the Hiero rule X \u2192 X 1 's X 2 ; the X 2 of X 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar overlap defined",
"sec_num": "4.2"
},
{
"text": "If we are considering two Hiero grammars, strict and non-strict intersection are the same operation since they only have on non-terminal X.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar overlap defined",
"sec_num": "4.2"
},
{
"text": "Callison-Burch et al. 2008use the notion of overlap between two paraphrase sets to define two metrics, precision lower bound and relative recall. These are calculated the same way as standard precision and recall. Relative recall is qualified as \"relative\" because it is calculated on a potentially incomplete set of gold standard paraphrases. There may exist valid paraphrases that do not occur in that set. Similarly, only a lower bound on precision can be calculated because the candidate set may contain valid paraphrases that do not occur in the gold standard set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Precision lower bound and relative recall",
"sec_num": "4.3"
},
{
"text": "We extracted paraphrase grammars from a variety of different data sources, including four collections of sentential paraphrases. These included:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 Multiple translation corpora that were compiled by the Linguistics Data Consortium (LDC) for the purposes of evaluating machine translation quality with the BLEU metric. We collected eight LDC corpora that all have multiple English Barzilay and McKeown (2001) . MSR data is from and . ParaMertic data is from .",
"cite_spans": [
{
"start": 234,
"end": 261,
"text": "Barzilay and McKeown (2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 Classic French Literature that were translated by different translators, and which were compiled by Barzilay and McKeown (2001) .",
"cite_spans": [
{
"start": 102,
"end": 129,
"text": "Barzilay and McKeown (2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 The MSR Paraphrase corpus which consists of sentence pairs drawn from comparable news articles drawn from different web sites in the same date rate. The sentence pairs were aligned heuristically aligned and then manually judged to be paraphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "\u2022 The ParaMetric data which consists of 900 manually word-aligned sentence pairs collected by . 300 sentence pairs were drawn from each of the 3 above sources. We use this to extract the gold standard paraphrase grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "The size of the data from each source is summarized in Table 1 . For each dataset, after tokenizing and normalizing, we parsed one sentence in each English pair using the Berkeley constituency parser (Liang et al., 2006b) . We then obtained word-level alignments, either using GIZA++ (Och and Ney, 2000) or, in the case of ParaMetric, using human annotations.",
"cite_spans": [
{
"start": 200,
"end": 221,
"text": "(Liang et al., 2006b)",
"ref_id": "BIBREF29"
},
{
"start": 284,
"end": 303,
"text": "(Och and Ney, 2000)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We used the Thrax grammar extractor (Weese et al., 2011) to extract Hiero-style and syntactic SCFGs from the paraphrase data. In the syntactic setting we allowed labeling of rules with either constituent labels or CCG-style slashed categories. The size of the extracted grammars is shown in Table 2 .",
"cite_spans": [
{
"start": 36,
"end": 56,
"text": "(Weese et al., 2011)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 291,
"end": 298,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We also used version 0.2 of the SCFG-based paraphrase collection known as the ParaPhrase DataBase or PPDB . The PPDB paraphrases were extracted using the pivoting technique ( 2005) on bilingual parallel corpora containing over 42 million sentence pairs. The PPDB release includes a tool for pruning the grammar to a smaller size by retaining only high-precision paraphrases. We include PPDB grammars for several different pruning settings in our analysis.",
"cite_spans": [
{
"start": 173,
"end": 174,
"text": "(",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.1"
},
{
"text": "We calculated our two metrics for each of the grammars listed in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 65,
"end": 72,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "To perform synchronous parsing, we used the Joshua decoder (Post et al., 2013) , which includes an implementation of Dyer's two-pass parsing algorithm (2010). After splitting the LDC data into 10 equal pieces, we trained paraphrase models on nine-tenths of the data and parsed the other tenth.",
"cite_spans": [
{
"start": 59,
"end": 78,
"text": "(Post et al., 2013)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "Grammars trained from other sources (the MSR corpus, French literature domain, and PPDB) were also evaluated on the held-out tenth of LDC data. Note that the LDC data contains 4 independent translations of each foreign sentence, giving 6 possible (unordered) paraphrase pairs. We evaluated coverage in two ways (corresponding to the two columns in Table 6 ): first, considering all possible sentence pairs from the test data, how many were able to be parsed?",
"cite_spans": [],
"ref_spans": [
{
"start": 348,
"end": 355,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "Secondly, if we consider all the English sentences that correspond to one foreign sentence, how many foreign sentences had at least one pair of English translations that could be parsed synchronously?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "For grammar overlap, we perform both strict and non-strict calculations (see Section 4.2) against a syntactic grammar derived from handaligned ParaMetric data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "In Table 5 we see a breakdown of the types of paraphrases in the overlap for three of the models. Although the PPDB-xl overlap is much larger than the other two, about 80% of its rules are syntactic transformations. The LDC and MSR models have a much larger proportion of phrasal and lexical rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Grammar overlap results",
"sec_num": "5.3"
},
{
"text": "Next we will look at the grammar overlap num- bers presented in Table 3 and Table 4 . Note the non-intuitive result that for some grammars (notably PPDB), the non-strict overlap is smaller than the strict overlap. This is because rules with different non-terminals only count once in the non-strict overlap; for example, in PPDBsmall, NN \u2192 answer ; reply VB \u2192 answer ; reply count as separate entries when calculating strictly, but when ignoring non-terminals, they count as only one type of rule.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 83,
"text": "Table 3 and Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Grammar overlap results",
"sec_num": "5.3"
},
{
"text": "The fact that the non-strict overlaps are smaller means that there must be many rules in PPDB that are identical except for non-terminal labels. Figure 4 shows relative recall and precision lower bound calculated for various sizes of PPDB relative to the ParaMetric grammar. The x-axis represents the size of the grammar as we vary from keeping only the most probable rules to including less probable ones. Restricting to high probability rules makes the grammar much smaller, resulting in higher precision. Table 6 shows the percentage of sentence pairs that were reachable in a held-out portion of the LDC multiple-translation data.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 153,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 508,
"end": 515,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grammar overlap results",
"sec_num": "5.3"
},
{
"text": "We find that a grammar trained on LDC data vastly outperforms data from any other domain. This is not surprising -we shouldn't expect a model trained on French literature to be able to Grammar % (all) % (any) LDC Hiero 9.5 33.0 Lit. Hiero 1.8 9.6 MSR Hiero 1.7 9.2 LDC Syntax 9.1 30.2 Lit. Syntax 2.0 10.7 MSR Syntax 1.9 10.4 PM Syntax 1.7 9.8 PPDB-v0.2-small 1.8 3.3 PPDB-v0.2-large 2.5 4.5 PPDB-v0.2-xl 3.5 6.2 Table 6 : Parse coverage on held-out LDC data. The all column considers every possible sentential paraphrase in the test set. The any column considers a sentence parsed if any of its paraphrases was able to parsed.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synchronous parsing results",
"sec_num": "5.5"
},
{
"text": "handle some of the vocabulary found in news stories that were originally in Arabic or Chinese. The PPDB data outperforms both French literature and MSR models if we look all possible sentence pairs from test data (the column labeled \"all\" in the table). However, when we consider whether any pair from a set of 4 translations can be translated, the PPDB models do not do as well. This implies that PPDB tends to be able to reach many pairs from the same set of translations, but there are many translations that it cannot handle at all. By contrast, the literature-and MSR-trained models can reach at least one pair from 10% of the test examples, even though the absolute number of pairs they can reach is lower. Table 2 shows that the PPDB-derived grammars are much larger than the syntactic models derived from other domains. It may seem surprising that they should perform worse, but adding more rules to the grammar just by varying non-terminal labels isn't likely to help overall parse coverage. This suggests a new pruning method: keep only the top k label variations for each rule type. If we compare the syntactic models to the Hiero models trained from the same data, we see that their overall reachability performance is not very different. This implies that paraphrases can be annotated with linguistic information without necessarily hurting their ability to explain particular sentence pairs. Contrast this result, with, for example, those of Koehn et al. (2003) , showing that restricting translation models to only syntactic phrases hurts overall translation performance. The comparable performance between Hiero and syntactic models seems to hold regardless of domain.",
"cite_spans": [
{
"start": 1456,
"end": 1475,
"text": "Koehn et al. (2003)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 713,
"end": 720,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Synchronous parsing results",
"sec_num": "5.5"
},
{
"text": "To validate PARADIGM, we calculated its correlation with human judgments of paraphrase quality on the sentence compression text-to-text generation task, which has been used to evaluate paraphrase grammars in previous research (Cohn and Lapata, 2007; Zhao et al., 2009; . We created sentence compression systems for five of the paraphrase grammars described in Section 5.1. We followed the methodology outlined by and did the following:",
"cite_spans": [
{
"start": 226,
"end": 249,
"text": "(Cohn and Lapata, 2007;",
"ref_id": "BIBREF11"
},
{
"start": 250,
"end": 268,
"text": "Zhao et al., 2009;",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "\u2022 Each paraphrase grammar was augmented with an appropriate set of rule-level features that capture information pertinent to the task. In this case, the paraphrase rules were given two additional features that shows how the number of words and characters changed after applying the rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "\u2022 Similarly to how the weights of the models are set using minimum error rate training in statistical machine translation, the weights for each of the paraphrase grammars using the PRO tuning method (Hopkins and May, 2011) .",
"cite_spans": [
{
"start": 199,
"end": 222,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "\u2022 Instead of optimizing to the BLEU metric, as is done in machine translation, we optimized to PR\u00c9CIS, a metric developed for sentence compression that adapts BLEU so that it includes a \"verbosity penalty\" to encourage the compression systems to produce shorter output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "\u2022 We created a development set with sentence compressions by selecting 1000 pairs of sentences from the multiple translation corpus where two English translations of the same foreign sentences differed in each other by a length ratio of 0.67-0.75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "\u2022 We decoded a test set of 1000 sentences using each of the grammars and its optimized weights with the Joshua decoder (Ganitkevitch et al., 2012) . The selected in the same fashion as the dev sentences, so each one had a human-created reference compression.",
"cite_spans": [
{
"start": 119,
"end": 146,
"text": "(Ganitkevitch et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "We conducted a human evaluation to judge the meaning and grammaticality of the sentence compressions derived from each paraphrase grammar. We presented workers on Mechanical Turk with the input sentence to the compression sentence (the long sentence), along with 5 shortened outputs from our compression systems. To ensure that workers were producing reliable judgments we also presented them with a positive control (a reference compression written by a person) and a negative controls (a compressed output that was generated by randomly deleted words). We excluded judgments from workers who did not perform well on the positive and negative controls.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "Meaning and grammaticality were scored on 5-point scales where 5 is best. These human scores were averaged over 2000 judgments (1000 sentences x 2 annotators) for each system. The systems' outputs were then scored with BLEU, PR\u00c9CIS, and their paraphrase grammars were scored PARADIGM's relative recall and precision lower-bound estimates. For each grammar, we also calculated the average length of parseable sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "We calculated the correlation between the human judgements and the automatic scores, using Spearman's rank correlation coefficient \u03c1. This is methodology is the same that is used to quantify the goodness of automatic evaluation metrics in the machine translation literature (Przybocki et al., 2008; Callison-Burch et al., 2010) . The possible values of \u03c1 range between 1 (where all systems are ranked in the same order) and \u22121 (where the systems are ranked in the reverse order). Thus an automatic evaluation metric with a higher absolute value for \u03c1 is making predictions that are more similar to the human judgments than an automatic evaluation metric with a lower absolute \u03c1. Table 7 shows that our PARADIGM scores correlate more highly with human judgments than either BLEU or PR\u00c9CIS for the 5 systems in our evaluation. This suggests that it may be a better predictor of the goodness of paraphrase grammars than MT metrics, when the paraphrase grammars are used for text-to-text generation tasks. Table 7 : The correlation (Spearman's \u03c1) of different automatic evaluation metrics with human judgments of paraphrase quality for the text-totext generation task of sentence compression.",
"cite_spans": [
{
"start": 274,
"end": 298,
"text": "(Przybocki et al., 2008;",
"ref_id": "BIBREF38"
},
{
"start": 299,
"end": 327,
"text": "Callison-Burch et al., 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 679,
"end": 686,
"text": "Table 7",
"ref_id": null
},
{
"start": 1002,
"end": 1009,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Correlation with human judgments",
"sec_num": "6"
},
{
"text": "We have introduced two new metrics for evaluating paraphrase grammars, and looked at several models from a variety of domains. Using these metrics we can perform a variety of analyses about SCFG-based paraphrase models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "\u2022 Automatically-extracted grammars can parse a small fraction of held-out data (\u226430%). This is comparable to results in MT (Auli et al., 2009) .",
"cite_spans": [
{
"start": 123,
"end": 142,
"text": "(Auli et al., 2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "\u2022 In-domain training data is necessary in order to parse held-out data. A model trained on newswire data parsed 30% of held-out newswire sentence pairs, versus to <10% for literature or parliamentary data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "\u2022 SCFGs with syntactic labels perform just as well as simpler models with a single nonterminal label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "\u2022 Automatically-extracted syntactic grammars tend to have a reasonable overlap with grammars derived from human-aligned data, including more 45% of the gold-standard grammar's paraphrase rules that occurred at least twice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "\u2022 We showed that PARADIGM more strongly correlates with human judgments of the meaning and grammaticality of paraphrases produced by sentence compression systems than standard automatic evaluation measures like BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "PARADIGM will help researchers developing paraphrase resources to perform similar diagnostics on their models, and quickly evaluate their systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "7"
},
{
"text": "LDC Catalog numbers LDC2002T01, LDC2005T05, LDC2010T10, LDC2010T11, LDC2010T12, LDC2010T14, LDC2010T17, and LDC2010T23.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This material is based on research sponsored by the NSF under grant IIS-1249516 and DARPA under agreement number FA8750-13-2-0017 (the DEFT program). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "7%) 5,055 (24.5%) Lit",
"authors": [],
"year": null,
"venue": "LDC Syntax",
"volume": "58",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MSR Hiero 58,970 (29.4%) 6,741 (32.6%) LDC Syntax 37,231 (11.7%) 5,055 (24.5%) Lit. Syntax 19,530 (9.7%) 3,121 (15.1%) MSR Syntax 28,016 (14.0%) 3,564 (17.2%)",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Theory of Parsing, Translation, and Compiling",
"authors": [
{
"first": "Alfred",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfred V. Aho and Jeffrey D. Ullman. 1972. The The- ory of Parsing, Translation, and Compiling. Pren- tice Hall.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "CCG augmented hierarchical phrase-based machine translation",
"authors": [
{
"first": "Hala",
"middle": [],
"last": "Almaghout",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Andy",
"middle": [
"Way"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of IWSLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hala Almaghout, Jie Jiang, and Andy Way. 2010. CCG augmented hierarchical phrase-based machine translation. In Proc. of IWSLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A systematic analysis of translation model search spaces",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Auli, Adam Lopez, Hieu Hoang, and Philipp Koehn. 2009. A systematic analysis of translation model search spaces. In Proc. WMT.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Paraphrasing with bilingual parallel corpora",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Pro- ceedings of ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extracting paraphrases from a parallel corpus",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [
"R"
],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Kathleen R. McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proc. of ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "ParaMetric: An automatic evaluation metric for paraphrasing",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Trevor Cohn, and Mirella Lap- ata. 2008. ParaMetric: An automatic evaluation metric for paraphrasing. In Proc. of COLING.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Kay",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "Omar",
"middle": [
"F"
],
"last": "Zaidan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourth Workshop on Statistical Machine Translation (WMT10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F. Zaidan. 2010. Findings of the 2010 joint workshop on sta- tistical machine translation and metrics for machine translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation (WMT10).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Collecting highly parallel data for paraphrase evaluation",
"authors": [
{
"first": "L",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David L. Chen and William Dolan. 2011. Collect- ing highly parallel data for paraphrase evaluation. In Proc. of ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based trans- lation. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Large margin synchronous generation and its application to sentence compression",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoLing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn and Mirella Lapata. 2007. Large mar- gin synchronous generation and its application to sentence compression. In Proceedings of EMNLP- CoLing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sentence compression as tree transduction",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of Artificial Intelligence Research (JAIR)",
"volume": "34",
"issue": "",
"pages": "637--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn and Mirella Lapata. 2009. Sentence com- pression as tree transduction. Journal of Artificial Intelligence Research (JAIR), 34:637-674.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Constructing corpora for the development and evaluation of paraphrase systems",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn, Chris Callison-Burch, and Mirella Lap- ata. 2008. Constructing corpora for the develop- ment and evaluation of paraphrase systems. Com- putational Linguistics, 34(4).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unsupervised construction of large paraphrases corpora: Exploiting massively parallel news sources",
"authors": [
{
"first": "William",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrases cor- pora: Exploiting massively parallel news sources. In Proc. of COLING.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Representing paraphrases using synchronous tree adjoining grammars",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dras",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "516--518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Dras. 1997. Representing paraphrases using synchronous tree adjoining grammars. In Proceed- ings of the 35th Annual Meeting of the Associa- tion for Computational Linguistics, pages 516-518, Madrid, Spain, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Two monolingual parses are better than one (synchronous parse)",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2010,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "263--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer. 2010. Two monolingual parses are bet- ter than one (synchronous parse). In Proceedings of HLT/NAACL, pages 263-266. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "What's in a translation rule",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004: Main Proceedings",
"volume": "",
"issue": "",
"pages": "273--280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In HLT-NAACL 2004: Main Proceedings, pages 273-280.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Scalable inference and training of context-rich syntactic translation models",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Graehl",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Deneefe",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve Deneefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. of ACL, pages 961-968.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning sentential paraphrases from bilingual parallel corpora for text-to-text generation",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Chris Callison-Burch, Courtney Napoles, and Benjamin Van Durme. 2011. Learn- ing sentential paraphrases from bilingual parallel corpora for text-to-text generation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Joshua 4.0: Packing, pro, and paraphrases",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "283--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Yuan Cao, Jonathan Weese, Matt Post, and Chris Callison-Burch. 2012. Joshua 4.0: Packing, pro, and paraphrases. In Proceedings of the Seventh Workshop on Statistical Machine Trans- lation, pages 283-291, Montr\u00e9al, Canada, June. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "PPDB: The paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proc. NAACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Structured ramp loss minimization for machine translation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel and Noah A. Smith. 2012. Structured ramp loss minimization for machine translation. In Proc. of NAACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Tuning as ranking",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1352--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Process- ing, pages 1352-1362, Edinburgh, Scotland, UK., July. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Extracting structural paraphrases from aligned monolingual corpora",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Ibrahim",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the Second International Workshop on Paraphrasing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Ibrahim, Boris Katz, and Jimmy Lin. 2003. Ex- tracting structural paraphrases from aligned mono- lingual corpora. In Proc. of the Second International Workshop on Paraphrasing.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Paraphrasing for automatic evaluation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Kauchak",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Kauchak and Regina Barzilay. 2006. Para- phrasing for automatic evaluation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "NAACL '03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL '03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 48-54, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Syntax-directed transduction",
"authors": [
{
"first": "M",
"middle": [],
"last": "Philip",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"E"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stearns",
"suffix": ""
}
],
"year": 1968,
"venue": "Journal of the ACM",
"volume": "15",
"issue": "3",
"pages": "465--488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip M. Lewis and Richard E. Stearns. 1968. Syntax-directed transduction. Journal of the ACM, 15(3):465-488.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "An end-to-end discriminative approach to machine translation",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Bouchard-C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Alexandre Bouchard-C\u00f4t\u00e9, Dan Klein, and Ben Taskar. 2006a. An end-to-end discrimi- native approach to machine translation. In Proc. of ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Alignment by agreement",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "104--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang, Ben Taskar, and Dan Klein. 2006b. Alignment by agreement. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 104-111, New York City, USA, June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Discovery of inference rules from text",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2001,
"venue": "Natural Language Engineering",
"volume": "7",
"issue": "3",
"pages": "343--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules from text. Natural Language Engi- neering, 7(3):343-360.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "PEM: a paraphrase evaluation metric exploiting parallel texts",
"authors": [
{
"first": "Chang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Dahlmeier",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2010. PEM: a paraphrase evaluation metric exploit- ing parallel texts. In Proc. of EMNLP.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A phrase-based alignment model for natural language inference",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Maccartney",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "802--811",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill MacCartney, Michel Galley, and Christopher D. Manning. 2008. A phrase-based alignment model for natural language inference. In Proceedings of the 2008 Conference on Empirical Methods in Nat- ural Language Processing, pages 802-811, Hon- olulu, Hawaii, October. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Generating phrasal and sentential paraphrases: A survey of data-driven methods",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
}
],
"year": 2010,
"venue": "Computational Linguistics",
"volume": "36",
"issue": "3",
"pages": "341--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani and Bonnie Dorr. 2010. Generat- ing phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36(3):341-388.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "The Circle of Meaning: From Translation to Paraphrasing and Back",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani. 2010. The Circle of Meaning: From Translation to Paraphrasing and Back. Ph.D. the- sis, Department of Computer Science, University of Maryland College Park.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Evaluating sentence compression: Pitfalls and suggested remedies",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Monolingual Text-To-Text Generation",
"volume": "",
"issue": "",
"pages": "91--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Courtney Napoles, Benjamin Van Durme, and Chris Callison-Burch. 2011. Evaluating sentence com- pression: Pitfalls and suggested remedies. In Pro- ceedings of the Workshop on Monolingual Text-To- Text Generation, pages 91-97, Portland, Oregon, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Improved statistical alignment models",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Och and Hermann Ney. 2000. Improved sta- tistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Com- putational Linguistics, pages 440-447, Hong Kong, China, October.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Joshua 5.0: Sparser, better, faster, server",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Orland",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post, Juri Ganitkevitch, Luke Orland, Jonathan Weese, Yuan Cao, and Chris Callison-Burch. 2013. Joshua 5.0: Sparser, better, faster, server. In Proc. of WMT.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Metrics for MAchine TRanslation\" challenge (Metrics-MATR08)",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Przybocki",
"suffix": ""
},
{
"first": "Kay",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Bronsart",
"suffix": ""
}
],
"year": 2008,
"venue": "AMTA-2008 workshop on Metrics for Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Przybocki, Kay Peterson, and Sebastian Bron- sart. 2008. Official results of the NIST 2008 \"Met- rics for MAchine TRanslation\" challenge (Metrics- MATR08). In AMTA-2008 workshop on Metrics for Machine Translation.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Monlingual machine translation for paraphrase generation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Chris Brockett, and William Dolan. 2004. Monlingual machine translation for paraphrase gen- eration. In Proc. of EMNLP.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Ter-plus: paraphrase, semantic, and alignment enhancements to translation edit rate",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
}
],
"year": 2010,
"venue": "Machine Translation",
"volume": "23",
"issue": "2-3",
"pages": "117--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2010. Ter-plus: paraphrase, se- mantic, and alignment enhancements to translation edit rate. Machine Translation, 23(2-3):117-127.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Combinatory categorial grammar",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2011,
"venue": "Non-Transformational Syntax",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman and Jason Baldridge. 2011. Combi- natory categorial grammar. In Robert Borsley and Kersti B\u00f6rjars, editors, Non-Transformational Syn- tax. Wiley-Blackwell.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Instance-based evaluation of entailment rule acquisition",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Eyal",
"middle": [],
"last": "Shnarch",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor, Eyal Shnarch, and Ido Dagan. 2007. Instance-based evaluation of entailment rule acqui- sition. In Proc. of ACL.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Joshua 3.0: Syntax-based machine translation with the thrax grammar extractor",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "478--484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Weese, Juri Ganitkevitch, Chris Callison- Burch, Matt Post, and Adam Lopez. 2011. Joshua 3.0: Syntax-based machine translation with the thrax grammar extractor. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 478-484, Edinburgh, Scotland, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Using categorial grammar to label translation rules",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Weese",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Weese, Chris Callison-Burch, and Adam Lopez. 2012. Using categorial grammar to label translation rules. In Proc. of WMT.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Pivot approach for extracting paraphrase patterns from bilingual corpora",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL/HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqi Zhao, Haifeng Wang, Ting Liu, and Sheng Li. 2008. Pivot approach for extracting paraphrase pat- terns from bilingual corpora. In Proceedings of ACL/HLT.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Application-driven statistical paraphrase generation",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqi Zhao, Xiang Lan, Ting Liu, and Sheng Li. 2009. Application-driven statistical paraphrase generation. In Proceedings of ACL.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Paraeval: Using paraphrases to evaluate summaries automatically",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Dragos",
"middle": [],
"last": "Stefan Munteanu",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT/NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Zhou, Chin-Yew Lin, Dragos Stefan Munteanu, and Eduard Hovy. 2006. Paraeval: Using para- phrases to evaluate summaries automatically. In Proceedings of HLT/NAACL.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Syntax augmented machine translation via chart parsing",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Zollmann",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Venugopal",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings on the Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "138--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Zollmann and Ashish Venugopal. 2006. Syn- tax augmented machine translation via chart parsing. In Proceedings on the Workshop on Statistical Ma- chine Translation, pages 138-141, New York City, June. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "introduced a parsing algorithm using a variant of CKY. Dyer recently showed (2010) PARADIGM extracts lexical, phrasal and syntactic paraphrases from parsed, word-aligned sentence pairs.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Precision lower bound and relative recall when overlapping different sizes of PPDB with the syntactic ParaMetric grammar.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Amount of English-English parallel data. LDC data has 4 parallel translations per sentence.",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "Size of various paraphrase grammars.",
"html": null,
"content": "<table><tr><td>Grammar</td><td>freq. \u2265 1</td><td>freq. \u2265 2</td></tr><tr><td>ParaMetric Syntax</td><td>317,772</td><td>21,709</td></tr><tr><td>LDC Hiero</td><td>5,840 (1.8%)</td><td>416 (1.9%)</td></tr><tr><td>Lit. Hiero</td><td>6,152 (1.9%)</td><td>359 (1.7%)</td></tr><tr><td>MSR Hiero</td><td>10,012 (3.2%)</td><td>315 (1.5%)</td></tr><tr><td>LDC Syntax</td><td>48,833 (15.3%)</td><td>7,748 (35.6%)</td></tr><tr><td>Lit. Syntax</td><td>14,431 (4.5%)</td><td>1,960 (9.0%)</td></tr><tr><td>MSR Syntax</td><td>21,197 (6.7%)</td><td>2,053 (9.5%)</td></tr><tr><td>PPDB-v0.2-small</td><td>15,831 (5.0%)</td><td>5,673 (26.1%)</td></tr><tr><td>PPDB-v0.2-large</td><td>31,277 (9.8%)</td><td>8,245 (37.9%)</td></tr><tr><td>PPDB-v0.2-xl</td><td colspan=\"2\">47,720 (15.0%) 10,049 (46.2%)</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": "Size of strict overlap (number of rules and % of the gold standard) of each grammar with a syntactic grammar derived from ParaMetric. freq. \u2265 2 means we first removed all rules that appeared only once from the ParaMetric grammar. The number in parentheses shows the percentage of ParaMetric rules that are present in the overlap.",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Size of non-strict overlap of each grammar with the syntactic grammar derived from ParaMetric. The number in parentheses shows the percentage of ParaMetric rules that are present in the overlap.",
"html": null,
"content": "<table><tr><td>Grammar</td><td>syntactic</td><td>phrasal</td><td>lexical</td></tr><tr><td>ParaMetric</td><td>238,646</td><td>73,320</td><td>5,806</td></tr><tr><td>LDCSyn</td><td>36,375 (15%)</td><td colspan=\"2\">8,806 (12%) 3,652 (62%)</td></tr><tr><td>MSRSyn</td><td colspan=\"3\">7,734 (3%) 11,254 (15%) 2,209 (38%)</td></tr><tr><td>PPDB-xl</td><td>40,822 (17%)</td><td colspan=\"2\">3,765 (5%) 3,142 (54%)</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF6": {
"text": "",
"html": null,
"content": "<table><tr><td>: Number of paraphrases of each type</td></tr><tr><td>in each grammar's strict overlap with the syntac-</td></tr><tr><td>tic ParaMetric grammar. Numbers in parentheses</td></tr><tr><td>show the percentage of ParaMetric rules of each</td></tr><tr><td>type.</td></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}