|
{ |
|
"paper_id": "Q14-1018", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:11:14.399395Z" |
|
}, |
|
"title": "Back to Basics for Monolingual Alignment: Exploiting Word Similarity and Contextual Evidence", |
|
"authors": [ |
|
{ |
|
"first": "Md", |
|
"middle": [], |
|
"last": "Arafat Sultan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Alabama at Birmingham", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bethard", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Alabama at Birmingham", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Tamara", |
|
"middle": [], |
|
"last": "Sumner", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Alabama at Birmingham", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present a simple, easy-to-replicate monolingual aligner that demonstrates state-of-the-art performance while relying on almost no supervision and a very small number of external resources. Based on the hypothesis that words with similar meanings represent potential pairs for alignment if located in similar contexts, we propose a system that operates by finding such pairs. In two intrinsic evaluations on alignment test data, our system achieves F 1 scores of 88-92%, demonstrating 1-3% absolute improvement over the previous best system. Moreover, in two extrinsic evaluations our aligner outperforms existing aligners, and even a naive application of the aligner approaches state-ofthe-art performance in each extrinsic task.", |
|
"pdf_parse": { |
|
"paper_id": "Q14-1018", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present a simple, easy-to-replicate monolingual aligner that demonstrates state-of-the-art performance while relying on almost no supervision and a very small number of external resources. Based on the hypothesis that words with similar meanings represent potential pairs for alignment if located in similar contexts, we propose a system that operates by finding such pairs. In two intrinsic evaluations on alignment test data, our system achieves F 1 scores of 88-92%, demonstrating 1-3% absolute improvement over the previous best system. Moreover, in two extrinsic evaluations our aligner outperforms existing aligners, and even a naive application of the aligner approaches state-ofthe-art performance in each extrinsic task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Monolingual alignment is the task of discovering and aligning similar semantic units in a pair of sentences expressed in a natural language. Such alignments provide valuable information regarding how and to what extent the two sentences are related. Consequently, alignment is a central component of a number of important tasks involving text comparison: textual entailment recognition, textual similarity identification, paraphrase detection, question answering and text summarization, to name a few.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The high utility of monolingual alignment has spawned significant research on the topic in the recent past. Major efforts that have treated alignment as a standalone problem (MacCartney et al., 2008; Thadani and McKeown, 2011; Yao et al., 2013a) are primarily supervised, thanks to the manually aligned corpus with training and test sets from Microsoft Research (Brockett, 2007) . Primary concerns of such work include both quality and speed, due to the fact that alignment is frequently a component of larger NLP tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 199, |
|
"text": "(MacCartney et al., 2008;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 226, |
|
"text": "Thadani and McKeown, 2011;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 227, |
|
"end": 245, |
|
"text": "Yao et al., 2013a)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 378, |
|
"text": "(Brockett, 2007)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Driven by similar motivations, we seek to devise a lightweight, easy-to-construct aligner that produces high-quality output and is applicable to various end tasks. Amid a variety of problem formulations and ingenious approaches to alignment, we take a step back and examine closely the effectiveness of two frequently made assumptions: 1) Related semantic units in two sentences must be similar or related in their meaning, and 2) Commonalities in their semantic contexts in the respective sentences provide additional evidence of their relatedness (MacCartney et al., 2008; Thadani and McKeown, 2011; Yao et al., 2013a; Yao et al., 2013b) . Alignment, based solely on these two assumptions, reduces to finding the best combination of pairs of similar semantic units in similar contexts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 549, |
|
"end": 574, |
|
"text": "(MacCartney et al., 2008;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 601, |
|
"text": "Thadani and McKeown, 2011;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 602, |
|
"end": 620, |
|
"text": "Yao et al., 2013a;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 621, |
|
"end": 639, |
|
"text": "Yao et al., 2013b)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Exploiting existing resources to identify similarity of semantic units, we search for robust techniques to identify contextual commonalities. Dependency trees are a commonly used structure for this purpose. While they remain a central part of our aligner, we expand the horizons of dependency-based alignment beyond exact matching by systematically exploiting the notion of \"type equivalence\" with a small handcrafted set of equivalent dependency types. In addition, we augment dependency-based alignment with surface-level text analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "While phrasal alignments are important and have been investigated in multiple studies, we focus primarily on word alignments (which have been shown to form the vast majority of alignments (\u2265 95%) in multiple human-annotated corpora (Yao et al., 2013b) ), keeping the framework flexible enough to allow incorporation of phrasal alignments in future. Evaluation of our aligner on the benchmark dataset reported in (Brockett, 2007) shows an F 1 score of 91.7%: a 3.1% absolute improvement over the previous best system (Yao et al., 2013a) , corresponding to a 27.2% error reduction. It shows superior performance also on the dataset reported in (Thadani et al., 2012) . Additionally, we present results of two extrinsic evaluations, namely textual similarity identification and paraphrase detection. Our aligner not only outperforms existing aligners in each task, but also approaches top systems for the extrinsic tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 251, |
|
"text": "(Yao et al., 2013b)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 428, |
|
"text": "(Brockett, 2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 535, |
|
"text": "(Yao et al., 2013a)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 642, |
|
"end": 664, |
|
"text": "(Thadani et al., 2012)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Monolingual alignment has been applied to various NLP tasks including textual entailment recognition (Hickl et al., 2006; Hickl and Bensley, 2007) , paraphrase identification (Das and Smith, 2009; Madnani et al., 2012) , and textual similarity assessment (B\u00e4r et al., 2012; Han et al., 2013 ) -in some cases explicitly, i.e., as a separate module. But many such systems resort to simplistic and/or ad-hoc strategies for alignment and in most such work, the alignment modules were not separately evaluated on alignment benchmarks, making their direct assessment difficult.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 121, |
|
"text": "(Hickl et al., 2006;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 146, |
|
"text": "Hickl and Bensley, 2007)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 196, |
|
"text": "(Das and Smith, 2009;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 218, |
|
"text": "Madnani et al., 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 273, |
|
"text": "(B\u00e4r et al., 2012;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 290, |
|
"text": "Han et al., 2013", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "With the introduction of the MSR alignment corpus (Brockett, 2007) developed from the second Recognizing Textual Entailment challenge data (Bar-Haim et al., 2006) , direct evaluation and comparison of aligners became possible. The first aligner trained and evaluated on the corpus was a phrasal aligner called MANLI (MacCartney et al., 2008) . It represents alignments as sets of different edit operations (where a sequence of edits turns one input sentence into the other) and finds an optimal set of edits via a simulated annealing search. Weights of different edit features are learned from the training set of the MSR alignment corpus using a perceptron learning algorithm. MANLI incorporates only shallow features characterizing contextual similarity: relative positions of the two phrases being aligned (or not) in the two sentences and boolean features representing whether or not the preceding and following tokens of the two phrases are similar. Thadani and McKeown (2011) substituted MANLI's simulated annealing-based decoding with integer linear programming, and achieved a considerable speed-up. More importantly for our discussion, they found contextual evidence in the form of syntactic constraints useful in better aligning stop words. Thadani et al. (2012) further extended the model by adding features characterizing dependency arc edits, effectively bringing stronger influence of contextual similarity into alignment decisions. Again the performance improved consequently.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 66, |
|
"text": "(Brockett, 2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 139, |
|
"end": 162, |
|
"text": "(Bar-Haim et al., 2006)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 341, |
|
"text": "MANLI (MacCartney et al., 2008)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 955, |
|
"end": 981, |
|
"text": "Thadani and McKeown (2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1251, |
|
"end": 1272, |
|
"text": "Thadani et al. (2012)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The most successful aligner to date both in terms of accuracy and speed, called JacanaAlign, was developed by Yao et al. (2013a) . In contrast to the earlier systems, JacanaAlign is a word aligner that formulates alignment as a sequence labeling problem. Each word in the source sentence is labeled with the corresponding target word index if an alignment is found. It employs a conditional random field to assign the labels and uses a feature set similar to MANLI's in terms of the information they encode (with some extensions). Contextual features include only semantic match of the left and the right neighbors of the two words and their POS tags. Even though JacanaAlign outperformed the MANLI enhancements despite having less contextual features, it is difficult to compare the role of context in the two models because of the large paradigmatic disparity. An extension of JacanaAlign was proposed for phrasal alignments in (Yao et al., 2013b) , but the contextual features remained largely unchanged.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 128, |
|
"text": "Yao et al. (2013a)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 930, |
|
"end": 949, |
|
"text": "(Yao et al., 2013b)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Noticeable in all the above systems is the use of contextual evidence as a feature for alignment, but in our opinion, not to an extent sufficient to harness its full potential. Even though deeper dependencybased modeling of contextual commonalities can be found in some other studies (Kouylekov and Magnini, 2005; Chambers et al., 2007; Chang et al., 2010; Yao et al., 2013c) , we believe there is further scope for systematic exploitation of contextual evidence for alignment, which we aim to do in this work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 313, |
|
"text": "(Kouylekov and Magnini, 2005;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 336, |
|
"text": "Chambers et al., 2007;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 356, |
|
"text": "Chang et al., 2010;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 375, |
|
"text": "Yao et al., 2013c)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "On the contrary, word semantic similarity has been a central component of most aligners; various measures of word similarity have been utilized, including string similarity, resource-based similarity (derived from one or more lexical resources like WordNet) and distributional similarity (computed from word co-occurrence statistics in large corpora). An important trade-off between precision, coverage and speed exists here and aligners commonly rely on only a subset of these measures (Thadani and McKeown, 2011; Yao et al., 2013a) . We use the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013) , which is a large resource of lexical and phrasal paraphrases constructed using bilingual pivoting (Bannard and Callison-Burch, 2005 ) over large parallel corpora.", |
|
"cite_spans": [ |
|
{ |
|
"start": 487, |
|
"end": 514, |
|
"text": "(Thadani and McKeown, 2011;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 533, |
|
"text": "Yao et al., 2013a)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 574, |
|
"end": 601, |
|
"text": "(Ganitkevitch et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 735, |
|
"text": "(Bannard and Callison-Burch, 2005", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our system operates as a pipeline of alignment modules that differ in the types of word pairs they align. Figure 1 is a simplistic representation of the pipeline. Each module makes use of contextual evidence to make alignment decisions. In addition, the last two modules are informed by a word semantic similarity algorithm. Because of their phrasal nature, we treat named entities separately from other content words. The rationale behind the order in which the modules are arranged is discussed later in this section (3.3.5).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 106, |
|
"end": 114, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Before discussing each alignment module in detail, we describe the system components that identify word and contextual similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The ability to correctly identify semantic similarity between words is crucial to our aligner, since contextual evidence is important only for similar words. Instead of treating word similarity as a continuous variable, we define three levels of similarity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Similarity", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The first level is an exact word or lemma match which is represented by a similarity score of 1. The second level represents semantic similarity between two terms which are not identical. To identify such word pairs, we employ the Paraphrase Database (PPDB) 1 . We use the largest (XXXL) of the PPDB's lexical paraphrase packages and treat all pairs identically by ignoring the accompanying statistics. We customize the resource by removing pairs of identical words or lemmas and adding lemmatized forms of the remaining pairs. For now, we use the term ppdbSim to refer to the similarity of each word pair in this modified version of PPDB (which is a value in (0, 1)) and later explain how we determine it (Section 3.3.5). Finally, any pair of different words which is absent in PPDB is assigned a zero similarity score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Similarity", |
|
"sec_num": "3.1" |
|
}, |
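
{

"text": "To make the three similarity levels concrete, the following is a minimal Python sketch of the word similarity function; the names word_sim and ppdb_pairs, and the pre-lemmatized inputs, are our own illustrative assumptions rather than the paper's actual code.\n\nPPDB_SIM = 0.9  # the ppdbSim value selected on the MSR alignment dev set (Section 3.3.5)\n\ndef word_sim(word_s, word_t, lemma_s, lemma_t, ppdb_pairs):\n    # Level 1: exact word or lemma match\n    if word_s == word_t or lemma_s == lemma_t:\n        return 1.0\n    # Level 2: a non-identical pair present in the customized PPDB\n    if (lemma_s, lemma_t) in ppdb_pairs:\n        return PPDB_SIM\n    # Level 3: no evidence of similarity\n    return 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Word Similarity",

"sec_num": "3.1"

},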
|
{ |
|
"text": "Our alignment modules collect contextual evidence from two complementary sources: syntactic dependencies and words occurring within a small textual vicinity of the two words to be aligned. The application of each kind assumes a common principle of minimal evidence. Formally, given two input sentences S and T , we consider two words s \u2208 S and t \u2208 T to form a candidate pair for alignment if \u2203r s \u2208 S and \u2203r t \u2208 T such that:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Contextual Evidence", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "1. (s, t) \u2208 Sim and (r s , r t ) \u2208 Sim , where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Contextual Evidence", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Sim is a binary relation indicating sufficient semantic relatedness between the members of each pair (\u2265 ppdbSim in our case).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Contextual Evidence", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2. (s, r s ) \u2208 C 1 and (t, r t ) \u2208 C 2 , such that C 1 \u2248 C 2 ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Contextual Evidence", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where C 1 and C 2 are binary relations representing specific types of contextual relationships between two words in a sentence (e.g., an nsubj dependency between a verb and a noun). The symbol \u2248 represents equivalence between two relationships, including identicality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Contextual Evidence", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Note that the minimal-evidence assumption holds a single piece of contextual evidence as sufficient support for a potential alignment; but as we discuss later in this section, an evidence for word pair (s, t) (where s \u2208 S and t \u2208 S) may not lead to an alignment if there exists a competing pair (s , t) or (s, t ) with more evidence (where s \u2208 S and t \u2208 T ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Contextual Evidence", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In the rest of this section, we elaborate the different forms of contextual relationships we exploit along with the notion of equivalence between relationships.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extracting Contextual Evidence", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Dependencies can be important sources of contextual evidence. Two nsubj children r s and r t of two verbs s \u2208 S and t \u2208 T , for example, provide evidence for not only an (s, t) alignment, but S: He wrote a book . Moreover, dependency types can exhibit equivalence; consider the two sentences in Figure 2 . The dobj dependency in S is equivalent to the rcmod dependency in T (dobj \u2248 rcmod, following our earlier notation) since they represent the same semantic relation between identical word pairs in the two sentences. To be able to use such evidence for alignment, we need to go beyond exact matching of dependencies and develop a mapping among dependency types that encodes such equivalence. Note also that the parent-child roles are opposite for the two dependency types in the above example, a scenario that such a mapping must accommodate.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 295, |
|
"end": 303, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "The four possible such scenarios regarding parentchild orientations are shown in Figure 3 . If (s, t) \u2208", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 89, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "Sim and (r s , r t ) \u2208 Sim (represented by bidirectional arrows), then each orientation represents a set of possible ways in which the S and T dependencies (unidirectional arrows) can provide evidence of similarity between the contexts of s in S and t in T . Each such set comprises equivalent dependency type pairs for that orientation. In the example of Figure 2 , (dobj, rcmod) is such a pair for orientation (c), given s = t = \"wrote\" and r s = r t = \"book\".", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 356, |
|
"end": 364, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "We apply the notion of dependency type equivalence to intra-category alignment of content words in four major lexical categories: verbs, nouns, adjectives and adverbs (the Stanford POS tagger (Toutanova et al., 2003) is used to identify the categories). Table 1 shows dependency type equivalences for each lexical category of s and t.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 216, |
|
"text": "(Toutanova et al., 2003)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 261, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "The '\u2190' sign on column 5 of some rows represents a duplication of the column 4 content of the same row. For each row, columns 4 and 5 show two sets of dependency types; each member of the first is equivalent to each member of the second for the current orientation (column 1) and lexical categories of the associated words (columns 2 and 3). For example, row 2 represents the fact that an agent relation (between s and r s ; s is the parent) is equivalent to an nsubj relation (between t and r t ; t is the parent).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
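
{

"text": "Though Table 1 itself is not reproduced here, its rows can be encoded as 4-tuples of (POS(s), POS(r_s), S-side type, T-side type). Below is a Python sketch holding just the two equivalences that the text discusses explicitly; the tuple encoding is our assumption, as the paper does not specify a data structure.\n\nEQ = {\n    # row 2, orientation (a): agent (s is the parent) is equivalent to nsubj (t is the parent)\n    ('verb', 'noun', 'agent', 'nsubj'),\n    # Figure 2 example, orientation (c): dobj in S is equivalent to rcmod in T\n    ('verb', 'noun', 'dobj', 'rcmod'),\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Syntactic Dependencies",

"sec_num": "3.2.1"

},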
|
{ |
|
"text": "Note that the equivalences are fundamentally redundant across different orientations. For example, row 2 (which is presented as an instance of orientation (a)) can also be presented as an instance of orientation (b) with POS(s)=POS(t)=noun and POS(r s )=POS(r t )=verb. We avoid such redundancy for compactness. For the same reason, the equivalence of dobj and rcmod in Figure 2 is shown in the table only as an instance of orientation (c) and not as an instance of orientation (d) (in general, this is why orientations (b) and (d) are absent in the table).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 378, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "We present dependency-based contextual evidence extraction in Algorithm 1. (The Stanford dependency parser (de Marneffe et al., 2006) is used to extract the dependencies.) Given a word pair (s i , t j ) from the input sentences S and T , it collects contextual evidence (as indexes of r s i and r t j with a positive similarity) for each matching row in Table 1 . An exact match of the two dependencies is also considered a piece of evidence. Note that Table 1 only ", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 133, |
|
"text": "(de Marneffe et al., 2006)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 361, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 465, |
|
"text": "Table 1 only", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "considers con- tent word pairs (s i , t j ) such that POS(s i )=POS(t j )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": ", but as 90% of all content word alignments in the MSR alignment dev set are within the same lexical category, this is a reasonable set to start with.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Dependencies", |
|
"sec_num": "3.2.1" |
|
}, |
|
{ |
|
"text": "While equivalent dependencies can provide strong contextual evidence, they can not ensure high recall because, a) the ability to accurately extract depen- ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Algorithm 1: depContext(S, T, i, j, EQ)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Input: 1. S, T : Sentences to be aligned 2. i: Index of a word in S 3. j: Index of a word in T 4. EQ: Dependency type equivalences (Table 1 )", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 139, |
|
"text": "(Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Output: context = {(k, l)}: pairs of word indexes context \u2190 {(k, l) : wordSim(s k , t l ) > 0 1 \u2227 (i, k, \u03c4 s ) \u2208 dependencies(S) 2 \u2227 (j, l, \u03c4 t ) \u2208 dependencies(T ) 3 \u2227 P OS(s i ) = P OS(t j ) \u2227 P OS(s k ) = P OS(t l ) 4 \u2227 (\u03c4 s = \u03c4 t 5 \u2228 (P OS(s i ), P OS(s k ), \u03c4 s , \u03c4 t ) \u2208 EQ))} 6", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
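
{

"text": "Algorithm 1 translates almost directly into Python. In the sketch below, dependencies(X) is assumed to yield (parent_index, child_index, type) triples, pos(X, i) returns the lexical category of the i-th word, and word_sim is a two-argument wrapper over the similarity function sketched earlier; all of these names and signatures are our assumptions.\n\ndef dep_context(S, T, i, j, EQ, pos, word_sim, dependencies):\n    # Gather (k, l) pairs that provide dependency-based contextual\n    # evidence for aligning (S[i], T[j]), as in Algorithm 1.\n    context = set()\n    if pos(S, i) != pos(T, j):\n        return context  # Table 1 covers same-category pairs only\n    for (pi, k, tau_s) in dependencies(S):\n        for (pj, l, tau_t) in dependencies(T):\n            if pi != i or pj != j:\n                continue\n            if word_sim(S[k], T[l]) <= 0 or pos(S, k) != pos(T, l):\n                continue\n            # an exact type match, or a Table 1 equivalence\n            if tau_s == tau_t or (pos(S, i), pos(S, k), tau_s, tau_t) in EQ:\n                context.add((k, l))\n    return context",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Textual Neighborhood",

"sec_num": "3.2.2"

},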
|
{ |
|
"text": "dencies is limited by the accuracy of the parser, and b) we investigate equivalence types for only interlexical-category alignment in this study. Therefore we apply an additional model of word context: the textual neighborhood of s in S and t in T .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Extraction of contextual evidence for content words from textual neighborhood is described in Algorithm 2. Like the dependency-based module, it accumulates evidence for each (s i , t j ) pair by inspecting multiple pairs of neighboring words. But instead of aligning only words within a lexical category,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "Algorithm 2: textContext(S, T, i, j, STOP) Input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "1. S, T : Sentences to be aligned 2. i: Index of a word in S 3. j: Index of a word in T 4. STOP: A set of stop words Output: context = {(k, l)}: pairs of word indexes", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "C i \u2190 {k : k \u2208 [i \u2212 3, i + 3] \u2227 k = i \u2227 s k \u2208 STOP} 1 C j \u2190 {l : l \u2208 [j \u2212 3, j + 3] \u2227 l = j \u2227 t l \u2208 STOP} 2 context \u2190 C i \u00d7 C j 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
|
{ |
|
"text": "this module also performs inter-category alignment, considering content words within a (3, 3) window of s i and t j as neighbors. We implement relational equivalence (\u2248) here by holding any two positions within the window equally contributive and mutually comparable as sources of contextual evidence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Textual Neighborhood", |
|
"sec_num": "3.2.2" |
|
}, |
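
{

"text": "Algorithm 2 is equally compact in Python; clipping the (3, 3) window at sentence boundaries is our addition, which the pseudocode leaves implicit.\n\ndef text_context(S, T, i, j, stop):\n    # Non-stop-word neighbors within 3 positions on either side of s_i and t_j\n    C_i = {k for k in range(max(0, i - 3), min(len(S), i + 4))\n           if k != i and S[k].lower() not in stop}\n    C_j = {l for l in range(max(0, j - 3), min(len(T), j + 4))\n           if l != j and T[l].lower() not in stop}\n    # Any neighbor of s_i may be compared with any neighbor of t_j\n    return {(k, l) for k in C_i for l in C_j}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Textual Neighborhood",

"sec_num": "3.2.2"

},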
|
{ |
|
"text": "We now describe each alignment module in the pipeline and their sequence of operation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Alignment Algorithm", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The presence of a common word sequence in S and T is indicative of an (a) identical, and (b) con-textually similar word in the other sentence for each word in the sequence. We observe the results of aligning identical words in such sequences of length n containing at least one content word. This simple heuristic demonstrates a high precision (\u2248 97%) on the MSR alignment dev set for n \u2265 2, and therefore we consider membership in such sequences as the simplest form of contextual evidence in our system and align all identical word sequence pairs in S and T containing at least one content word. From this point on, we will refer to this module as wsAlign.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Identical Word Sequences", |
|
"sec_num": "3.3.1" |
|
}, |
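
{

"text": "A possible realization of wsAlign, under our assumptions that matching is case-insensitive and that every identical run of length n \u2265 2 containing a content word is aligned word by word; the paper states the heuristic but not this exact procedure.\n\ndef ws_align(S, T, is_content):\n    # Align identical word sequences of length >= 2 in S and T\n    # that contain at least one content word (Section 3.3.1).\n    A = set()\n    for i in range(len(S)):\n        for j in range(len(T)):\n            n = 0\n            while (i + n < len(S) and j + n < len(T)\n                   and S[i + n].lower() == T[j + n].lower()):\n                n += 1\n            if n >= 2 and any(is_content(S[i + m]) for m in range(n)):\n                A |= {(i + m, j + m) for m in range(n)}\n    return A",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Identical Word Sequences",

"sec_num": "3.3.1"

},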
|
{ |
|
"text": "We align named entities separately to enable the alignment of full and partial mentions (and acronyms) of the same entity. We use the Stanford Named Entity Recognizer (Finkel et al., 2005) to identify named entities in S and T . After aligning the exact term matches, any unmatched term of a partial mention is aligned to all terms in the full mention. The module recognizes only first-letter acronyms and aligns an acronym to all terms in the full mention of the corresponding name.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 188, |
|
"text": "(Finkel et al., 2005)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities", |
|
"sec_num": "3.3.2" |
|
}, |
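
{

"text": "The first-letter acronym check could be as simple as the following sketch; skipping lowercase function words inside the mention is our assumption, not a detail given in the paper.\n\ndef is_first_letter_acronym(acronym, mention_tokens):\n    # 'WHO' matches ['World', 'Health', 'Organization'];\n    # lowercase function words such as 'of' are skipped.\n    content = [t for t in mention_tokens if t[0].isupper()]\n    return bool(content) and acronym.upper() == ''.join(t[0] for t in content)\n\n# A matching acronym is then aligned to every term of the full mention.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Named Entities",

"sec_num": "3.3.2"

},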
|
{ |
|
"text": "Since named entities are instances of nouns, named entity alignment is also informed by contextual evidence (which we discuss in the next section), but happens before alignment of other generic content words. Parents (or children) of a named entity are simply the parents (or children) of its head word. We will refer to this module as a method named neAlign from this point on.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Named Entities", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "Extraction of contextual evidence for promising content word pairs has already been discussed in Section 3.2, covering both dependency-based context and textual context. Algorithm 3 (cwDepAlign) describes the dependency-based alignment process. For each potentially alignable pair (s i , t j ), the dependencybased context is extracted as described in Algorithm 1, and context similarity is calculated as the sum of the word similarities of the (s k , t l ) context word pairs (lines 2-7). (The wordSim method returns a similarity score in {0, ppdbSim, 1}.) The alignment score of the (s i , t j ) pair is then a weighted sum of word and contextual similarity (lines 8-12). (We discuss how the weights are set in Section Algorithm 3: cwDepAlign(S, T, EQ, A E , w, STOP) Input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "1. S, T : Sentences to be aligned 2. EQ: Dependency type equivalences (Table 1) 3. A E : Already aligned word pair indexes 4. w: Weight of word similarity relative to contextual similarity 5. STOP: A set of stop words Output: A = {(i, j)}: word index pairs of aligned words {(s i , t j )} where s i \u2208 S and t j \u2208 T", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 79, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "\u03a8 \u2190 \u2205; \u039b \u03a8 \u2190 \u2205; \u03a6 \u2190 \u2205 1 for s i \u2208 S, t j \u2208 T do 2 if s i \u2208 STOP \u2227 \u00ac\u2203t l : (i, l) \u2208 A E 3 \u2227 t j \u2208 STOP \u2227 \u00ac\u2203s k : (k, j) \u2208 A E 4 \u2227 wordSim(s i , t j ) > 0 then 5 context \u2190 depContext(S, T, i, j, EQ) 6 contextSim \u2190 (k,l)\u2208context wordSim(s k , t l ) 7 if contextSim > 0 then 8 \u03a8 \u2190 \u03a8 \u222a {(i, j)} 9 \u039b \u03a8 (i, j) \u2190 context 10 \u03a6(i, j) \u2190 w * wordSim(s i , t j ) 11 +(1 \u2212 w) * contextSim 12", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "Linearize and sort \u03a8 in decreasing order of", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "\u03a6(i, j) 13 A \u2190 \u2205 14 for (i, j) \u2208 \u03a8 do 15 if \u00ac\u2203l : (i, l) \u2208 A 16 \u2227\u00ac\u2203k : (k, j) \u2208 A then 17 A \u2190 A \u222a {(i, j)} 18 for (k, l) \u2208 \u039b \u03a8 (i, j) do 19 if \u00ac\u2203q : (k, q) \u2208 A \u222a A E 20 \u2227\u00ac\u2203p : (p, l) \u2208 A \u222a A E then 21 A \u2190 A \u222a {(k, l)} 22", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "3.3.5.) The module then aligns (s i , t j ) pairs with non-zero evidence in decreasing order of this score (lines 13-18). In addition, it aligns all the pairs that contributed contextual evidence for the (s i , t j ) alignment (lines 19-22). Note that we implement a one-to-one alignment whereby a word gets aligned at most once within the module.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
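
{

"text": "The selection step (lines 13-22 of Algorithm 3) is a greedy one-to-one matching over the candidate scores; a Python sketch with names of our choosing:\n\ndef greedy_select(scores, contexts, existing):\n    # scores: {(i, j): alignment score}; contexts: {(i, j): evidence pairs};\n    # existing: pairs already aligned by earlier modules (A_E in Algorithm 3).\n    def taken(pairs, i, j):\n        return any(p == i for p, q in pairs) or any(q == j for p, q in pairs)\n    A = set()\n    for (i, j) in sorted(scores, key=scores.get, reverse=True):\n        if not taken(A, i, j):\n            A.add((i, j))\n            # also align the context pairs that supplied the evidence\n            for (k, l) in contexts.get((i, j), ()):\n                if not taken(A | existing, k, l):\n                    A.add((k, l))\n    return A",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Content Words",

"sec_num": "3.3.3"

},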
|
{ |
|
"text": "Algorithm 4 (cwTextAlign) presents alignment based on similarities in the textual neighborhood. For each potentially alignable pair (s i , t j ), Algorithm 2 is used to extract the context, which is a set of neighboring content word pairs (lines 2-7). The contextual similarity is the sum of the similarities of these pairs Algorithm 4: cwT extAlign(S, T, A E , w, STOP) Input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "1. S, T : Sentences to be aligned 2. A E : Existing alignments by word indexes 3. w: Weight of word similarity relative to contextual similarity 4. STOP: A set of stop words Output: A = {(i, j)}: word index pairs of aligned words {(s i , t j )} where s i \u2208 S and t j \u2208 T", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "\u03a8 \u2190 \u2205; \u03a6 \u2190 \u2205 1 for s i \u2208 S, t j \u2208 T do 2 if s i \u2208 STOP \u2227 \u00ac\u2203t l : (i, l) \u2208 A E 3 \u2227 t j \u2208 STOP \u2227 \u00ac\u2203s k : (k, j) \u2208 A E 4 \u2227 wordSim(s i , t j ) > 0 then 5 \u03a8 \u2190 \u03a8 \u222a {(i, j)} 6 context \u2190 textContext(S, T, i, j, STOP) 7 contextSim \u2190 (k,l)\u2208context wordSim(s k , t l ) 8 \u03a6(i, j) \u2190 w * wordSim(s i , t j ) 9 + (1 \u2212 w) * contextSim 10", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "Linearize and sort \u03a8 in decreasing order of \u03a6(i, j)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "11 A \u2190 \u2205 12 for (i, j) \u2208 \u03a8 do 13 if \u00ac\u2203l : (i, l) \u2208 A 14 \u2227\u00ac\u2203k : (k, j) \u2208 A then 15 A \u2190 A \u222a {(i, j)} 16", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "(line 8), and the alignment score is a weighted sum of word similarity and contextual similarity (lines 9, 10). The alignment score is then used to make one-to-one word alignment decisions (lines 11-16). Considering textual neighbors as weaker sources of evidence, we do not align the neighbors. Note that in cwTextAlign we also align semantically similar content word pairs (s i , t j ) with no contextual similarities if no pairs (s k , t j ) or (s i , t l ) exist with a higher alignment score. This is a consequence of our observation of the MSR alignment dev data, where we find that more often than not content words are inherently sufficiently meaningful to be aligned even in the absence of contextual evidence when there are no competing pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "The content word alignment module is thus itself a pipeline of cwDepAlign and cwTextAlign.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Content Words", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "We follow the contextual evidence-based approach to align stop words as well, some of which get aligned Algorithm 5: align(S, T, EQ, w, STOP) Input:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stop Words", |
|
"sec_num": "3.3.4" |
|
}, |
|
{ |
|
"text": "1. S, T : Sentences to be aligned 2. EQ: Dependency type equivalences (Table 1) as part of word sequence alignment as discussed in Section 3.3.1, and neighbor alignment as discussed in Section 3.3.3. For the rest we utilize dependencies and textual neighborhoods as before, with three adjustments. Firstly, since stop word alignment is the last module in our pipeline, rather than considering all semantically similar word pairs for contextual similarity, we consider only aligned pairs. Secondly, since many stop words (e.g. determiners, modals) typically demonstrate little variation in the dependencies they engage in, we ignore type equivalences for stop words and implement only exact matching of dependencies. (Stop words in general can participate in equivalent dependencies, but we leave constructing a corresponding mapping for future work.) Finally, for textual neighborhood, we separately check alignments of the left and the right neighbors and aggregate the evidences to determine alignment -again due to the primarily syntactic nature of interaction of stop words with their neighbors.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 79, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Stop Words", |
|
"sec_num": "3.3.4" |
|
}, |
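
{

"text": "Our reading of the left/right neighbor check in swTextAlign, as a sketch; the exact aggregation rule is not spelled out in the text.\n\ndef sw_text_evidence(i, j, aligned):\n    # A stop word pair (s_i, t_j) accumulates one piece of evidence for\n    # each side whose immediate neighbors are aligned to each other.\n    left = (i - 1, j - 1) in aligned\n    right = (i + 1, j + 1) in aligned\n    return int(left) + int(right)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Stop Words",

"sec_num": "3.3.4"

},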
|
{ |
|
"text": "Thus stop words are also aligned in a sequence of dependency and textual neighborhood-based alignments. We assume two corresponding modules named swDepAlign and swTextAlign, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stop Words", |
|
"sec_num": "3.3.4" |
|
}, |
|
{ |
|
"text": "Our full alignment pipeline is shown as the method align in Algorithm 5. Note that the strict order of the alignment modules limits the scope of downstream modules since each such module discards any word that has already been aligned by an earlier module (this is accomplished via the variable A; the corresponding parameter in Algorithms 3 and 4 is A E ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Algorithm", |
|
"sec_num": "3.3.5" |
|
}, |
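
{

"text": "Algorithm 5 is then a sequential composition in which every module receives the union A of all earlier alignments; a sketch reusing the module names above, whose Python signatures are our assumptions:\n\ndef align(S, T, EQ, w, stop, is_content):\n    A = ws_align(S, T, is_content)            # 1. identical word sequences\n    A |= ne_align(S, T, A)                    # 2. named entities\n    A |= cw_dep_align(S, T, EQ, A, w, stop)   # 3. content words: dependencies\n    A |= cw_text_align(S, T, A, w, stop)      # 4. content words: textual neighborhood\n    A |= sw_dep_align(S, T, A)                # 5. stop words: dependencies\n    A |= sw_text_align(S, T, A)               # 6. stop words: textual neighborhood\n    return A",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Algorithm",

"sec_num": "3.3.5"

},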
|
{ |
|
"text": "The rationales behind the specific order of the modules can now be explained: (1) wsAlign is a module with relatively higher precision, (2) it is convenient to align named entities before other content words to enable alignment of entity mentions of different lengths, (3) dependency-based evidence was observed to be more reliable (i.e. of higher precision) than textual evidence in the MSR alignment dev set, and (4) stop word alignments are dependent on existing content word alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Algorithm", |
|
"sec_num": "3.3.5" |
|
}, |
|
{ |
|
"text": "The aligner assumes two free parameters: ppdbSim and w (in Algorithms 3 and 4). To determine their values, we exhaustively search through the two-dimensional space (ppdbSim, w) for ppdbSim, w \u2208 {0.1, ..., 0.9, 1} and the combination (0.9, 0.9) yields the best F 1 score for the MSR alignment dev set. We use these values for the aligner in all of its subsequent applications.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Algorithm", |
|
"sec_num": "3.3.5" |
|
}, |
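
{

"text": "The parameter selection is a plain exhaustive grid search; a sketch assuming a hypothetical f1_on_dev(ppdb_sim, w) helper that runs the aligner on the MSR alignment dev set and returns its F1 score:\n\nbest_f1, best_params = -1.0, None\ngrid = [x / 10 for x in range(1, 11)]  # 0.1, 0.2, ..., 0.9, 1.0\nfor ppdb_sim in grid:\n    for w in grid:\n        f1 = f1_on_dev(ppdb_sim, w)\n        if f1 > best_f1:\n            best_f1, best_params = f1, (ppdb_sim, w)\n# the paper reports (ppdbSim, w) = (0.9, 0.9) as the best combination",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The Algorithm",

"sec_num": "3.3.5"

},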
|
{ |
|
"text": "We evaluate the performance of our aligner both intrinsically and extrinsically on multiple corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The MSR alignment dataset 2 (Brockett, 2007) was designed to train and directly evaluate automated aligners. Three annotators individually aligned words and phrases in 1600 pairs of premise and hypothesis sentences from the RTE2 challenge data (divided into dev and test sets, each consisting of 800 sentences). The dataset has subsequently been used to assess several top performing aligners (MacCartney et al., 2008; Thadani and McKeown, 2011; Yao et al., 2013a; Yao et al., 2013b) . We use the test set for evaluation in the same manner as these studies: (a) we apply majority rule to select from the three sets of annotations for each sentence and discard threeway disagreements, (b) we evaluate only on the sure links (word pairs that annotators mentioned should certainly be aligned, as opposed to possible links).", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 44, |
|
"text": "(Brockett, 2007)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 393, |
|
"end": 418, |
|
"text": "(MacCartney et al., 2008;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 445, |
|
"text": "Thadani and McKeown, 2011;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 446, |
|
"end": 464, |
|
"text": "Yao et al., 2013a;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 483, |
|
"text": "Yao et al., 2013b)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We test the generalizability of the aligner by evaluating it, unchanged (i.e. with identical parameter values), on a second alignment corpus: the Edin- (Thadani et al., 2012) corpus. The test set consists of 306 pairs; each pair is aligned by at most two annotators and we adopt the random selection policy described in (Thadani et al., 2012) to resolve disagreements. Table 2 shows the results. For each corpus, it shows precision (% alignments that matched with gold annotations), recall (% gold alignments discovered by the aligner), F 1 score and the percentage of sentences that received the exact gold alignments (denoted by E) from the aligner.", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 174, |
|
"text": "(Thadani et al., 2012)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 342, |
|
"text": "(Thadani et al., 2012)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 376, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Intrinsic Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "On the MSR test set, our aligner shows a 3.1% improvement in F 1 score over the previous best system (Yao et al., 2013a ) with a 27.2% error reduction. Importantly, it demonstrates a considerable increase in recall without a loss of precision. The E score also increases as a consequence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 101, |
|
"end": 119, |
|
"text": "(Yao et al., 2013a", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "On the Edinburgh++ test set, our system achieves a 1.2% increase in F 1 score (an error reduction of 8.8%) over the previous best system (Yao et al., 2013a) , with improvements in both precision and recall. This is a remarkable result that demonstrates the general applicability of the aligner, as no parameter tuning took place.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 156, |
|
"text": "(Yao et al., 2013a)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Intrinsic Evaluation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We perform a set of ablation tests to assess the importance of the aligner's individual components. Each row of Table 3 beginning with (-) shows a feature excluded from the aligner and two associated sets of metrics, showing the performance of the resulting algorithm on the two alignment corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 119, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ablation Test", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "Without a word similarity module, recall drops as would be expected. Without contextual evidence (word sequences, dependencies and textual neighbors) precision drops considerably and recall also falls. Without dependencies, the aligner still gives state-of-the-art results, which points to the possibility of a very fast yet high-performance aligner. Without evidence from textual neighbors, however, the precision of the aligner suffers badly. Textual neighbors find alignments across different lexical categories, a type of alignment that is currently not supported by our dependency equivalences. Extending the set of dependency equivalences might alleviate this issue. Finally, without stop words (i.e. while aligning content words only) recall drops just a little for each corpus, which is expected as content words form the vast majority of non-identical word alignments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Ablation Test", |
|
"sec_num": "4.1.1" |
|
}, |
|
{ |
|
"text": "We extrinsically evaluate our system on textual similarity identification and paraphrase detection. Here we discuss each task and the results of evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extrinsic Evaluation", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Given two short input text fragments (commonly sentences), the goal of this task is to identify the degree to which the two fragments are semantically similar. The *SEM 2013 STS task (Agirre et al., 2013) assessed a number of STS systems on four test datasets by comparing their output scores to human annotations. Pearson correlation coefficient with human annotations was computed individually for each test set and a weighted sum of the correlations was used as the final evaluation metric (the weight for each dataset was proportional to its size).", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 204, |
|
"text": "(Agirre et al., 2013)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Textual Similarity", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "We apply our aligner to the task by aligning each sentence pair and taking the proportion of content words aligned in the two sentences (by normalizing with the harmonic mean of their number of content words) as a proxy of their semantic similarity. Only three of the four STS 2013 datasets are freely available 4 (headlines, OnWN, and FNWN), which we use for our experiments (leaving out the SMT dataset).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Textual Similarity", |
|
"sec_num": "4.2.1" |
|
}, |
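
{

"text": "One plausible reading of this proxy, sketched in Python: the counts of aligned content words in the two sentences, averaged and then normalized by the harmonic mean of the content-word counts. The exact formula is our interpretation of the text.\n\ndef sts_score(S, T, alignments, is_content):\n    cs = {i for i, tok in enumerate(S) if is_content(tok)}\n    ct = {j for j, tok in enumerate(T) if is_content(tok)}\n    if not cs or not ct:\n        return 0.0\n    aligned_s = {i for (i, j) in alignments if i in cs}\n    aligned_t = {j for (i, j) in alignments if j in ct}\n    # harmonic mean of the two content-word counts\n    hmean = 2.0 * len(cs) * len(ct) / (len(cs) + len(ct))\n    # proportion of aligned content words relative to the harmonic mean\n    return (len(aligned_s) + len(aligned_t)) / 2.0 / hmean",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Semantic Textual Similarity",

"sec_num": "4.2.1"

},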
|
{ |
|
"text": "Correl.% Rank These three sets contain 1500 annotated sentence pairs in total. Table 4 shows the results. The first row shows the performance of the top system in the task. With a direct application of our aligner (no parameter tuning), our STS algorithm acheives a 67.15% weighted correlation, which would earn it the 7th rank among 90 participating systems. Considering the fact that alignment is one of many components of STS, this result is truly promising.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 86, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For comparison, we also evaluate the previous best aligner named JacanaAlign (Yao et al., 2013a) on STS 2013 data (the JacanaAlign public release 5 is used, which is a version of the original aligner with extra lexical resources). We apply three different values derived from its output as proxies of semantic similarity: a) aligned content word proportion, b) the Viterbi decoding score, and c) the normalized decoding score. Of the three, (b) gives the best results, which we show in row 2 of Table 4 . Our aligner outperforms JacanaAlign by a large margin.", |
|
"cite_spans": [ |
|
{ |
|
"start": 77, |
|
"end": 96, |
|
"text": "(Yao et al., 2013a)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 495, |
|
"end": 502, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "System", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The goal of paraphrase identification is to decide if two sentences have the same meaning. The output is a yes/no decision instead of a real-valued similarity score as in STS. We use the MSR paraphrase corpus 6 (4076 dev pairs, 1725 test pairs) (Dolan et al., 2004) to evaluate our aligner and compare with other aligners. Following earlier work (MacCartney et al., 2008; Yao et al., 2013b) , we use a normalized alignment score of the two sentences to make a decision based on a threshold which we set using the dev set. Alignments with a higher-than-threshold score are taken to be paraphrases and the rest non-paraphrases.", |
|
"cite_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 265, |
|
"text": "(Dolan et al., 2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 346, |
|
"end": 371, |
|
"text": "(MacCartney et al., 2008;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 372, |
|
"end": 390, |
|
"text": "Yao et al., 2013b)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Paraphrase Identification", |
|
"sec_num": "4.2.2" |
|
}, |
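
{

"text": "The decision rule is then a single threshold over a normalized alignment score; in this sketch we reuse the sts_score proxy above as the normalization, which is our assumption, with the threshold tuned on the dev set as described.\n\ndef is_paraphrase(S, T, alignments, threshold, is_content):\n    # Pairs whose normalized alignment score exceeds the dev-set\n    # threshold are labeled paraphrases.\n    return sts_score(S, T, alignments, is_content) > threshold",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Paraphrase Identification",

"sec_num": "4.2.2"

},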
|
{ |
|
"text": "Again, this is an oversimplified application of the aligner, even more so than in STS, since a small change in linguistic properties of two sentences (e.g. polarity or modality) can turn them into non-System Acc.% P% R% F1% Madnani et al. (2012) 77.4 79.0 89.9 84.1 Yao et al. (2013a) 70.0 72.6 88.1 79.6 Yao et al. (2013b) 68.1 68.6 95.8 79.9 This Work 73.4 76.6 86.4 81.2 Table 5 : Extrinsic evaluation on MSR paraphrase data paraphrases despite having a high degree of alignment. So the aligner was not expected to demonstrate state-of-the-art performance, but still it gets close as shown in Table 5 . The first column shows the accuracy of each system in classifying the input sentences into one of two classes: true (paraphrases) and false (non-paraphrases). The rest of the columns show the performance of the system for the true class in terms of precision, recall, and F 1 score. Italicized numbers represent scores that were not reported by the authors of the corresponding papers, but have been reconstructed from the reported data (and hence are likely to have small precision errors). The first row shows the best performance by any system on the test set to the best of our knowledge. The next two rows show the performances of two state-of-the-art aligners (performances of both systems were reported in (Yao et al., 2013b) ). The last row shows the performance of our aligner. Although it does worse than the best paraphrase system, it outperforms the other aligners.", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 245, |
|
"text": "Madnani et al. (2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 284, |
|
"text": "Yao et al. (2013a)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 305, |
|
"end": 323, |
|
"text": "Yao et al. (2013b)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1319, |
|
"end": 1338, |
|
"text": "(Yao et al., 2013b)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 374, |
|
"end": 381, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 596, |
|
"end": 603, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Paraphrase Identification", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Our experiments reveal that a word aligner based on simple measures of lexical and contextual similarity can demonstrate state-of-the-art accuracy. However, as alignment is frequently a component of larger tasks, high accuracy alone is not always sufficient. Other dimensions of an aligner's usability include speed, consumption of computing resources, replicability, and generalizability to different applications. Our design goals include achieving a balance among such multifarious and conflicting goals.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "A speed advantage of our aligner stems from formulating the problem as one-to-one word alignment and thus avoiding an expensive decoding phase. The presence of multiple phases is offset by discarding already aligned words in subsequent phases. The use of PPDB as the only (hashable) word similarity resource helps in reducing latency as well as space requirements. As shown in Section 4.1.1, further speedup could be achieved with only a small performance degradation by considering only the textual neighborhood as source of contextual evidence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "However, the two major goals that we believe the aligner achieves to the greatest extent are replicability and generalizability. The easy replicability of the aligner stems from its use of only basic and frequently used NLP modules (a lemmatizer, a POS tagger, an NER module, and a dependency parser: all available as part of the Stanford CoreNLP suite 7 ; we use a Python wrapper 8 ) and a single word similarity resource (PPDB).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We experimentally show that the aligner can be successfully applied to different alignment datasets as well as multiple end tasks. We believe a design characteristic that enhances the generalizability of the aligner is its minimal dependence on the MSR alignment training data, which originates from a textual entailment corpus having unique properties such as disparities in the lengths of the input sentences and a directional nature of their relationship (i.e., the premise implying the hypothesis, but not vice versa). A related potential reason is the symmetry of the aligner's output (caused by its assumption of no directionality) -the fact that it outputs the same set of alignments regardless of the order of the input sentences, in contrast to most existing aligners.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
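{ |
|
"text": "The symmetry claim lends itself to a simple property-based check; align() below stands in for the aligner's top-level function, under the assumption that it returns a set of (source, target) word-index pairs:\n\ndef is_symmetric(align, S, T):\n    forward = align(S, T)\n    backward = {(j, i) for i, j in align(T, S)}  # mirror the swapped-order output\n    return forward == backward", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|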
{ |
|
"text": "Major limitations of the aligner include the inability to align phrases, including multiword expressions. It is incapable of capturing and exploiting long distance dependencies among words (e.g. coreferences). No word similarity resource is perfect and PPDB is no exception, therefore certain word alignments also remain undetected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We show how contextual evidence can be used to construct a monolingual word aligner with certain desired properties, including state-of-the-art accuracy, easy replicability, and high generalizability. Some potential avenues for future work include: allowing phrase-level alignment via phrasal similarity resources (e.g. the phrasal paraphrases of PPDB), including other sources of similarity such as vector space models or WordNet relations, expanding the set of dependency equivalences and/or using semantic role equivalences, and formulating our alignment algorithm as objective optimization rather than greedy search.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The aligner is available for download at https://github.com/ma-sultan/ monolingual-word-aligner.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://paraphrase.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.ling.ohio-state.edu/\u223cscott/#edinburgh-plusplus", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://ixa2.si.ehu.es/sts/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://code.google.com/p/jacana/ 6 http://research.microsoft.com/en-us/downloads/607d14d9-20cd-47e3-85bc-a2f65cd28042/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://nlp.stanford.edu/downloads/corenlp.shtml 8 https://github.com/dasmith/stanford-corenlp-python", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This material is based in part upon work supported by the National Science Foundation under Grant Numbers EHR/0835393 and EHR/0835381. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "*SEM 2013 shared task: Semantic Textual Similarity", |
|
"authors": [ |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mona", |
|
"middle": [], |
|
"last": "Diab", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aitor", |
|
"middle": [], |
|
"last": "Gonzalez-Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Weiwei", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "32--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic Textual Similarity. In Proceedings of the Second Joint Conference on Lexical and Compu- tational Semantics. Association for Computational Linguistics, 32-43.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Paraphrasing with Bilingual Parallel Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Bannard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "597--604", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with Bilingual Parallel Corpora. In Proceed- ings of the 43rd Annual Meeting on Association for Computational Linguistics. Association for Computa- tional Linguistics, 597-604.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "UKP: computing semantic textual similarity by combining multiple content similarity measures", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "B\u00e4r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Biemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Torsten", |
|
"middle": [], |
|
"last": "Zesch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "435--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel B\u00e4r, Chris Biemann, Iryna Gurevych, and Torsten Zesch. 2012. UKP: computing semantic textual sim- ilarity by combining multiple content similarity mea- sures. In Proceedings of the First Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics, 435-440.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Second PASCAL Recognising Textual Entailment Challenge", |
|
"authors": [ |
|
{ |
|
"first": "Roy", |
|
"middle": [], |
|
"last": "Bar-Haim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lisa", |
|
"middle": [], |
|
"last": "Ferro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Giampiccolo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Idan", |
|
"middle": [], |
|
"last": "Szpektor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of The Second PASCAL Recognising Textual Entailment Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The Second PASCAL Recognising Textual En- tailment Challenge. In Proceedings of The Second PASCAL Recognising Textual Entailment Challenge.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Aligning the RTE 2006 Corpus", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Brockett. 2007. Aligning the RTE 2006 Cor- pus. Technical Report MSR-TR-2007-77, Microsoft Research.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Learning alignments and leveraging natural logic", |
|
"authors": [ |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chloe", |
|
"middle": [], |
|
"last": "Kiddon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Ramage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "165--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathanael Chambers, Daniel Cer, Trond Grenager, David Hall, Chloe Kiddon, Bill MacCartney, Marie-Catherine de Marneffe, Daniel Ramage, Eric Yeh, and Christopher D Manning. 2007. Learning alignments and leverag- ing natural logic. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing As- sociation for Computational Linguistics, 165-170.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Discriminative Learning over Constrained Latent Representations", |
|
"authors": [ |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Goldwasser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Srikumar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "429--437", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ming-Wei Chang, Dan Goldwasser, Dan Roth, and Vivek Srikumar. 2010. Discriminative Learning over Con- strained Latent Representations. In Proceedings of the 2010 Annual Conference of the North American Chap- ter of the Association for Computational Linguistics Association for Computational Linguistics, 429-437.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Paraphrase Identication as Probabilistic Quasi-Synchronous Recognition", |
|
"authors": [ |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dipanjan Das and Noah A. Smith. 2009. Paraphrase Iden- tication as Probabilistic Quasi-Synchronous Recogni- tion. In Proceedings of the Joint Conference of the 47th", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "468--476", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Association for Computational Linguistics, 468-476.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Generating Typed Dependency Parses from Phrase Structure Parses", |
|
"authors": [ |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "449--454", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of the International Conference on Lan- guage Resources and Evaluation. 449-454.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Stanford typed dependencies manual", |
|
"authors": [ |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie-Catherine de Marneffe and Christopher D. Man- ning. 2008. Stanford typed dependencies manual. Technical Report, Stanford University.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources", |
|
"authors": [ |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "350--356", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources. In Pro- ceedings of the International Conference on Compu- tational Linguistics. Association for Computational Linguistics, 350-356.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling", |
|
"authors": [ |
|
{ |
|
"first": "Jenny", |
|
"middle": [ |
|
"Rose" |
|
], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "363--370", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Man- ning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling. In Proceedings of the 43rd Annual Meeting of the Associ- ation for Computational Linguistics. Association for Computational Linguistics, 363-370.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "PPDB: The Paraphrase Database", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "758--764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Com- putational Linguistics. Association for Computational Linguistics, 758-764.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "UMBC EBIQUITY-CORE: Semantic Textual Similarity Systems", |
|
"authors": [ |
|
{ |
|
"first": "Lushan", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhay", |
|
"middle": [], |
|
"last": "Kashyap", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Finin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Mayeld", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Weese", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Second Joint Conference on Lexical and Computational Semantics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "44--52", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lushan Han, Abhay Kashyap, Tim Finin, James Mayeld, and Jonathan Weese. 2013. UMBC EBIQUITY-CORE: Semantic Textual Similarity Systems. In Proceedings of the Second Joint Conference on Lexical and Compu- tational Semantics, Volume 1. Association for Compu- tational Linguistics, 44-52.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A Discourse Commitment-Based Framework for Recognizing Textual Entailment", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Hickl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Bensley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "171--176", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Hickl and Jeremy Bensley. 2007. A Discourse Commitment-Based Framework for Recognizing Tex- tual Entailment. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. As- sociation for Computational Linguistics, 171-176.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Recognizing Textual Entailment with LCCs GROUNDHOG System", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Hickl", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Bensley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kirk", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Rink", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Second PASCAL Challenges Workshop on Recognizing Textual Entailment", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Hickl, Jeremy Bensley, John Williams, Kirk Roberts, Bryan Rink, and Ying Shi. 2006. Recog- nizing Textual Entailment with LCCs GROUNDHOG System. In Proceedings of the Second PASCAL Chal- lenges Workshop on Recognizing Textual Entailment.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Recognizing textual entailment with tree edit distance algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Milen", |
|
"middle": [], |
|
"last": "Kouylekov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernardo", |
|
"middle": [], |
|
"last": "Magnini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the PASCAL Challenges Workshop: Recognising Textual Entailment Challenge", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "17--20", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Milen Kouylekov and Bernardo Magnini. 2005. Rec- ognizing textual entailment with tree edit distance al- gorithms. In Proceedings of the PASCAL Challenges Workshop: Recognising Textual Entailment Challenge 17-20.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A Phrase-Based Alignment Model for Natural Language Inference", |
|
"authors": [ |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "802--811", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bill MacCartney, Michel Galley, and Christopher D. Man- ning. 2008. A Phrase-Based Alignment Model for Nat- ural Language Inference. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 802-811.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Re-examining Machine Translation Metrics for Paraphrase Identification", |
|
"authors": [ |
|
{ |
|
"first": "Nitin", |
|
"middle": [], |
|
"last": "Madnani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joel", |
|
"middle": [], |
|
"last": "Tetreault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martin", |
|
"middle": [], |
|
"last": "Chodorow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of 2012 Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "182--190", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining Machine Translation Metrics for Paraphrase Identification. In Proceedings of 2012 Con- ference of the North American Chapter of the Associ- ation for Computational Linguistics. Association for Computational Linguistics, 182-190.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Optimal and Syntactically-Informed Decoding for Monolingual Phrase-Based Alignment", |
|
"authors": [ |
|
{ |
|
"first": "Kapil", |
|
"middle": [], |
|
"last": "Thadani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kapil Thadani and Kathleen McKeown. 2011. Optimal and Syntactically-Informed Decoding for Monolingual Phrase-Based Alignment. In Proceedings of the 49th", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "254--259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Associa- tion for Computational Linguistics, 254-259.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "A Joint Phrasal and Dependency Model for Paraphrase Alignment", |
|
"authors": [ |
|
{ |
|
"first": "Kapil", |
|
"middle": [], |
|
"last": "Thadani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of COLING 2012: Posters", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1229--1238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kapil Thadani, Scott Martin, and Michael White. 2012. A Joint Phrasal and Dependency Model for Paraphrase Alignment. In Proceedings of COLING 2012: Posters. 1229-1238.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Feature-rich Part-of-speech Tagging with a Cyclic Dependency Network In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "173--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003 Feature-rich Part-of-speech Tagging with a Cyclic Dependency Network In Pro- ceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Asso- ciation for Computational Linguistics. Association for Computational Linguistics, 173-180.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A Lightweight and High Performance Monolingual Word Aligner", |
|
"authors": [ |
|
{ |
|
"first": "Xuchen", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "702--707", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013a. A Lightweight and High Per- formance Monolingual Word Aligner. In Proceedings of the 51st Annual Meeting of the Association for Com- putational Linguistics. Association for Computational Linguistics, 702-707.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Semi-Markov Phrase-based Monolingual Alignment", |
|
"authors": [ |
|
{ |
|
"first": "Xuchen", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "590--600", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013b. Semi-Markov Phrase-based Monolingual Alignment. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 590-600.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Answer Extraction as Sequence Tagging with Tree Edit Distance", |
|
"authors": [ |
|
{ |
|
"first": "Xuchen", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "858--867", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013c. Answer Extraction as Se- quence Tagging with Tree Edit Distance. In Proceed- ings of the 2013 Conference of the North American Chapter of the Association for Computational Linguis- tics. Association for Computational Linguistics, 858- 867.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "Figure 1: System overview", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "Equivalent dependency types: dobj and rcmod also an (r s , r t ) alignment if (s, t) \u2208 Sim and (r s , r t ) \u2208 Sim . (We adopt the Stanford typed dependencies (de Marneffe and Manning, 2008).)", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"text": "Parent-child orientations in dependencies", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "3. w: Weight of word similarity relative to contextual similarity 4. STOP: A set of stop words Output: A = {(i, j)}: word index pairs of aligned words {(s i , t j )} where s i \u2208 S and t j \u2208 T A \u2190 wsAlign(S, T ) 1 A \u2190 A \u222a neAlign(S, T, EQ, A, w) 2 A \u2190 A \u222a cwDepAlign(S, T, EQ, A, w, STOP) 3 A \u2190 A \u222a cwT extAlign(S, T, A, w, STOP) 4 A \u2190 A \u222a swDepAlign(S, T, A, w, STOP) 5 A \u2190 A \u222a swT extAlign(S, T, A, w, STOP) 6", |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table><tr><td/><td>System</td><td>P% R% F1% E%</td></tr><tr><td>MSR</td><td colspan=\"2\">MacCartney et al. (2008) Thadani & McKeown (2011) 89.5 86.2 87.8 33.0 85.4 85.3 85.3 21.3 Yao et al. (2013a) 93.7 84.0 88.6 35.3 Yao et al. (2013b) 92.1 82.8 86.8 29.1</td></tr><tr><td/><td>This Work</td><td>93.7 89.8 91.7 43.8</td></tr><tr><td>EDB++</td><td>Yao et al. (2013a) Yao et al. (2013b) This Work</td><td>91.3 82.0 86.4 15.0 90.4 81.9 85.9 13.7 93.5 82.5 87.6 18.3</td></tr></table>", |
|
"text": "http://www.cs.biu.ac.il/ nlp/files/RTE 2006 Aligned.zip", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Results of intrinsic evaluation on two datasets burgh++ 3", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Ablation test results", |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"content": "<table/>", |
|
"text": "Extrinsic evaluation on STS 2013 data", |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |