|
{ |
|
"paper_id": "D17-1001", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:16:53.824367Z" |
|
}, |
|
"title": "Monolingual Phrase Alignment on Parse Forests", |
|
"authors": [ |
|
{ |
|
"first": "Yuki", |
|
"middle": [], |
|
"last": "Arase", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Osaka University", |
|
"location": { |
|
"country": "Japan" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Junichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Manchester", |
|
"location": { |
|
"country": "UK" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We propose an efficient method to conduct phrase alignment on parse forests for paraphrase detection. Unlike previous studies, our method identifies syntactic paraphrases under linguistically motivated grammar. In addition, it allows phrases to non-compositionally align to handle paraphrases with non-homographic phrase correspondences. A dataset that provides gold parse trees and their phrase alignments is created. The experimental results confirm that the proposed method conducts highly accurate phrase alignment compared to human performance.", |
|
"pdf_parse": { |
|
"paper_id": "D17-1001", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We propose an efficient method to conduct phrase alignment on parse forests for paraphrase detection. Unlike previous studies, our method identifies syntactic paraphrases under linguistically motivated grammar. In addition, it allows phrases to non-compositionally align to handle paraphrases with non-homographic phrase correspondences. A dataset that provides gold parse trees and their phrase alignments is created. The experimental results confirm that the proposed method conducts highly accurate phrase alignment compared to human performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Paraphrase detection is crucial in various applications, which has been actively studied for years. Due to difficulties caused by the non-homographic nature of phrase correspondences, the units of correspondence in previous studies are defined as sequences of words like in (Yao et al., 2013) and not syntactic phrases. On the other hand, syntactic structures are important in modeling sentences, e.g., their sentiments and semantic similarities (Socher et al., 2013; Tai et al., 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 274, |
|
"end": 292, |
|
"text": "(Yao et al., 2013)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 446, |
|
"end": 467, |
|
"text": "(Socher et al., 2013;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 468, |
|
"end": 485, |
|
"text": "Tai et al., 2015)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we present an algorithm to align syntactic phrases in a paraphrased pair of sentences. We show that (1) the problem of identifying a legitimate set of syntactic paraphrases under linguistically motivated grammar is formalized, (2) dynamic programing a la CKY (Cocke, 1969; Kasami, 1965; Younger, 1967) makes phrase alignment computationally feasible, (3) alignment quality of phrases can be improved using n-best parse forests instead of 1-best trees, and (4) noncompositional alignment allows non-homographic correspondences of phrases. Motivated by recent Source: Whenever I go to the ground floor for a smoke, I always come face to face with them. Target: Whenever I go down to smoke a cigarette, I come face to face with one of them. findings that syntax is important for phrase embedding (Socher et al., 2013) in which phrasal paraphrases allow semantic similarity to be replicated (Wieting et al., 2016 (Wieting et al., , 2015 , we focus on the syntactic paraphrase alignment. Fig. 1 shows a real example of phrase alignments produced by our method. Alignment proceeds in a bottom-up manner using the compositional nature of phrase alignments. First, word alignments are given. Then, phrase alignments are recursively identified by supporting relations between phrase pairs. Non-compositional alignment is triggered when the compositionality is violated, which is common in paraphrasing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 274, |
|
"end": 287, |
|
"text": "(Cocke, 1969;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 301, |
|
"text": "Kasami, 1965;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 302, |
|
"end": 316, |
|
"text": "Younger, 1967)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 808, |
|
"end": 829, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 902, |
|
"end": 923, |
|
"text": "(Wieting et al., 2016", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 924, |
|
"end": 947, |
|
"text": "(Wieting et al., , 2015", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 998, |
|
"end": 1004, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For systematic research on syntactic phrase alignment in paraphrases, we constructed a gold standard dataset of paraphrase sentences with phrase alignment (20, 678 phrases in 201 paraphrasal sentences). This dataset will be made public for future research on paraphrase alignment. The experiment results show that our method achieves 83.64% and 78.91% in recall and precision in terms of alignment pairs, which are 92% and 89% of human performance, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Due to the large amount of sentence-level paraphrases collected (Dolan et al., 2004; Cohn et al., 2008; Heilman and Smith, 2010; Yin and Sch\u00fctze, 2015; Biran et al., 2016) , researchers can identify phrasal correspondences for natural language inferences (MacCartney et al., 2008; Thadani et al., 2012; Yao et al., 2013) . Current methods extend word alignments to phrases in accordance with the methods in statistical machine translation. However, phrases are defined as a simple sequence of words, which do not conform to syntactic phrases. PPDB (Ganitkevitch et al., 2013) provides syntactic paraphrases similar to synchronous context free grammar (SCFG). As discussed below, SCFG captures only a fraction of paraphrasing phenomenon.", |
|
"cite_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 84, |
|
"text": "(Dolan et al., 2004;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 103, |
|
"text": "Cohn et al., 2008;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 128, |
|
"text": "Heilman and Smith, 2010;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 129, |
|
"end": 151, |
|
"text": "Yin and Sch\u00fctze, 2015;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 152, |
|
"end": 171, |
|
"text": "Biran et al., 2016)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 255, |
|
"end": 280, |
|
"text": "(MacCartney et al., 2008;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 302, |
|
"text": "Thadani et al., 2012;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 303, |
|
"end": 320, |
|
"text": "Yao et al., 2013)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 548, |
|
"end": 575, |
|
"text": "(Ganitkevitch et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In terms of our approach, parallel parsing is a relevant area. Smith and Smith (2004) related monolingual parses in different languages using word alignments, while Burkett and Klein (2008) employed phrase alignments. Moreover, Das and Smith (2009) proposed a model that generates a paraphrase of a given sentence using quasi-synchronous dependency grammar (Smith and Eisner, 2006) . Since they used phrase alignments simply as features, there is no guarantee that the output alignments are legitimate.", |
|
"cite_spans": [ |
|
{ |
|
"start": 63, |
|
"end": 85, |
|
"text": "Smith and Smith (2004)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 165, |
|
"end": 189, |
|
"text": "Burkett and Klein (2008)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 228, |
|
"end": 248, |
|
"text": "Das and Smith (2009)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 357, |
|
"end": 381, |
|
"text": "(Smith and Eisner, 2006)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Synchronous rewriting in parallel parsing (Kaeshammer, 2013; Maillette de Buy Wenniger and Sima'an, 2013) derives parse trees that conform to discontinuous word alignments. In contrast, our method respects parse trees derived by linguistically motivated grammar while handling nonmonotonic phrase alignment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The synchronous assumption in parallel parsing has been argued to be too rigid to handle parallel sentence pairs or even paraphrasal sentence pairs. Burkett et al. (2010) proposed weakly synchronized parallel parsing to tackle this problem. Although this model increases the flexibility, the obtainable alignments are restricted to conform to inversion transduction grammar (ITG) (Wu, 1997) . Similarly, Choe and McClosky (2015) used dependency forests of paraphrasal sentence pairs and allowed disagreements to some extent. However, alignment quality was beyond their scope. Weese et al. (2014) extracted SCFG from paraphrase corpora. They showed that parsing was only successful in 9.1% of paraphrases, confirming that a significant amount of transformations in paraphrases do not conform to compositionality or ITG.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 170, |
|
"text": "Burkett et al. (2010)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 380, |
|
"end": 390, |
|
"text": "(Wu, 1997)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 595, |
|
"text": "Weese et al. (2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "Notation (Table 1). s, t : source and target sentences. \u03c4 : a phrase in the parse tree. \u03c4 R , \u03c4 \u2205 : \u03c4 R is the phrase of a root node; \u03c4 \u2205 is a special phrase with the null span that exists in every parse tree. \u03c6 : a phrase aligned to \u03c4 \u2205 . \u2022, \u2022 : a pair of entities; a pair itself can be regarded as an entity. {\u2022} : a set of entities. m(\u2022) : derives the mother node of a phrase. l(\u2022), r(\u2022) : derive the left and right child nodes, respectively. ds(\u2022) : derives the descendants of a node including itself; \u03c4 \u2208 ds(\u03c4 ). lca(\u2022, \u2022) : derives the lowest common ancestor (LCA) of two phrases.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Table 1: Notation",

"sec_num": null

},
|
{ |
|
"text": "In this study, we formalize the problem of legitimate phrase alignment. For simplicity, we discuss tree alignment instead of forests using Fig. 2 as a running example. Table 1 describes the notation used in this paper. We call a paraphrased pair source sentence s and the other as target t. Superscripts of s and t represent the source and the target, respectively. Specifically, \u03c4 s , \u03c4 t is a pair of source and target phrases. We represent", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 145, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF1" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 175, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Formulation of Phrase Alignment", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "f 1 /f 2 / \u2022 \u2022 \u2022 /f i (\u2022) to abbre- viate f i (\u2022 \u2022 \u2022 f 2 (f 1 (\u2022)) \u2022 \u2022 \u2022 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "as an intuitive illustration. It should be noted that the order of the function symbols is reversed, e.g., l/r(\u03c4 ) (= r(l(\u03c4 ))) derives the right-child of the left-child node of \u03c4 , and l/ds(\u03c4 ) derives the left descendants of \u03c4 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Notation", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "A possible parse tree alignment of s and t is represented as a set of aligned pairs of phrases { \u03c4 s i , \u03c4 t i }. \u03c4 s i and \u03c4 t i are the source and the target phrases that constitute the i-th alignment, respectively. Either \u03c4 s i or \u03c4 t i can be \u03c4 \u2205 when a phrase does not correspond to another sentence, which is called a null-alignment. Each phrase alignment can have support relations as: ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition of a Legitimate Alignment", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "Definition 3.1. A pair h i = \u03c4 s i , \u03c4 t i is supported when a pair of alignments l/ds(\u03c4 s i ), l/ds(\u03c4 t i ) , r/ds(\u03c4 s i ), r/ds(\u03c4 t i ) or l/ds(\u03c4 s i ), r/ds(\u03c4 t i ) , r/ds(\u03c4 s i ), l/ds(\u03c4 t i ) exists. Pre-terminal phrases are supported by the corresponding word alignments.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Definition of a Legitimate Alignment",

"sec_num": "3.2"

},
|
{ |
|
"text": "R = \u21d2 that represent the order of support phrases. Specif- ically, l(\u03c4 s i ), l(\u03c4 t i ) , r(\u03c4 s i ), r(\u03c4 t i ) \u21d2 h i is straight while l(\u03c4 s i ), r(\u03c4 t i ) , r(\u03c4 s i ), l(\u03c4 t i ) R = \u21d2 h i is inverted. In Fig. 2, \u03c4 s m , \u03c4 t m , \u03c4 s n , \u03c4 t n \u21d2 h i , where \u03c4 s m = l/ds(\u03c4 s i ) and \u03c4 s n = r/ds(\u03c4 s i ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support relations are denoted using \u21d2 or", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The number of all possible alignments in s and t, which is denoted as H, is exponential to the length. However, only its fraction constitutes legitimate parse tree alignments. For example, a subset in which the same phrase in s is aligned with multiple phrases in t, called competing alignments, is not legitimate as a parse tree alignment. The relationships among phrases in parse trees impose constraints on a subset to provide legitimacy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support relations are denoted using \u21d2 or", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given word alignments W that provide the basis for the phrase alignment, its legitimate set W L \u2282 W should be 1-to-1 alignments. Starting with W L , a legitimate set of phrase alignments H L (\u2282 H) with an accompanying set of support relations, \u2206 L (\u2282 \u2206) is constructed. A legitimate set of alignments H L , \u2206 L can be enlarged only by adding h i to H L with either the support relation \u21d2 or R = \u21d2 added to \u2206 L . These assume competing alignments among the child phrases, thus cannot co-exist in the same legitimate set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Support relations are denoted using \u21d2 or", |
|
"sec_num": null |
|
}, |
|
{

"text": "h i can be supported by more than one pair of descendant alignments in \u2206 L , i.e., { h m , \u2022 } \u21d2 h i or { h m , \u2022 } R\u21d2 h i exists. For H m = {h m }, we define the relationship \u2264 for alignments, i.e., h p \u2264 h q means that \u03c4 s p \u2208 ds(\u03c4 s q ) \u2227 \u03c4 t p \u2208 ds(\u03c4 t q ). For example, in Fig. 2, h m \u2264 h i and h n \u2264 h i .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Support relations are denoted using \u21d2 or",

"sec_num": null

},
|
{

"text": "Theorem 3.1. There always exists the maximum pair h M \u2208 H m where \u2200h m \u2208 H m , h m \u2264 h M .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Support relations are denoted using \u21d2 or",

"sec_num": null

},
|
{

"text": "H L , \u2206 L should satisfy the conditions in Definition 3.2 to be legitimate as a whole. We denote h i *\u2192 h j when a chain exists in \u2206 L that connects h i to h j regardless of straight or inverted directions of intermediate supports, e.g., ( h i , \u2022 \u21d2 h i+1 ), ( h i+1 , \u2022 R\u21d2 h i+2 ), . . ., ( h j\u22121 , \u2022 \u21d2 h j ). Note that h i *\u2192 h i is always true. Definition 3.2. H L , \u2206 L should satisfy: 1. Root-Pair Containment: \u03c4 s R , \u03c4 t R \u2208 H L . 2. Same-Tree: {\u03c4 s i | \u03c4 s i , \u03c4 t i \u2208 H L } are subsets of phrases in the same complete parse tree of s (same for t). 3. Relevance: \u2200h i \u2208 H L , (h i *\u2192 \u03c4 s R , \u03c4 t R ) \u2208 \u2206 L . 4. Consistency: In H L , a phrase (\u2260 \u03c4 \u2205 ) in the source tree is aligned with at most one phrase (\u2260 \u03c4 \u2205 ) in the target tree, and vice versa. 5. Monotonous: For \u03c4 s i , \u03c4 t i , \u03c4 s j , \u03c4 t j \u2208 H L , \u03c4 s i \u2208 ds(\u03c4 s j ) iff \u03c4 t i \u2208 ds(\u03c4 t j ). 6. Maximum Set: H L is the maximum legitimate set, in the sense that \u2200 \u03c4 s , \u03c4 t \u2208 (H \\ H L ), { \u03c4 s , \u03c4 t } \u222a H L cannot be a legitimate set with any \u2206.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Support relations are denoted using \u21d2 or",

"sec_num": null

},
|
{ |
|
"text": "The Same-Tree condition is required to conduct an alignment on forests that consist of multiple trees in a packed representation. The Consistency condition excludes competing alignments. The Monotonous condition is a consequence of compositionality. The Maximum Set means if h m , h n \u2208 H L are in positions of a parse tree that can support h i , h i and the support relation should be added to H L , \u2206 L . Such a strict locality of compositionality is often violated in practice as discussed in Sec. 2. To tackle this issue, we add another operation to align phrases in a noncompositional way in Sec. 4.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Relevance", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The same aligned pair can have more than one support of descendant alignments because there are numerous descendant node combinations. However, the Monotonous and the Maximum Set conditions allow \u2206 L to be further restricted so that each of aligned pairs in H L has only one support.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lowest Common Ancestor", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "Let us assume that alignment h i is supported by more than one pair of descendant alignments in \u2206 L , i.e., \u2206 L \u2287 ({ h m , h n } \u21d2 h i ) 1 .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lowest Common Ancestor",

"sec_num": "3.3"

},

{

"text": "Figure 3: Inside probability depends on support alignments and paths to reach an LCA.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Lowest Common Ancestor",

"sec_num": "3.3"

},
|
{ |
|
"text": "We denote H m = {h m } and H n = {h n }. For each h m \u2208 H m and h n \u2208 H n , we remove all support relations from \u2206 L except for the maximum pairs or the pre-terminal alignments. The resultant set \u2206 L satisfies:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lowest Common Ancestor", |
|
"sec_num": "3.3" |
|
}, |
|
{

"text": "Theorem 3.2. For all ( h m , h n \u21d2 h i ) \u2208 \u2206 L , \u03c4 s i = lca(\u03c4 s m , \u03c4 s n ) and \u03c4 t i = lca(\u03c4 t m , \u03c4 t n ) are true. In Fig. 2 , \u03c4 s i is the lowest common ancestor (LCA) of \u03c4 s m and \u03c4 s n , and \u03c4 t i is the LCA of \u03c4 t m and \u03c4 t n . Theorem 3.2 constitutes the basis for the dynamic programming (DP) in our phrase alignment algorithm (Sec. 4.2).",

"cite_spans": [],

"ref_spans": [

{

"start": 122,

"end": 128,

"text": "Fig. 2",

"ref_id": "FIGREF1"

}

],

"eq_spans": [],

"section": "Lowest Common Ancestor",

"sec_num": "3.3"

},
|
{ |
|
"text": "We formally model the phrase alignment process as illustrated in Fig. 3 , where h i is aligned from descendant alignments, i.e., h m and h n .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 71, |
|
"text": "Fig. 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Modeling of Phrase Alignment", |
|
"sec_num": "4" |
|
}, |
|
{

"text": "Similar to the probabilistic context free grammar (PCFG), the inside probability \u03b1 i of h i is determined by the inside probabilities, \u03b1 m and \u03b1 n , of the support pairs, together with the probability of the rule, i.e., the way by which h m and h n are combined to support h i as shown in Fig. 3 . It is characterized by four paths, \u03c0 s m,i (the path from \u03c4 s m to \u03c4 s i ), \u03c0 s n,i (\u03c4 s n to \u03c4 s i ), \u03c0 t m,i (\u03c4 t m to \u03c4 t i ), and \u03c0 t n,i (\u03c4 t n to \u03c4 t i ). Each path consists of a set of null-aligned phrases \u03c6 \u2208 \u03c6, \u03c4 \u2205 and their mothers, e.g., the path \u03c0 s m,i in Fig. 3 is a set of \u03c6 s 1 , m(\u03c6 s 1 ) , \u03c6 s 2 , m(\u03c6 s 2 )",

"cite_spans": [],

"ref_spans": [

{

"start": 289,

"end": 295,

"text": "Fig. 3",

"ref_id": null

},

{

"start": 567,

"end": 573,

"text": "Fig. 3",

"ref_id": null

}

],

"eq_spans": [],

"section": "Probabilistic Model",

"sec_num": "4.1"

},
|
{

"text": ", and \u03c6 s 3 , m(\u03c6 s 3 ) . We assume that each occurrence of a null-alignment is independent. (Footnote 1: \u21d2 and R\u21d2 are not distinguished here.) Thus, its probability \u03b2 s m,i is computed as:",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probabilistic Model",

"sec_num": "4.1"

},

{

"text": "Figure 4: Alignment pairs and packed supports.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probabilistic Model",

"sec_num": "4.1"

},
|
{

"text": "\u03b2 s m,i = \u03a0 \u03c6 s k \u2208\u03c0 s m,i P r (\u03c6 s k , \u03c4 \u2205 ). \u03b2 s n,i , \u03b2 t m,i , and \u03b2 t n,i are computed in the same manner. We abbreviate \u03b3 s m,n,i = \u03b2 s m,i \u03b2 s n,i , and likewise \u03b3 t m,n,i = \u03b2 t m,i \u03b2 t n,i .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probabilistic Model",

"sec_num": "4.1"

},
|
{ |
|
"text": "Finally, \u03b1 i can be represented as a simple relation:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilistic Model", |
|
"sec_num": "4.1" |
|
}, |
|
{

"text": "\u03b1 i = \u03b1 m \u03b1 n P r (\u03c4 s i , \u03c4 t i ) \u03b3 s m,n,i \u03b3 t m,n,i . (1) P r (\u2022, \u2022) is the alignment probability parameterized in Sec. 5. Since we assume that the structures of parse trees of s and t are determined by a parser, the values of \u03b3 s m,n,i and \u03b3 t m,n,i are fixed. Therefore, by traversing the parse tree in a bottom-up manner, we can identify an LCA (i.e., \u03c4 i ) for phrases \u03c4 m and \u03c4 n while simultaneously computing \u03b3 m,n,i .",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probabilistic Model",

"sec_num": "4.1"

},
|
{ |
|
"text": "Algorithm 4.1 depicts our algorithm. Given word alignments W = { w s i , w t i }, it constructs legitimate sets of aligned pairs in a bottom-up manner. Like the CKY algorithm, Algorithm 4.1 uses DP to efficiently compute all possible legitimate sets and their probabilities in parallel. In addition, null-alignments are allowed when aligning an LCA supported by aligned descendant nodes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{

"text": "A[\u2022] is indexed by phrases in the parse tree of s and maintains a list of all possible aligned pairs. Furthermore, to deal with non-monotonic alignment (Sec. 4.3), it keeps all competing hypotheses of support relations using packed representations. Specifically, h i is accompanied by its packed support list as illustrated in Fig. 4 ; h 1 = \u03c4 s 1 , \u03c4 t 1 is aligned with supports of { \u03b1 j , h m , h n } like \u03b1 1 , h 3 , h 4 .",

"cite_spans": [],

"ref_spans": [

{

"start": 327,

"end": 333,

"text": "Fig. 4",

"ref_id": null

}

],

"eq_spans": [],

"section": "Alignment Algorithm",

"sec_num": "4.2"

},
|
{ |
|
"text": "Depending on the support alignments, h i has different inside probabilities, i.e., \u03b1 1 , \u03b1 2 , and \u03b1 3 . Since the succeeding process of alignment only deals with the LCA's of \u03c4 s 1 and \u03c4 t 1 that are independent of the support alignment, all for all w s , w t \u2208 W do 4: Find \u03c4 s and \u03c4 t covering w s and w t 5: Compute \u03b1 i of \u03c4 s , \u03c4 t using Eq. 1 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "6: PACK( \u03c4 s , \u03c4 t , \u03b1 i , \u2205 , A) 7: for all \u03c4 s m , \u03c4 s n do", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "for all \u03c4 s i , \u03b3 s m,n,i \u2208 Lca s [\u03c4 s m ][\u03c4 s n ] do 9: ALIGN(\u03c4 s m , \u03c4 s n , \u03c4 s i , \u03b3 s m,n,i , A) 10: function ALIGN(\u03c4 s m , \u03c4 s n , \u03c4 s i , \u03b3 s , A) 11: for all h m = \u03c4 s m , \u03c4 t m \u2208 A[\u03c4 s m ] do", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "12:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "for all h n = \u03c4 s n , \u03c4 t n \u2208 A[\u03c4 s n ] do 13: \u03c4 t i , \u03b3 t \u2190 Lca t [\u03c4 t m ][\u03c4 t n ]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "14:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Compute \u03b1 i using Eq. 115:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "PACK( \u03c4 s i , \u03c4 t i , \u03b1 i , h m , h n , A) 16: function PACK( \u03c4 s , \u03c4 t , \u03b1, h m , h n , A) 17: if \u03c4 s , \u03c4 t \u2208 A[\u03c4 s ] then 18: A[\u03c4 s ] \u2190 A[\u03c4 s ] \u222a \u03b1, h m , h", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "n Merge supports and their inside probability 19: else 20:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "A[\u03c4 s ] \u2190 ( \u03c4 s , \u03c4 t , \u03b1, h m , h n )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "support relations are packed as a support list 2 by the PACK function.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Alignment Algorithm", |
|
"sec_num": "4.2" |
|
}, |
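The packing step described above can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: alignment hypotheses are keyed by the source phrase, and the exact data layout (a dict from source phrase to per-pair hypothesis lists) is made up for the sketch.

```python
# Minimal sketch of the PACK step (Algorithm 4.1): alignment hypotheses for the
# same phrase pair <tau_s, tau_t> are stored once under the source phrase, and
# each new support list is merged into the packed entry together with its
# inside probability, instead of enumerating every combination separately.
def pack(pair, alpha, supports, A):
    tau_s, _tau_t = pair
    entry = A.setdefault(tau_s, {})
    # If the pair already exists, this appends (merges) the new hypothesis;
    # otherwise it creates a fresh entry for the pair.
    entry.setdefault(pair, []).append((alpha, supports))
    return A

A = {}
pack(("ts1", "tt1"), 0.2, ("hm1", "hn1"), A)   # first hypothesis for the pair
pack(("ts1", "tt1"), 0.3, ("hm2", "hn2"), A)   # merged, not duplicated
```

Packing keeps the number of stored entries linear in the number of distinct phrase pairs, which is what makes the later unpacking step for non-compositional alignment the expensive special case.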
|
{ |
|
"text": "A monotonic alignment requires \u03c4 t m \u2208 h m and \u03c4 t n \u2208 h n to have an LCA, which adheres to compositionality in language. However, previous studies have reported that compositionality is violated in monolingual phrase alignment (Burkett et al., 2010; Weese et al., 2014). Heilman and Smith (2010) discuss that complex phrase reordering is prevalent in paraphrases and entailed text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 233, |
|
"end": 255, |
|
"text": "(Burkett et al., 2010;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 256, |
|
"end": 275, |
|
"text": "Weese et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 278, |
|
"end": 302, |
|
"text": "Heilman and Smith (2010)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Compositional Alignment", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "A non-monotonic alignment occurs when corresponding phrases have largely different orders, i.e., one of them (e.g., \u03c4 t m ) is an ancestor of the other (e.g., \u03c4 t n ) or the same phrase. Such a case can exceptionally be compatible when \u03c4 t m has null-alignments and all the aligned phrases of \u03c4 t n fit in these null-alignments. A new alignment \u03c4 s i , \u03c4 t i (= \u03c4 t m ) would be non-monotonically formed. Fig. 5 shows a real example of non-compositional alignment produced by our method. The target phrase \u03c4 t n (\"through the spirit of teamwork\") is null-", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 405, |
|
"end": 411, |
|
"text": "Fig. 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Non-Compositional Alignment", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Algorithm 4.2 Non-Compositional Alignment 1: function TRACE(\u03c4 n , \u03c4 m ) \u03c4 n \u2208 ds(\u03c4 m ) 2: V \u2190 \u2205 3: for all [\u03c4 m ] i do 4: if \u03c4 n \u2208 ds(\u03c6) for \u2203\u03c6 \u2208 \u03a6 [\u03c4m] i then 5: V \u2190 V \u222a \u03a8 [\u03c4m] i \u222a \u03c4 n , (\u03a6 [\u03c4m] i \\ \u03c6)\u222a GAP(\u03c4 n , \u03c6) 6: else if \u03c4 n \u2208 ds(\u03c8) for \u2203\u03c8 \u2208 \u03a8 [\u03c4m] i then 7: V \u2190 V \u222a TRACE(\u03c4 n , \u03c8) 8: else 9: for all [\u03c4 n ] j do 10: V \u2190 V \u222a DOWN([\u03c4 n ] j , [\u03c4 m ] i ) 11: return V ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Compositional Alignment", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "alignment when aligning \u03c4 s m and \u03c4 t m , but then the alignment to \u03c4 s n (\"Relying on team spirit\") is allowed by non-compositional alignment of \u03c4 s i . Unlike monotonic alignment, we have to verify whether the internal structures of \u03c4 t m and \u03c4 t n are compatible. Since the internal structures of \u03c4 t m and \u03c4 t n depend on their supporting alignments, their packed representations in A have to be unpacked, and each pair of supporting alignments for h m and h n must be checked to confirm compatibility. Furthermore, since the aligned phrases inside \u03c4 t m and \u03c4 t n have their own null-alignments, we need to unpack deeper supporting alignments as well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Compositional Alignment", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Algorithm 4.2 checks if target phrases \u03c4 m and \u03c4 n \u2208 ds(\u03c4 m ) are compatible. We use the following notations: [\u03c4 m ] i and [\u03c4 n ] j represent the phrases of \u03c4 m and \u03c4 n with the i-th and j-th sets of supporting alignments, respectively. For \u03c4 t 2 in Fig. 4 ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 256, |
|
"text": "Fig. 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Non-Compositional Alignment", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u03a6_{[\u03c4_m]_i} = {\u03c6_{[\u03c4_m]_i, l}} ([\u03c4_n]_j is similar).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Compositional Alignment", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For each [\u03c4 m ] i , if \u03c4 n fits in its null-alignment like in Fig. 5 , the alignment information is updated at line 5, where the GAP function takes two phrases and returns a set of null-alignments on a path between them. If \u03c4 n is a descendant of a support of \u03c4 m , the compatibility is recursively checked (line 7). Otherwise, the compatibility of the supports of \u03c4 n and \u03c4 m is recursively checked by the DOWN function in a similar manner (line 10).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 68, |
|
"text": "Fig. 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Non-Compositional Alignment", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "When the TRACE function returns a set of { \u03a8 k , \u03a6 k }, all \u03c8 \u2208 \u03a8 k are aligned with phrases in the source and their inside probabilities are stored in A. Thus we can compute the inside probability for each \u03a8 k , \u03a6 k , which is stored in A. (Fig. 5 source sentence: Relying on team spirit, expedition members defeated difficulties)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Non-Compositional Alignment", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Although we have discussed using trees for clarity, the alignment is conducted on forests. The alignment process is basically the same; the only difference is that the same pair may have multiple LCAs. Hence, we need to verify whether the sub-trees can be on the same tree when identifying their LCAs, since multiple nodes may cover the same span with different derivations. This is critical for non-compositional alignment because whether the internal structures are on the same tree must be confirmed while unpacking them. Our alignment process corresponds to re-ranking of forests and may derive a tree different from the 1-best, which may resolve ambiguity in parsing. We use a parser trained beforehand because joint parsing and alignment is computationally too expensive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Forest Alignment", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Next, we parameterize the alignment probability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Parameterization", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We apply the feature-enhanced EM (Berg-Kirkpatrick et al., 2010) due to its ability to use dependent features without an unrealistic independence assumption. This is preferable because the attributes of phrases largely depend on each other.", |
|
"cite_spans": [ |
|
{ |
|
"start": 33, |
|
"end": 64, |
|
"text": "(Berg-Kirkpatrick et al., 2010)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-enhanced EM Algorithm", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Our method is computationally expensive since it handles forests and involves unpacking in the non-compositional alignment process. Thus, we use Viterbi training (Brown et al., 1993) together with a beam search of size \u00b5 b \u2208 N on the feature-enhanced EM. Also, mini-batch training (Liang and Klein, 2009) is applied. Such approximations for efficiency are common in parallel parsing (Burkett and Klein, 2008; Burkett et al., 2010).", |
|
"cite_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 177, |
|
"text": "(Brown et al., 1993)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 298, |
|
"text": "(Liang and Klein, 2009)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 403, |
|
"text": "(Burkett and Klein, 2008;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 425, |
|
"text": "Burkett et al., 2010)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-enhanced EM Algorithm", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In addition, an alignment supported by distant descendants tends to fail to reach a root-pair alignment. Thus, we restrict the generation gap between a support alignment and its LCA to be less than or equal to \u00b5 g \u2208 N.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature-enhanced EM Algorithm", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "In feature-enhanced EM, the alignment probability in Eq. (1) is parameterized using features:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "P_r(\\langle \\tau^s_i, \\tau^t_i \\rangle) \\doteq \\frac{\\exp(\\mathbf{w} \\cdot \\mathbf{F}(a^s_i, a^t_i))}{\\sum_{\\langle \\tau^s_j, \\tau^t_j \\rangle : \\tau^s_i = \\tau^s_j} \\exp(\\mathbf{w} \\cdot \\mathbf{F}(a^s_j, a^t_j))}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5.2" |
|
}, |
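The locally normalized model above can be sketched in a few lines. This is a minimal illustration with made-up weights and feature vectors, not the authors' implementation: the probability of one candidate pair is a softmax of w . F over all candidate pairs sharing the same source phrase, matching the constraint tau^s_i = tau^s_j in the denominator.

```python
import math

def alignment_prob(w, candidates, i):
    """Softmax over candidate target phrases for one fixed source phrase.
    w: weight vector; candidates: list of feature vectors F(a_s, a_t),
    one per candidate target phrase; i: index of the pair being scored."""
    scores = [math.exp(sum(wk * fk for wk, fk in zip(w, F))) for F in candidates]
    return scores[i] / sum(scores)

# Toy example: 2 features, 3 candidate target phrases (values illustrative).
w = [1.0, 0.5]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
p = [alignment_prob(w, feats, i) for i in range(3)]
```

Because the normalization is local to one source phrase, the model stays tractable even though the features themselves may be arbitrarily dependent, which is the point of the feature-enhanced EM formulation.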
|
{ |
|
"text": "In a parse tree, the head of a phrase determines its property. Hence, a lemmatized lexical head a lex \u2208 a combined with its syntactic category a cat \u2208 a is encoded as a feature 3 as shown below. We use semantic (instead of syntactic) heads to encode semantic relationships in paraphrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "1: 1(a s lex = \u2022, a s cat = \u2022, a t lex = \u2022, a t cat = \u2022) 2: 1(SurfaceSim(a s lex = \u2022, a t lex = \u2022)) 3: 1(WordnetSim(a s lex = \u2022, a t lex = \u2022)) 4: 1(EmbeddingSim(a s lex = \u2022, a t lex = \u2022)) 5: 1(IsPrepositionPair(a s lex = \u2022, a t lex = \u2022)) 6: 1(a s cat = \u2022, a t cat = \u2022) 7: 1(IsSameCategory(a s cat = \u2022, a t cat = \u2022))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The first feature is an indicator invoked only at specific values. In contrast, the remaining features are invoked across multiple values, allowing general patterns to be learned. The second feature is invoked if two heads are identical or one head is a substring of the other. The third feature is invoked if two heads are synonyms or derivations extracted from WordNet 4 . The fourth feature is invoked if the cosine similarity between the word embeddings of two heads is larger than a threshold. The fifth feature is invoked when both heads are prepositions, to capture their nature, which differs from that of content words. The last two features are for categories; the sixth is invoked at each category pair, while the seventh is invoked if the input categories are the same.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "To avoid generating a huge number of features, we reduce the syntactic categories to five groups: content words (N, V, ADJ, and ADV), prepositions, coordinations, null (i.e., for \u03c4 \u2205 ), and others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Since our method allows null-alignments, it has a degenerate maximum likelihood solution (Liang and Klein, 2009) that makes every phrase a null-alignment. Similarly, a degenerate solution over-applies non-compositional alignment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 112, |
|
"text": "(Liang and Klein, 2009)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Penalty Function", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "To avoid these issues, a penalty is incorporated:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Penalty Function", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "P_e(\\langle \\tau^s_i, \\tau^t_i \\rangle) = \\begin{cases} \\exp\\{-(|\\tau^s_i|_\\phi + |\\tau^t_i|_\\phi + \\mu_c + 1)^{\\mu_n}\\} & \\text{(non-compositional alignment)} \\\\ \\exp\\{-(|\\tau^s_i|_\\phi + |\\tau^t_i|_\\phi + 1)^{\\mu_n}\\} & \\text{(otherwise)} \\end{cases}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Penalty Function", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "where | \u2022 | \u03c6 computes the span of internal null-alignments, and \u00b5 n \u2265 1.0 and \u00b5 c \u2208 R + control the strength of the penalties for null-alignment and non-compositional alignment, respectively. The penalty function is multiplied by Eq.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Penalty Function", |
|
"sec_num": "5.3" |
|
}, |
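The penalty function above can be sketched directly from the formula. This is a minimal illustration under assumptions: the |.|_phi spans are passed in as precomputed integers, and the parameter values are made up.

```python
import math

def penalty(null_span_s, null_span_t, mu_n, mu_c, non_compositional):
    """P_e: larger internal null-alignment spans are penalized exponentially,
    and non-compositional alignments pay an extra additive cost mu_c before
    the exponent mu_n (>= 1) is applied."""
    base = null_span_s + null_span_t + 1
    if non_compositional:
        base += mu_c
    return math.exp(-(base ** mu_n))

# The penalty multiplies the alignment probability of Eq. (1) as a soft
# constraint when re-ranking alignment pairs in Algorithm 4.1.
p_mono = penalty(0, 0, mu_n=1.5, mu_c=2.0, non_compositional=False)
p_nc = penalty(0, 0, mu_n=1.5, mu_c=2.0, non_compositional=True)
```

With no internal null-alignments, the monotonic case reduces to exp(-1), while the non-compositional case is strictly smaller, which is what steers training away from the degenerate solutions described above.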
|
{ |
|
"text": "(1) as a soft constraint for re-ranking alignment pairs in Algorithm 4.1.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Penalty Function", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Following the spirit of parallel parsing that simultaneously parses and aligns sentences, we linearly interpolate the alignment probability with the parsing probability once the parameters are tuned by EM. When aligning a node pair \u03c4 s i , \u03c4 t i , the overall probability is computed as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination with Parse Probability", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "(1 - \\mu_p)\\,\\alpha_i + \\mu_p\\,\\wp(\\tau^s_i)\\,\\wp(\\tau^t_i)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination with Parse Probability", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "where \\wp(\\cdot) gives the marginal probability in parsing and \u00b5 p \u2208 [0, 1] balances these probabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Combination with Parse Probability", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "As discussed in Sec. 2, previous studies have not conducted syntactic phrase alignment on parse trees. A direct metric does not exist to compare paraphrases that cover different spans, i.e., our syntactic paraphrases and paraphrases of n-grams. Thus, we compared the alignment quality to that of humans as a realistic way to evaluate the performance of our method.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also evaluated the parsing quality. Similar to the alignment quality, differences in phrase structures disturb the comparisons (Sagae et al., 2008). Our method applies the HPSG parser Enju to derive parse forests due to its state-of-the-art performance and ability to provide rich properties of phrases. Hence, we compared our parsing quality to the 1-best parses of Enju.", |
|
"cite_spans": [ |
|
{ |
|
"start": 130, |
|
"end": 150, |
|
"text": "(Sagae et al., 2008)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We used reference translations for machine translation evaluation 5 as sentential paraphrases (Weese et al., 2014). The reference translations of 10 to 30 words were extracted and paired, giving 41K pairs as a training corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 112, |
|
"text": "(Weese et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Resources", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We use different kinds of dictionaries to obtain word alignments W as well as to compute feature functions. First, we extract synonyms and words with a derivational relationship using WordNet. Then we handcraft derivation rules (e.g., create, creation, creator) and extract potentially derivational words from the training corpus. Finally, we use the prepositions defined in (Srikumar and Roth, 2013) as a preposition dictionary to compute the feature function.", |
|
"cite_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 395, |
|
"text": "(Srikumar and Roth, 2013)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Resources", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "In addition, we extend W using word embeddings; we use the MVLSA word embeddings (Rastogi et al., 2015) given their superior performance on word similarity tasks. Specifically, we compute the cosine similarity of embeddings; words whose similarity exceeds a threshold are regarded as similar words. The threshold is empirically set as the 100th highest similarity value between words in the training corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 81, |
|
"end": 103, |
|
"text": "(Rastogi et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Language Resources", |
|
"sec_num": "6.1" |
|
}, |
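The empirical threshold selection just described can be sketched as follows. This is a minimal illustration with toy 2-dimensional vectors and a small k; the paper uses the 100th highest pairwise similarity over the full training vocabulary.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def kth_highest_threshold(embeddings, k):
    """Empirical threshold: the k-th highest pairwise cosine similarity
    between word embeddings in the corpus (k = 100 in the paper)."""
    words = list(embeddings)
    sims = sorted(
        (cosine(embeddings[a], embeddings[b])
         for i, a in enumerate(words) for b in words[i + 1:]),
        reverse=True)
    return sims[k - 1]

# Toy vocabulary (vectors are made up): pairwise sims are 0.8, 0.6, and 0.0.
emb = {"team": [1.0, 0.0], "spirit": [0.8, 0.6], "defeat": [0.0, 1.0]}
t = kth_highest_threshold(emb, k=2)
```

Choosing the threshold from the corpus's own similarity distribution, rather than fixing an absolute value, keeps the number of embedding-induced word alignments roughly constant across corpora.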
|
{ |
|
"text": "Since no annotated corpus provides phrase alignments on parse trees, we created one through two-phase manual annotation. First, a linguistic expert with rich experience in annotating HPSG trees annotated gold-trees for paraphrasal sentence pairs sampled from the training corpus. To diversify the data, only one reference pair per source-language sentence was annotated. Consequently, 201 paraphrased pairs with gold-trees (containing 20,678 phrases) were obtained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold-Standard Data", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Next, three professional English translators identified paraphrased pairs, including null-alignments, given sets of phrases extracted from the gold-trees. These annotators independently annotated the same set, yielding 14,356 phrase alignments that at least one annotator regarded as a paraphrase. All the annotators agreed that 77% of the phrases were paraphrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold-Standard Data", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We used 50 sentence pairs for development and another 151 for testing. These pairs were excluded from the training corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Gold-Standard Data", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Alignment Quality Alignment quality was evaluated by measuring the extent to which the automatic alignment results agree with those of humans. Specifically, we evaluated how well gold-alignments can be replicated by automatic alignment (called recall) and how much automatic alignments overlap with alignments that at least one annotator produced (called precision) as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "\\mathrm{Recall} = \\frac{|\\{h \\mid h \\in H_a \\wedge h \\in G \\cap G'\\}|}{|G \\cap G'|}, \\quad \\mathrm{Precision} = \\frac{|\\{h \\mid h \\in H_a \\wedge h \\in G \\cup G'\\}|}{|H_a|},", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "where H_a is the set of automatic alignments, while G and G' are those produced by two of the annotators, respectively. The function | \u2022 | counts the elements in a set. There are three combinations of G and G' because we had three annotators. The final precision and recall values are their averages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "6.3" |
|
}, |
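The metric just defined can be sketched concretely. This is a minimal illustration with made-up alignment sets; alignments are represented as hashable source/target phrase pairs, which is an assumption of the sketch.

```python
from itertools import combinations

def recall(H_a, G, Gp):
    # Share of alignments agreed by both annotators that the system recovers.
    inter = G & Gp
    return len(H_a & inter) / len(inter)

def precision(H_a, G, Gp):
    # Share of system alignments produced by at least one of the annotators.
    return len(H_a & (G | Gp)) / len(H_a)

def averaged_scores(H_a, annotator_sets):
    """Average recall/precision over the three (G, G') pairs drawn from
    the three annotators."""
    pairs = list(combinations(annotator_sets, 2))
    r = sum(recall(H_a, G, Gp) for G, Gp in pairs) / len(pairs)
    p = sum(precision(H_a, G, Gp) for G, Gp in pairs) / len(pairs)
    return r, p

H_a = {("p1", "q1"), ("p2", "q2")}           # system output (toy)
anns = [{("p1", "q1"), ("p2", "q2")},        # annotator 1
        {("p1", "q1")},                      # annotator 2
        {("p1", "q1"), ("p3", "q3")}]        # annotator 3
r, p = averaged_scores(H_a, anns)
```

Note the asymmetry: recall is computed against the intersection of two annotators (strict agreement), while precision is computed against their union (any annotator suffices), mirroring the definitions above.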
|
{ |
|
"text": "Parsing Quality The parsing quality was evaluated using the CoNLL-X (Buchholz and Marsi, 2006) standard. Dependencies were extracted from the output HPSG trees and evaluated using the official script 6 . Due to this conversion, the accuracy of the relation labels is less important. Thus, we report only the unlabeled attachment score (UAS) 7 . The development and test sets provide 2,371 and 6,957 dependencies, respectively.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 94, |
|
"text": "(Buchholz and Marsi, 2006)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Table 2: roles of hyper-parameters. \u00b5_n controls the penalty for null-alignment; \u00b5_c controls the penalty for non-compositional alignment; \u00b5_p balances alignment and parsing probabilities; \u00b5_b is the beam size at alignment; \u00b5_g is the generation gap to reach an LCA. Since all metrics were computed on sets, approximate randomization (Noreen, 1989; Riezler and Maxwell, 2005) (B = 10K) was used for significance testing. It has been shown to be more conservative than bootstrap resampling (Riezler and Maxwell, 2005).", |
|
"cite_spans": [ |
|
{ |
|
"start": 291, |
|
"end": 305, |
|
"text": "(Noreen, 1989;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 306, |
|
"end": 331, |
|
"text": "Riezler and Maxwell, 2005", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 480, |
|
"text": "(Riezler and Maxwell, 2005)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metric", |
|
"sec_num": "6.3" |
|
}, |
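The approximate randomization test used here can be sketched as follows. This is a minimal illustration, not the paper's evaluation code: system outputs are toy per-item scores, the metric is a simple mean, and B is reduced for speed (the paper uses B = 10K).

```python
import random

def approx_randomization(scores_a, scores_b, metric, B=1000, seed=0):
    """Approximate randomization test (Noreen, 1989): randomly swap the paired
    outputs of systems A and B, B times, and count how often the shuffled
    metric difference is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(metric(scores_a) - metric(scores_b))
    hits = 0
    for _ in range(B):
        xs, ys = [], []
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:   # swap this pair with probability 0.5
                xs.append(b); ys.append(a)
            else:
                xs.append(a); ys.append(b)
        if abs(metric(xs) - metric(ys)) >= observed:
            hits += 1
    return (hits + 1) / (B + 1)  # estimated significance level

mean = lambda xs: sum(xs) / len(xs)
p = approx_randomization([1, 1, 1, 0, 1], [0, 1, 0, 0, 0], mean, B=200)
```

Because shuffling happens at the level of whole paired outputs, the test is valid for set-based metrics like the precision and recall above, where bootstrap resampling would be less conservative.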
|
{ |
|
"text": "Overall Results Table 2 summarizes the hyper-parameters, which were tuned to maximize UAS on the development set using Bayesian optimization. For efficiency, we used 2K samples from the training corpus and set the mini-batch size in feature-enhanced EM to 200, similar to \"rapid training\" in (Burkett and Klein, 2008). We also set \u00b5 b = 50 during EM training to manage the training time. Table 3 shows the performance on the test set for variations of our method and that of the human annotators. The last column shows the percentage of pairs for which a root-pair alignment is reached, called reachability. Our method is denoted as Proposed, while its variations include a method with only monotonic alignment (monotonic), without EM (w/o EM), and a method aligning only 1-best trees (1-best tree).", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 319, |
|
"text": "(Burkett and Klein, 2008)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 23, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 391, |
|
"end": 398, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "The performance of the human annotators was assessed by considering one annotator as the test and the other two as the gold-standard, and then taking the averages, which is the same setting as for our method. We regard this as a pseudo inter-annotator agreement, since the conventional inter-annotator agreement is not directly applicable due to variations in aligned phrases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "Our method significantly outperforms the others, achieving the highest recall and precision for alignment quality. Our recall and precision reach 92% and 89% of those of humans, respectively. Non-compositional alignment is shown to contribute to alignment quality, while the feature-enhanced EM is effective for both the alignment and parsing quality. Comparing our method and the one aligning only 1-best trees demonstrates that the alignment of parse forests largely contributes to the alignment quality. Although we confirmed that aligning larger forests slightly improved recall and precision, the improvements were not statistically significant. The parsing quality was not much affected by phrase alignment, which is further investigated in the following.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "Finally, our method achieved 98% reachability, where 2% of unreachable cases were due to the beam search. While understanding that the reachability depends on experimental data, ours is notably higher than that of SCFG, reported as 9.1% in (Weese et al., 2014) . These results show the ability of our method to accurately align paraphrases with divergent phrase correspondences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 260, |
|
"text": "(Weese et al., 2014)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Discussion", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "We investigated the effect of the mini-batch size in EM training using the entire training corpus (41K pairs). When increasing the mini-batch size from 200 to 2K, the recall, precision, and UAS values are fairly stable. In addition, they are insensitive to the amount of training data, showing values comparable to the model trained on 2K samples. These results demonstrate that our method can be trained with a moderate amount of data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Effect of Mini-Batch Size", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Observations Previous studies show that parallel parsing improves parsing quality, whereas such an effect is insignificant here. We examine the causes through manual observation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Effect of Mini-Batch Size", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The evaluation script indicated that our method corrected 34 errors while introducing 41 new errors 8 . We further analyzed these 75 cases; 12 cases are ambiguous, as both the gold-standard and the output are correct. In addition, 8 cases are due to erroneous original sentences that should be disregarded, e.g., \"For two weeks ago,...\" and \"According to the source, will also meet...\". Consequently, our method in reality corrected 32 errors while introducing 23, out of 446 errors in the 1-best trees, which achieves a 2.5% error reduction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Effect of Mini-Batch Size", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These are promising results for our method to improve parsing quality, especially on PP-attachment (159 errors in 1-best), which contained 14 of the 32 corrected errors. Fig. 1 shows a real example; the phrase \"for a smoke\" in the source was mistakenly attached to \"ground floor\" in the 1-best tree. This error was corrected as depicted. Duan et al. (2016) showed that paraphrases artificially generated using n-best parses improved the parsing quality. One reason for the limited improvement in our experiments may be that structural changes in our natural paraphrases are more dynamic than the level useful for resolving ambiguities. We will investigate this further in future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 344, |
|
"end": 362, |
|
"text": "Duan et al. (2016)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 173, |
|
"end": 179, |
|
"text": "Fig. 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Effect of Mini-Batch Size", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We propose an efficient method for phrase alignment on parse forests of paraphrased sentences. To increase the amount of collected paraphrases, we plan to extend our method to align comparable paraphrases, i.e., partially paraphrasal sentences. In addition, we will apply our method to parallel parsing and other grammars, e.g., projective dependency trees. Furthermore, we will apply such syntactic paraphrases to phrase embedding.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "This is true except for a non-compositional alignment where the packed representation must be unpacked.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We also tried features based on the configurations of the source and target sub-trees, similar to (Das and Smith, 2009), as well as features based on the spans of null-alignments. However, none of them contributed to alignment quality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://wordnet.princeton.edu", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NIST OpenMT corpora: LDC2010T14, LDC2010T17, LDC2010T21, LDC2010T23, LDC2013T03", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://ilk.uvt.nl/conll/software.html 7 Although omitted, the labeled attachment score showed the same tendency as UAS.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Alignments were obtained by the model trained using the entire corpus with the 1K mini-batch size.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www-bigdata.ist.osaka-u.ac.jp/ arase/pj/phrase-alignment/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Professor Issei Sato for permission to use their package for Bayesian optimization. Special thanks also to Dr. Yuka Tateishi for her contribution to HPSG tree annotation. Advice and comments given by Professor Takuya Matsuzaki and Professor Yusuke Miyao have been a great help in applying Enju parser for this project. We appreciate the anonymous reviewers for their insightful comments and suggestions to improve the paper. This project is funded by Microsoft Research Asia and the Kayamori Foundation of Informational Science Advancement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The supplemental material is available at our web site 9 that provides proofs of the theorems, pseudocodes of the algorithms, and more experiment results with examples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supplemental Material", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Painless unsupervised learning with features", |
|
"authors": [ |
|
{ |
|
"first": "Taylor", |
|
"middle": [], |
|
"last": "Berg-Kirkpatrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Bouchard-C\u00f4t\u00e9", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Denero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "582--590", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard-C\u00f4t\u00e9, John DeNero, and Dan Klein. 2010. Painless un- supervised learning with features. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL- HLT), pages 582-590, Los Angeles, California.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Mining paraphrasal typed templates from a plain text corpus", |
|
"authors": [ |
|
{ |
|
"first": "Or", |
|
"middle": [], |
|
"last": "Biran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Terra", |
|
"middle": [], |
|
"last": "Blevins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1913--1923", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Or Biran, Terra Blevins, and Kathleen McKeown. 2016. Mining paraphrasal typed templates from a plain text corpus. In Proceedings of the Annual Meeting of the Association for Computational Lin- guistics (ACL), pages 1913-1923, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The mathematics of statistical machine translation: Parameter estimation", |
|
"authors": [ |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Della Pietra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Robert", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Mercer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Computational Linguistics", |
|
"volume": "19", |
|
"issue": "2", |
|
"pages": "263--311", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The math- ematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263- 311.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "CoNLL-X shared task on multilingual dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Sabine", |
|
"middle": [], |
|
"last": "Buchholz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erwin", |
|
"middle": [], |
|
"last": "Marsi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceesings of the Conference on Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "149--164", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceesings of the Conference on Natural Language Learning (CoNLL), pages 149-164, New York City.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Joint parsing and alignment with weakly synchronized grammars", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Burkett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Blitzer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "127--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Burkett, John Blitzer, and Dan Klein. 2010. Joint parsing and alignment with weakly synchro- nized grammars. In Proceedings of the Annual Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies (NAACL-HLT), pages 127-135, Los Angeles, California.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Two languages are better than one (for syntactic parsing)", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Burkett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceesings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "877--886", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Burkett and Dan Klein. 2008. Two languages are better than one (for syntactic parsing). In Pro- ceesings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 877-886, Honolulu, Hawaii.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A formal characterization of parsing word alignments by synchronous grammars with empirical evidence to the ITG hypothesis", |
|
"authors": [], |
|
"year": 2013, |
|
"venue": "Proceedings of the Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "58--67", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gideon Maillette de Buy Wenniger and Khalil Sima'an. 2013. A formal characterization of parsing word alignments by synchronous grammars with empiri- cal evidence to the ITG hypothesis. In Proceedings of the Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST), pages 58-67, At- lanta, Georgia.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Parsing paraphrases with joint inference", |
|
"authors": [ |
|
{ |
|
"first": "Do", |
|
"middle": [ |
|
"Kook" |
|
], |
|
"last": "Choe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "McClosky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1223--1233", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Do Kook Choe and David McClosky. 2015. Pars- ing paraphrases with joint inference. In Proceed- ings of the Joint Conference of the Annual Meet- ing of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 1223- 1233, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Programming Languages and Their Compilers: Preliminary Notes", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Cocke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1969, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Cocke. 1969. Programming Languages and Their Compilers: Preliminary Notes. Courant Institute of Mathematical Sciences, New York University.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Constructing corpora for the development and evaluation of paraphrase systems", |
|
"authors": [ |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Cohn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "4", |
|
"pages": "597--614", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Trevor Cohn, Chris Callison-Burch, and Mirella La- pata. 2008. Constructing corpora for the develop- ment and evaluation of paraphrase systems. Com- putational Linguistics, 34(4):597-614.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Paraphrase identification as probabilistic quasi-synchronous recognition", |
|
"authors": [ |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (ACL-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "468--476", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dipanjan Das and Noah A. Smith. 2009. Paraphrase identification as probabilistic quasi-synchronous recognition. In Proceedings of the Joint Conference of the Annual Meeting of the Association for Com- putational Linguistics and the International Joint Conference on Natural Language Processing (ACL- IJCNLP), pages 468-476, Suntec, Singapore.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources", |
|
"authors": [ |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Dolan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Quirk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Brockett", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceesings of the International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "350--356", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Pro- ceesings of the International Conference on Com- putational Linguistics (COLING), pages 350-356, Geneva, Switzerland.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Generating disambiguating paraphrases for structurally ambiguous sentences", |
|
"authors": [ |
|
{ |
|
"first": "Manjuan", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Linguistic Annotation Workshop (LAW)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manjuan Duan, Ethan Hill, and Michael White. 2016. Generating disambiguating paraphrases for struc- turally ambiguous sentences. In Proceedings of the Linguistic Annotation Workshop (LAW), pages 160- 170, Berlin, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "PPDB: The paraphrase database", |
|
"authors": [ |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "758--764", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the Annual Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies (NAACL-HLT), pages 758-764, Atlanta, Georgia.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Tree edit models for recognizing textual entailments, paraphrases, and answers to questions", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1011--1019", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Heilman and Noah A. Smith. 2010. Tree edit models for recognizing textual entailments, para- phrases, and answers to questions. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies (NAACL- HLT), pages 1011-1019, Los Angeles, California.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Tadao Kasami. 1965. An efficient recognition and syntax-analysis algorithm for context-free languages", |
|
"authors": [ |
|
{ |
|
"first": "Miriam", |
|
"middle": [], |
|
"last": "Kaeshammer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "68--77", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Miriam Kaeshammer. 2013. Synchronous linear context-free rewriting systems for machine transla- tion. In Proceedings of the Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST), pages 68-77, Atlanta, Georgia. Tadao Kasami. 1965. An efficient recognition and syntax-analysis algorithm for context-free lan- guages. Scientific report AFCRL-65-758, Air Force Cambridge Research Lab.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Online EM for unsupervised models", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "611--619", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang and Dan Klein. 2009. Online EM for un- supervised models. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 611- 619, Boulder, Colorado.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "A phrase-based alignment model for natural language inference", |
|
"authors": [ |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michel", |
|
"middle": [], |
|
"last": "Galley", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceesings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "802--811", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bill MacCartney, Michel Galley, and Christopher D. Manning. 2008. A phrase-based alignment model for natural language inference. In Proceesings of the Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 802-811, Hon- olulu, Hawaii.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Feature forest models for probabilistic HPSG parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Computational Linguistics", |
|
"volume": "34", |
|
"issue": "1", |
|
"pages": "35--80", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yusuke Miyao and Jun'ichi Tsujii. 2008. Feature for- est models for probabilistic HPSG parsing. Compu- tational Linguistics, 34(1):35-80.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Computer-Intensive Methods for Testing Hypotheses: An Introduction", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Noreen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. Wiley.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Multiview LSA: Representation learning via generalized CCA", |
|
"authors": [ |
|
{ |
|
"first": "Pushpendre", |
|
"middle": [], |
|
"last": "Rastogi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raman", |
|
"middle": [], |
|
"last": "Arora", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "556--566", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pushpendre Rastogi, Benjamin Van Durme, and Raman Arora. 2015. Multiview LSA: Representation learn- ing via generalized CCA. In Proceedings of the An- nual Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies (NAACL-HLT), pages 556-566, Denver, Colorado.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "On some pitfalls in automatic evaluation and significance testing for MT", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Riezler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Maxwell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "57--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefan Riezler and John T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance test- ing for MT. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 57-64, Ann Arbor, Michigan.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Challenges in mapping of syntactic representations for frameworkindependent parser evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Kenji", |
|
"middle": [], |
|
"last": "Sagae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takuya", |
|
"middle": [], |
|
"last": "Matsuzaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Workshop on Automated Syntatic Annotations for Interoperable Language Resources", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenji Sagae, Yusuke Miyao, Takuya Matsuzaki, and Jun'ichi Tsujii. 2008. Challenges in map- ping of syntactic representations for framework- independent parser evaluation. In Proceedings of the Workshop on Automated Syntatic Annotations for In- teroperable Language Resources.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Quasisynchronous grammars: Alignment by soft projection of syntactic dependencies", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Eisner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Workshop on Statistical Machine Translation (WMT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "23--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Smith and Jason Eisner. 2006. Quasi- synchronous grammars: Alignment by soft projec- tion of syntactic dependencies. In Proceedings of the Workshop on Statistical Machine Translation (WMT), pages 23-30, New York City.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Bilingual parsing with factored estimation: Using English to parse Korean", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceesings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David A. Smith and Noah A. Smith. 2004. Bilingual parsing with factored estimation: Using English to parse Korean. In Proceesings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 49-56, Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Recursive deep models for semantic compositionality over a sentiment treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Chuang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Ng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Potts", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceesings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1631--1642", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceesings of the Conference on Em- pirical Methods in Natural Language Processing (EMNLP), pages 1631-1642, Seattle, Washington, USA.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Modeling semantic relations expressed by prepositions", |
|
"authors": [ |
|
{ |
|
"first": "Vivek", |
|
"middle": [], |
|
"last": "Srikumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Transactions of the Association of Computational Linguistics (TACL)", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "231--242", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vivek Srikumar and Dan Roth. 2013. Modeling se- mantic relations expressed by prepositions. Trans- actions of the Association of Computational Linguis- tics (TACL), 1:231-242.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Improved semantic representations from tree-structured long short-term memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Kai Sheng", |
|
"middle": [], |
|
"last": "Tai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1556--1566", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. pages 1556-1566, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "A joint phrasal and dependency model for paraphrase alignment", |
|
"authors": [ |
|
{ |
|
"first": "Kapil", |
|
"middle": [], |
|
"last": "Thadani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Scott", |
|
"middle": [], |
|
"last": "Martin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceesings of the International Conference on Computational Linguistics (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1229--1238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kapil Thadani, Scott Martin, and Michael White. 2012. A joint phrasal and dependency model for para- phrase alignment. In Proceesings of the Inter- national Conference on Computational Linguistics (COLING), pages 1229-1238, Mumbai, India.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "PARADIGM: Paraphrase diagnostics through grammar matching", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Weese", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juri", |
|
"middle": [], |
|
"last": "Ganitkevitch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "192--201", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan Weese, Juri Ganitkevitch, and Chris Callison- Burch. 2014. PARADIGM: Paraphrase diagnostics through grammar matching. In Proceedings of the Conference of the European Chapter of the Associ- ation for Computational Linguistics (EACL), pages 192-201, Gothenburg, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "From paraphrase database to compositional paraphrase model and back", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Transactions of the Association of Computational Linguistics (TACL)", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "345--358", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to com- positional paraphrase model and back. Transac- tions of the Association of Computational Linguis- tics (TACL), 3(1):345-358.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Towards universal paraphrastic sentence embeddings", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Wieting", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceesings of the International Conference on Learning Representations (ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sen- tence embeddings. In Proceesings of the Inter- national Conference on Learning Representations (ICLR).", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "Dekai", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Computational Linguistics", |
|
"volume": "23", |
|
"issue": "3", |
|
"pages": "377--403", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Semi-Markov phrasebased monolingual alignment", |
|
"authors": [ |
|
{ |
|
"first": "Xuchen", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Van Durme", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceesings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "590--600", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xuchen Yao, Benjamin Van Durme, Chris Callison- Burch, and Peter Clark. 2013. Semi-Markov phrase- based monolingual alignment. In Proceesings of the Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 590-600, Seat- tle, Washington, USA.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Discriminative phrase embedding for paraphrase identification", |
|
"authors": [ |
|
{ |
|
"first": "Wenpeng", |
|
"middle": [], |
|
"last": "Yin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1368--1373", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenpeng Yin and Hinrich Sch\u00fctze. 2015. Discriminative phrase embedding for paraphrase identification. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1368-1373, Denver, Colorado.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Recognition and parsing of context-free languages in time n^3", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": ["H."], |
|
"last": "Younger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "Information and Control", |
|
"volume": "10", |
|
"issue": "2", |
|
"pages": "189--208", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel H. Younger. 1967. Recognition and parsing of context-free languages in time n^3. Information and Control, 10(2):189-208.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Example of phrase alignments", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Alignment pair and its supports: an alignment pair is supported by alignments of their descendant phrases", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Target: Members of the scientific team overcame difficulties through the spirit of teamwork.", |
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "(a_0, \u2026, a_n) consists of n attributes of \u03c4. F(\u00b7, \u00b7) and w are vectors of feature functions and their weights, respectively.", |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"text": "", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"text": "Algorithm 4.1 Phrase Alignment. 1: LCAs and \u03b3 in parse trees of s and t are computed and stored in Lca_s[\u00b7][\u00b7] and Lca_t[\u00b7][\u00b7]. 2: set A[\u03c4_s] \u2190 \u2205 for all \u03c4_s. 3:", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "There are [\u03c4_t2]_1 supported by h_5, h_3 and [\u03c4_t2]_2 supported by h_6, h_7. \u03a8_{[\u03c4_m]_i} consists of sets of aligned target phrases {\u03c8_{[\u03c4_m]_i,k}} and null-alignments.", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "Summary of the hyper-parameters", |
|
"content": "<table><tr><td>Method</td><td>Recall</td><td>Prec.</td><td>UAS</td><td>%</td></tr><tr><td>Human</td><td>90.65</td><td>88.21</td><td>-</td><td>-</td></tr><tr><td>Proposed</td><td>83.64</td><td>78.91</td><td>93.49</td><td>98</td></tr><tr><td>Monotonic</td><td>82.86 *</td><td>77.97 *</td><td>93.49</td><td>98</td></tr><tr><td>w/o EM</td><td>81.33</td><td>75.09 *</td><td>92.91 *</td><td>86</td></tr><tr><td>1-best tree</td><td>80.11 *</td><td>73.26 *</td><td>93.56</td><td>100</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Evaluation results on the test set, where * represents p-value < 0.05 against our method.", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |