{
"paper_id": "D14-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:55:55.924527Z"
},
"title": "Learning to Differentiate Better from Worse Translations",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": "",
"affiliation": {
"laboratory": "ALT Research Group",
"institution": "Qatar Computing Research Institute",
"location": {
"country": "Qatar Foundation"
}
},
"email": "[email protected]"
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": "",
"affiliation": {
"laboratory": "ALT Research Group",
"institution": "Qatar Computing Research Institute",
"location": {
"country": "Qatar Foundation"
}
},
"email": "[email protected]"
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": "",
"affiliation": {
"laboratory": "ALT Research Group",
"institution": "Qatar Computing Research Institute",
"location": {
"country": "Qatar Foundation"
}
},
"email": "[email protected]"
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": "",
"affiliation": {
"laboratory": "ALT Research Group",
"institution": "Qatar Computing Research Institute",
"location": {
"country": "Qatar Foundation"
}
},
"email": "[email protected]"
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": "",
"affiliation": {
"laboratory": "ALT Research Group",
"institution": "Qatar Computing Research Institute",
"location": {
"country": "Qatar Foundation"
}
},
"email": "[email protected]"
},
{
"first": "Massimo",
"middle": [],
"last": "Nicosia",
"suffix": "",
"affiliation": {
"laboratory": "ALT Research Group",
"institution": "Qatar Computing Research Institute",
"location": {
"country": "Qatar Foundation"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a pairwise learning-to-rank approach to machine translation evaluation that learns to differentiate better from worse translations in the context of a given reference. We integrate several layers of linguistic information encapsulated in tree-based structures, making use of both the reference and the system output simultaneously, thus bringing our ranking closer to how humans evaluate translations. Most importantly, instead of deciding upfront which types of features are important, we use the learning framework of preference re-ranking kernels to learn the features automatically. The evaluation results show that learning in the proposed framework yields better correlation with humans than computing the direct similarity over the same type of structures. Also, we show our structural kernel learning (SKL) can be a general framework for MT evaluation, in which syntactic and semantic information can be naturally incorporated.",
"pdf_parse": {
"paper_id": "D14-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a pairwise learning-to-rank approach to machine translation evaluation that learns to differentiate better from worse translations in the context of a given reference. We integrate several layers of linguistic information encapsulated in tree-based structures, making use of both the reference and the system output simultaneously, thus bringing our ranking closer to how humans evaluate translations. Most importantly, instead of deciding upfront which types of features are important, we use the learning framework of preference re-ranking kernels to learn the features automatically. The evaluation results show that learning in the proposed framework yields better correlation with humans than computing the direct similarity over the same type of structures. Also, we show our structural kernel learning (SKL) can be a general framework for MT evaluation, in which syntactic and semantic information can be naturally incorporated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We have seen in recent years fast improvement in the overall quality of machine translation (MT) systems. This was only possible because of the use of automatic metrics for MT evaluation, such as BLEU (Papineni et al., 2002) , which is the defacto standard; and more recently: TER (Snover et al., 2006) and METEOR (Lavie and Denkowski, 2009) , among other emerging MT evaluation metrics. These automatic metrics provide fast and inexpensive means to compare the output of different MT systems, without the need to ask for human judgments each time the MT system has been changed.",
"cite_spans": [
{
"start": 201,
"end": 224,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
},
{
"start": 281,
"end": 302,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF26"
},
{
"start": 314,
"end": 341,
"text": "(Lavie and Denkowski, 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a result, this has enabled rapid development in the field of statistical machine translation (SMT), by allowing to train and tune systems as well as to track progress in a way that highly correlates with human judgments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Today, MT evaluation is an active field of research, and modern metrics perform analysis at various levels, e.g., lexical (Papineni et al., 2002; Snover et al., 2006) , including synonymy and paraphrasing (Lavie and Denkowski, 2009) ; syntactic (Gim\u00e9nez and M\u00e0rquez, 2007; Popovi\u0107 and Ney, 2007; Liu and Gildea, 2005) ; semantic (Gim\u00e9nez and M\u00e0rquez, 2007; Lo et al., 2012) ; and discourse (Comelles et al., 2010; Wong and Kit, 2012; .",
"cite_spans": [
{
"start": 122,
"end": 145,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF18"
},
{
"start": 146,
"end": 166,
"text": "Snover et al., 2006)",
"ref_id": "BIBREF26"
},
{
"start": 205,
"end": 232,
"text": "(Lavie and Denkowski, 2009)",
"ref_id": "BIBREF12"
},
{
"start": 245,
"end": 272,
"text": "(Gim\u00e9nez and M\u00e0rquez, 2007;",
"ref_id": "BIBREF7"
},
{
"start": 273,
"end": 295,
"text": "Popovi\u0107 and Ney, 2007;",
"ref_id": "BIBREF19"
},
{
"start": 296,
"end": 317,
"text": "Liu and Gildea, 2005)",
"ref_id": "BIBREF13"
},
{
"start": 329,
"end": 356,
"text": "(Gim\u00e9nez and M\u00e0rquez, 2007;",
"ref_id": "BIBREF7"
},
{
"start": 357,
"end": 373,
"text": "Lo et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 390,
"end": 413,
"text": "(Comelles et al., 2010;",
"ref_id": "BIBREF4"
},
{
"start": 414,
"end": 433,
"text": "Wong and Kit, 2012;",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic MT evaluation metrics compare the output of a system to one or more human references in order to produce a similarity score. The quality of such a metric is typically judged in terms of correlation of the scores it produces with scores given by human judges. As a result, some evaluation metrics have been trained to reproduce the scores assigned by humans as closely as possible (Albrecht and Hwa, 2008) . Unfortunately, humans have a hard time assigning an absolute score to a translation. Hence, direct human evaluation scores such as adequacy and fluency, which were widely used in the past, are now discontinued in favor of ranking-based evaluations, where judges are asked to rank the output of 2 to 5 systems instead. It has been shown that using such ranking-based assessments yields much higher inter-annotator agreement (Callison-Burch et al., 2007) .",
"cite_spans": [
{
"start": 390,
"end": 414,
"text": "(Albrecht and Hwa, 2008)",
"ref_id": "BIBREF0"
},
{
"start": 840,
"end": 869,
"text": "(Callison-Burch et al., 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While evaluation metrics still produce numerical scores, in part because MT evaluation shared tasks at NIST and WMT ask for it, there has also been work on a ranking formulation of the MT evaluation task for a given set of outputs. This was shown to yield higher correlation with human judgments (Duh, 2008; Song and Cohn, 2011) .",
"cite_spans": [
{
"start": 296,
"end": 307,
"text": "(Duh, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 308,
"end": 328,
"text": "Song and Cohn, 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Learning automatic metrics in a pairwise setting, i.e., learning to distinguish between two alternative translations and to decide which of the two is better (which is arguably one of the easiest ways to produce a ranking), emulates closely how human judges perform evaluation assessments in reality. Instead of learning a similarity function between a translation and the reference, they learn how to differentiate a better from a worse translation given a corresponding reference. While the pairwise setting does not provide an absolute quality scoring metric, it is useful for most evaluation and MT development scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a pairwise learning setting similar to that of Duh (2008) , but we extend it to a new level, both in terms of feature representation and learning framework. First, we integrate several layers of linguistic information encapsulated in tree-based structures; Duh (2008) only used lexical and POS matches as features. Second, we use information about both the reference and two alternative translations simultaneously, thus bringing our ranking closer to how humans rank translations. Finally, instead of deciding upfront which types of features between hypotheses and references are important, we use a our structural kernel learning (SKL) framework to generate and select them automatically.",
"cite_spans": [
{
"start": 73,
"end": 83,
"text": "Duh (2008)",
"ref_id": "BIBREF6"
},
{
"start": 283,
"end": 293,
"text": "Duh (2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The structural kernel learning (SKL) framework we propose consists in: (i) designing a structural representation, e.g., using syntactic and discourse trees of translation hypotheses and a references; and (ii) applying structural kernels (Moschitti, 2006; Moschitti, 2008) , to such representations in order to automatically inject structural features in the preference re-ranking algorithm. We use this method with translation-reference pairs to directly learn the features themselves, instead of learning the importance of a predetermined set of features. A similar learning framework has been proven to be effective for question answering (Moschitti et al., 2007) , and textual entailment recognition (Zanzotto and Moschitti, 2006) .",
"cite_spans": [
{
"start": 237,
"end": 254,
"text": "(Moschitti, 2006;",
"ref_id": "BIBREF16"
},
{
"start": 255,
"end": 271,
"text": "Moschitti, 2008)",
"ref_id": "BIBREF17"
},
{
"start": 641,
"end": 665,
"text": "(Moschitti et al., 2007)",
"ref_id": "BIBREF15"
},
{
"start": 717,
"end": 733,
"text": "Moschitti, 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goals are twofold: (i) in the short term, to demonstrate that structural kernel learning is suitable for this task, and can effectively learn to rank hypotheses at the segment-level; and (ii) in the long term, to show that this approach provides a unified framework that allows to integrate several layers of linguistic analysis and information and to improve over the state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Below we report the results of some initial experiments using syntactic and discourse structures. We show that learning in the proposed framework yields better correlation with humans than applying the traditional translation-reference similarity metrics using the same type of structures. We also show that the contributions of syntax and discourse information are cumulative. Finally, despite the limited information we use, we achieve correlation at the segment level that outperforms BLEU and other metrics at WMT12, e.g., our metric would have been ranked higher in terms of correlation with human judgments compared to TER, NIST, and BLEU in the WMT12 Metrics shared task (Callison-Burch et al., 2012) .",
"cite_spans": [
{
"start": 678,
"end": 707,
"text": "(Callison-Burch et al., 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our pairwise setting, each sentence s in the source language is represented by a tuple t 1 , t 2 , r , where t 1 and t 2 are two alternative translations and r is a reference translation. Our goal is to develop a classifier of such tuples that decides whether t 1 is a better translation than t 2 given the reference r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel-based Learning from Linguistic Structures",
"sec_num": "2"
},
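To make the pairwise setting above concrete, here is a minimal sketch of the learning objects as they might be represented in code; the names PairwiseExample and prefer_first are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of the pairwise setting: each source sentence yields a tuple
# (t1, t2, r) and a binary label saying whether t1 is the better translation.
# The class and function names here are illustrative assumptions, not from the paper.
from dataclasses import dataclass

@dataclass
class PairwiseExample:
    t1: str     # first candidate translation
    t2: str     # second candidate translation
    ref: str    # human reference translation
    label: int  # +1 if t1 is better than t2 given ref, -1 otherwise

def prefer_first(score_pair, example: PairwiseExample) -> bool:
    """score_pair is any learned decision function over (t1, t2, ref); positive means t1 wins."""
    return score_pair(example.t1, example.t2, example.ref) > 0
```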
{
"text": "Engineering features for deciding whether t 1 is a better translation than t 2 is a difficult task. Thus, we rely on the automatic feature extraction enabled by the SKL framework, and our task is reduced to choosing: (i) a meaningful structural representation for t 1 , t 2 , r , and (ii) a feature function \u03c6 mt that maps such structures to substructures, i.e., our feature space. Since the design of \u03c6 mt is complex, we use tree kernels applied to two simpler structural mappings \u03c6 M (t 1 , r) and \u03c6 M (t 2 , r). The latter generate the tree representations for the translation-reference pairs (t 1 , r) and (t 2 , r). The next section shows such mappings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel-based Learning from Linguistic Structures",
"sec_num": "2"
},
{
"text": "To represent a translation-reference pair (t, r), we adopt shallow syntactic trees combined with RSTstyle discourse trees. Shallow trees have been successfully used for question answering (Severyn and Moschitti, 2012) and semantic textual similarity (Severyn et al., 2013b) ; while discourse information has proved useful in MT evaluation . Combined shallow syntax and discourse trees worked well for concept segmentation and labeling (Saleh et al., 2014a shows two example trees combining discourse, shallow syntax and POS: one for a translation hypothesis (top) and the other one for the reference (bottom). To build such structures, we used the Stanford POS tagger (Toutanova et al., 2003) , the Illinois chunker (Punyakanok and Roth, 2001) , and the discourse parser 1 of (Joty et al., 2012; Joty et al., 2013) .",
"cite_spans": [
{
"start": 188,
"end": 217,
"text": "(Severyn and Moschitti, 2012)",
"ref_id": "BIBREF23"
},
{
"start": 250,
"end": 273,
"text": "(Severyn et al., 2013b)",
"ref_id": "BIBREF25"
},
{
"start": 435,
"end": 455,
"text": "(Saleh et al., 2014a",
"ref_id": "BIBREF21"
},
{
"start": 668,
"end": 692,
"text": "(Toutanova et al., 2003)",
"ref_id": "BIBREF28"
},
{
"start": 716,
"end": 743,
"text": "(Punyakanok and Roth, 2001)",
"ref_id": "BIBREF20"
},
{
"start": 776,
"end": 795,
"text": "(Joty et al., 2012;",
"ref_id": "BIBREF9"
},
{
"start": 796,
"end": 814,
"text": "Joty et al., 2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "2.1"
},
{
"text": "The lexical items constitute the leaves of the tree. The words are connected to their respective POS tags, which are in turn grouped into chunks. Then, the chunks are grouped into elementary discourse units (EDU), to which the nuclearity status is attached (i.e., NUCLEUS or SATELLITE). Finally, EDUs and higher-order discourse units are connected by discourse relations (e.g., DIS:ELABORATION).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representations",
"sec_num": "2.1"
},
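As a rough illustration of the layering just described (words, POS tags, chunks, EDUs with nuclearity, discourse relations), the following toy sketch builds one such combined tree as a bracketed string. The hand-written POS tags, chunks, and discourse labels are assumptions made purely for illustration; in the paper these come from the Stanford POS tagger, the Illinois chunker, and the discourse parser.

```python
# Toy construction of a combined discourse / shallow-syntax / POS tree as a
# bracketed string, mirroring the layering described above. The tiny hand-written
# analysis is illustrative only; the paper obtains it from external tools.
def node(label, children):
    return "(" + label + " " + " ".join(children) + ")"

words_pos = [("give", "VB"), ("them", "PRP"), ("the", "DT"), ("time", "NN")]

# words -> POS -> chunks
vp = node("VP", [node(pos, [w]) for w, pos in words_pos[:2]])
np = node("NP", [node(pos, [w]) for w, pos in words_pos[2:]])

# chunks -> elementary discourse unit, annotated with its nuclearity status
edu = node("EDU:NUCLEUS", [vp, np])

# EDUs (and higher-order units) -> discourse relation
tree = node("DIS:ELABORATION", [edu])
print(tree)
# (DIS:ELABORATION (EDU:NUCLEUS (VP (VB give) (PRP them)) (NP (DT the) (NN time))))
```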
{
"text": "In the SKL framework, the learning objects are pairs of translations t 1 , t 2 . Our objective is to automatically learn which pair features are important, independently of the source sentence. We achieve this by using kernel machines (KMs) over two learning objects t 1 , t 2 , t 1 , t 2 , along with an explicit and structural representation of the pairs (see Fig. 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 362,
"end": 368,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "1 The discourse parser can be downloaded from http://alt.qcri.org/tools/ More specifically, KMs carry out learning using the scalar product",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "K mt ( t 1 , t 2 , t 1 , t 2 ) = \u03c6 mt (t 1 , t 2 ) \u2022 \u03c6 mt (t 1 , t 2 ),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "where \u03c6 mt maps pairs into the feature space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "Considering that our task is to decide whether t 1 is better than t 2 , we can conveniently represent the vector for the pair in terms of the difference between the two translation vectors, i.e., \u03c6 mt (t 1 , t 2 ) = \u03c6 K (t 1 ) \u2212 \u03c6 K (t 2 ). We can approximate K mt with a preference kernel P K to compute this difference in the kernel space K:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "P K( t 1 , t 2 , t 1 , t 2 ) (1) = K(t 1 ) \u2212 \u03c6 K (t 2 )) \u2022 (\u03c6 K (t 1 ) \u2212 \u03c6 K (t 2 )) = K(t 1 , t 1 ) + K(t 2 , t 2 ) \u2212 K(t 1 , t 2 ) \u2212 K(t 2 , t 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "The advantage of this is that now",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "K(t i , t j ) = \u03c6 K (t i ) \u2022 \u03c6 K (t j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
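The expansion in Eq. 1 can be made concrete with a small sketch: any base kernel K between single translations induces the preference kernel over pairs. The toy word-overlap kernel below is an assumption standing in for the tree kernels actually used; only the shape of the expansion is the point here.

```python
# Sketch of the preference kernel of Eq. 1: PK between two ordered pairs of
# translations expands into four evaluations of a base kernel K. The toy
# word-overlap kernel stands in for the tree kernels used in the paper.
def K(t_a: str, t_b: str) -> float:
    a, b = set(t_a.split()), set(t_b.split())
    return len(a & b) / max(1, len(a | b))   # toy similarity, not a tree kernel

def PK(pair, pair_prime) -> float:
    t1, t2 = pair
    u1, u2 = pair_prime
    return K(t1, u1) + K(t2, u2) - K(t1, u2) - K(t2, u1)

# Intuitively, PK is large and positive when the two pairs are ordered
# consistently, i.e., when the difference vectors point in similar directions.
print(PK(("the cat sat on the mat", "mat the on sat"),
         ("a cat sat on a mat", "on mat a")))
```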
{
"text": "is defined between two translations only, and not between two pairs of translations. This simplification enables us to map translations into simple trees, e.g., those in Figure 1 , and then to apply them tree kernels, e.g., the Partial Tree Kernel (Moschitti, 2006) , which carry out a scalar product in the subtree space. We can further enrich the representation \u03c6 K , if we consider all the information available to the human judges when they are ranking translations. That is, the two alternative translations along with their corresponding reference.",
"cite_spans": [
{
"start": 248,
"end": 265,
"text": "(Moschitti, 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 170,
"end": 178,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
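To show how such a kernel could drive the actual learning, here is a sketch that precomputes a Gram matrix with a preference kernel and feeds it to a generic kernel machine (scikit-learn's SVC with a precomputed kernel). This toolchain and the toy base kernel are assumptions made for illustration; the paper itself uses a preference re-ranking setup with the Partial Tree Kernel, not this exact stack.

```python
# Sketch of kernel-machine learning with a precomputed preference kernel.
# The word-overlap base kernel and scikit-learn SVC are stand-ins for the
# tree kernels and the preference re-ranker used in the paper.
import numpy as np
from sklearn.svm import SVC

def base_k(a: str, b: str) -> float:                  # stand-in for a tree kernel
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / max(1, len(sa | sb))

def pref_k(p, q) -> float:                            # preference kernel over pairs
    return (base_k(p[0], q[0]) + base_k(p[1], q[1])
            - base_k(p[0], q[1]) - base_k(p[1], q[0]))

train_pairs = [("the cat sat on the mat", "cat mat the on sat"),
               ("he reads a book", "book he a read")]
labels = [1, -1]                                      # +1 means the first translation is better
gram = np.array([[pref_k(p, q) for q in train_pairs] for p in train_pairs])
clf = SVC(kernel="precomputed").fit(gram, labels)

test_pairs = [("the cat sat on a mat", "mat a cat the")]
gram_test = np.array([[pref_k(t, q) for q in train_pairs] for t in test_pairs])
print(clf.predict(gram_test))                         # predicted preference for the test pair
```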
{
"text": "In particular, let r and r be the references for the pairs t 1 , t 2 and t 1 , t 2 , we can redefine all the members of Eq. 1, e.g.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "K(t 1 , t 1 ) becomes K( t 1 , r , t 1 , r ) = PTK(\u03c6 M (t 1 , r), \u03c6 M (t 1 , r )) + PTK(\u03c6 M (r, t 1 ), \u03c6 M (r , t 1 )),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "where \u03c6 M maps a pair of texts to a single tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
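A small sketch of the decomposition just defined: the kernel between two (translation, reference) pairs is the sum of two tree-kernel evaluations over the outputs of the mapping. Both ptk and phi_M below are crude placeholders, introduced only as assumptions to make the shape of the computation explicit.

```python
# Sketch of the pair-level kernel defined above: K over two (translation, reference)
# pairs is the sum of two tree-kernel evaluations. `ptk` and `phi_M` are crude
# placeholders for the Partial Tree Kernel and the bitext-to-tree mapping.
def ptk(tree_a: str, tree_b: str) -> float:
    # placeholder: count shared tokens instead of shared tree fragments
    return float(len(set(tree_a.split()) & set(tree_b.split())))

def phi_M(first: str, second: str) -> str:
    # placeholder: in the paper this builds the tree of `first`, with REL markers
    # for the material matched in `second`
    return first

def K_pair(t1: str, r: str, t1_prime: str, r_prime: str) -> float:
    return (ptk(phi_M(t1, r), phi_M(t1_prime, r_prime))
            + ptk(phi_M(r, t1), phi_M(r_prime, t1_prime)))

print(K_pair("give them the time", "give them no time",
             "give them time to think", "give them no time to think"))
```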
{
"text": "There are several options to produce the bitextto-tree mapping for \u03c6 M . A simple approach is to only use the tree corresponding to the first argument of \u03c6 M . This leads to the basic model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "K( t 1 , r , t 1 , r ) = PTK(\u03c6 M (t 1 ), \u03c6 M (t 1 )) + PTK(\u03c6 M (r), \u03c6 M (r )), i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "e., the sum of two tree kernels applied to the trees constructed by \u03c6 M (we previously informally mentioned it).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "However, this simple mapping may be ineffective since the trees within a pair, e.g., (t 1 , r), are treated independently, and no meaningful features connecting t 1 and r can be derived from their tree fragments. Therefore, we model \u03c6 M (r, t 1 ) by using word-matching relations between t 1 and r, such that connections between words and constituents of the two trees are established using position-independent word matching. For example, in Figure 1 , the thin dashed arrows show the links connecting the matching words between t 1 and r. The propagation of these relations works from the bottom up. Thus, if all children in a constituent have a link, their parent is also linked.",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 451,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
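The matching-and-propagation step just described can be sketched as follows. The tiny Node class and the lower-casing choice are assumptions for illustration only, while the marking rule (leaves match reference words position-independently, and a parent is marked when all of its children are marked) follows the description above.

```python
# Sketch of the word matching described above: hypothesis leaves that match a
# reference word (position-independently) are marked, and the marking propagates
# bottom-up, so a constituent is marked when all of its children are marked.
# The Node class is an assumption for illustration; the paper works on parser output.
class Node:
    def __init__(self, label, children=None, word=None):
        self.label, self.children, self.word = label, children or [], word

def mark_rel(node, ref_words):
    if node.word is not None:                       # leaf: position-independent word match
        marked = node.word.lower() in ref_words
    else:                                           # internal node: mark every child first,
        child_marks = [mark_rel(c, ref_words) for c in node.children]
        marked = bool(child_marks) and all(child_marks)   # then propagate upwards
    if marked:
        node.label += "-REL"
    return marked

hyp_np = Node("NP", [Node("PRP", word="them"), Node("NN", word="time")])
mark_rel(hyp_np, ref_words={"give", "them", "no", "time", "to", "think"})
print([c.label for c in hyp_np.children], hyp_np.label)   # ['PRP-REL', 'NN-REL'] NP-REL
```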
{
"text": "The use of such connections is essential as it enables the comparison of the structural properties and relations between two translation-reference pairs. For example, the tree fragment [ELABORA-TION [SATELLITE] ] from the translation is connected to [ELABORATION [SATELLITE] ] in the reference, indicating a link between two entire discourse units (drawn with a thicker arrow), and providing some reliability to the translation 2 .",
"cite_spans": [
{
"start": 199,
"end": 210,
"text": "[SATELLITE]",
"ref_id": null
},
{
"start": 250,
"end": 274,
"text": "[ELABORATION [SATELLITE]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "Note that the use of connections yields a graph representation instead of a tree. This is problematic as effective models for graph kernels, which would be a natural fit to this problem, are not currently available for exploiting linguistic information. Thus, we simply use K, as defined above, where the mapping \u03c6 M (t 1 , r) only produces a tree for t 1 annotated with the marker REL representing the connections to r. This marker is placed on all node labels of the tree generated from t 1 that match labels from the tree generated from r.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "In other words, we only consider the trees enriched by markers separately, and ignore the edges connecting both trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernels-based modeling",
"sec_num": "2.2"
},
{
"text": "We experimented with datasets of segment-level human rankings of system outputs from the WMT11 and the WMT12 Metrics shared tasks (Callison-Burch et al., 2011; Callison-Burch et al., 2012) : we used the WMT11 dataset for training and the WMT12 dataset for testing. We focused on translating into English only, for which the datasets can be split by source language: Czech (cs), German (de), Spanish (es), and French (fr). There were about 10,000 non-tied human judgments per language pair per dataset. We scored our pairwise system predictions with respect to the WMT12 human judgments using the Kendall's Tau (\u03c4 ), which was official at WMT12. Table 1 presents the \u03c4 scores for all metric variants introduced in this paper: for the individual language pairs and overall. The left-hand side of the table shows the results when using as similarity the direct kernel calculation between the corresponding structures of the candidate translation and the reference 3 , e.g., as in . The right-hand side contains the results for structured kernel learning.",
"cite_spans": [
{
"start": 130,
"end": 159,
"text": "(Callison-Burch et al., 2011;",
"ref_id": "BIBREF2"
},
{
"start": 160,
"end": 188,
"text": "Callison-Burch et al., 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 645,
"end": 652,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "3"
},
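As a reference for how the scores in Table 1 are computed, here is a sketch of the segment-level Kendall's tau over non-tied pairwise judgments: concordant minus discordant metric decisions, normalized by their sum. The exact WMT12 tie-handling conventions are omitted; this is only the basic formula and is an illustrative assumption about the scoring script, not code from the paper.

```python
# Sketch of segment-level Kendall's tau over non-tied pairwise judgments:
# tau = (concordant - discordant) / (concordant + discordant), where a pair is
# concordant when the metric prefers the same translation as the human judge.
# WMT12-specific tie handling is omitted here.
def kendall_tau(human_prefs, metric_prefs):
    """Both arguments are lists of +1/-1 preferences over the same candidate pairs."""
    concordant = sum(h == m for h, m in zip(human_prefs, metric_prefs))
    discordant = len(human_prefs) - concordant
    return (concordant - discordant) / (concordant + discordant)

print(kendall_tau([1, 1, -1, 1], [1, -1, -1, 1]))   # 3 concordant, 1 discordant -> 0.5
```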
{
"text": "We can make the following observations: (i) The overall results for all SKL-trained metrics are higher than the ones when applying direct similarity, showing that learning tree structures is better than just calculating similarity. (ii) Regarding the linguistic representation, we see that, when learning tree structures, syntactic and discourse-based trees yield similar improvements with a slight advantage for the former. More interestingly, when both structures are put together in a combined tree, the improvement is cumulative and yields the best results by a sizable margin. This provides positive evidence towards our goal of a unified tree-based representation with multiple layers of linguistic information. (iii) Comparing to the best evaluation metrics that participated in the WMT12 Metrics shared task, we find that our approach is competitive and would have been ranked among the top 3 participants. Furthermore, our result (0.237) is ahead of the correlation obtained by popular metrics such as TER (0.217), NIST (0.214) and BLEU (0.185) at WMT12. This is very encouraging and shows the potential of our new proposal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "3"
},
{
"text": "In this paper, we have presented only the first exploratory results. Our approach can be easily extended with richer linguistic structures and further combined with some of the already existing strong evaluation metrics. Note that the results in Table 1 were for training on WMT11 and testing on WMT12 for each language pair in isolation. Next, we study the impact of the choice of training language pair. Table 2 shows cross-language evaluation results for DIS+SYN: lines 1-4 show results when training on WMT11 for one language pair, and then testing for each language pair of WMT12.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "3"
},
{
"text": "We can see that the overall differences in performance (see the last column: all) when training on different source languages are rather small, ranging from 0.209 to 0.221, which suggests that our approach is quite independent of the source language used for training. Still, looking at individual test languages, we can see that for de-en and es-en, it is best to train on the same language; this also holds for fr-en, but there it is equally good to train on es-en. Interestingly, training on es-en improves a bit for cs-en.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "3"
},
{
"text": "These somewhat mixed results have motivated us to try tuning on the full WMT11 dataset; as line 5 shows, this yielded improvements for all language pairs except for es-en. Comparing to line 4 in Table 1 , we see that the overall Tau improved from 0.231 to 0.237.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Discussion",
"sec_num": "3"
},
{
"text": "We have presented a pairwise learning-to-rank approach to MT evaluation, which learns to differentiate good from bad translations in the context of a given reference. We have integrated several layers of linguistic information (lexical, syntactic and discourse) in tree-based structures, and we have used the structured kernel learning to identify relevant features and learn pairwise rankers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "The evaluation results have shown that learning in the proposed SKL framework is possible, yielding better correlation (Kendall's \u03c4 ) with human judgments than computing the direct kernel similarity between translation and reference, over the same type of structures. We have also shown that the contributions of syntax and discourse information are cumulative, indicating that this learning framework can be appropriate for the combination of different sources of information. Finally, despite the limited information we used, we achieved better correlation at the segment level than BLEU and other metrics in the WMT12 Metrics task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "In the future, we plan to work towards our longterm goal, i.e., including more linguistic information in the SKL framework and showing that this can help. This would also include more semantic information, e.g., in the form of Brown clusters or using semantic similarity between the words composing the structure calculated with latent semantic analysis (Saleh et al., 2014b) .",
"cite_spans": [
{
"start": 354,
"end": 375,
"text": "(Saleh et al., 2014b)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "We further want to show that the proposed framework is flexible and can include information in the form of quality scores predicted by other evaluation metrics, for which a vector of features would be combined with the structured kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "Note that a non-pairwise model, i.e., K(t1, r), could also be used to match the structural information above, but it would not learn to compare it to a second pair (t2, r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Applying tree kernels between the members of a pair to generate one feature (for each different kernel function) has become a standard practice in text similarity tasks(Severyn et al., 2013b) and in question answering(Severyn et al., 2013a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is part of the Interactive sYstems for Answer Search (Iyas) project, conducted by the Arabic Language Technologies (ALT) group at Qatar Computing Research Institute (QCRI) within the Qatar Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Regression for machine translation evaluation at the sentence level. Machine Translation",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Albrecht",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "22",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Albrecht and Rebecca Hwa. 2008. Regression for machine translation evaluation at the sentence level. Machine Translation, 22(1-2):1-27.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Meta-) evaluation of machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Burch",
"suffix": ""
},
{
"first": "Cameron",
"middle": [],
"last": "Fordyce",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Josh",
"middle": [],
"last": "Schroeder",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation, WMT '07",
"volume": "",
"issue": "",
"pages": "136--158",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In Pro- ceedings of the Second Workshop on Statistical Machine Translation, WMT '07, pages 136-158, Prague, Czech Republic.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Findings of the 2011 workshop on statistical machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Omar",
"middle": [],
"last": "Zaidan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT '11",
"volume": "",
"issue": "",
"pages": "22--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Pro- ceedings of the Sixth Workshop on Statistical Ma- chine Translation, WMT '11, pages 22-64, Edin- burgh, Scotland, UK.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Findings of the 2012 workshop on statistical machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation, WMT '12",
"volume": "",
"issue": "",
"pages": "10--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical ma- chine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, WMT '12, pages 10-51, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Llu\u00eds M\u00e0rquez, Irene Castell\u00f3n, and Victoria Arranz",
"authors": [
{
"first": "Elisabet",
"middle": [],
"last": "Comelles",
"suffix": ""
},
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabet Comelles, Jes\u00fas Gim\u00e9nez, Llu\u00eds M\u00e0rquez, Irene Castell\u00f3n, and Victoria Arranz. 2010.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Document-level automatic MT evaluation based on discourse representations",
"authors": [],
"year": null,
"venue": "Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT '10",
"volume": "",
"issue": "",
"pages": "333--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Document-level automatic MT evaluation based on discourse representations. In Proceedings of the Joint Fifth Workshop on Statistical Machine Trans- lation and MetricsMATR, WMT '10, pages 333- 338, Uppsala, Sweden.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Ranking vs. regression in machine translation evaluation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation, WMT '08",
"volume": "",
"issue": "",
"pages": "191--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Duh. 2008. Ranking vs. regression in machine translation evaluation. In Proceedings of the Third Workshop on Statistical Machine Translation, WMT '08, pages 191-194, Columbus, Ohio, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Linguistic features for automatic evaluation of heterogenous MT systems",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation, WMT '07",
"volume": "",
"issue": "",
"pages": "256--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jes\u00fas Gim\u00e9nez and Llu\u00eds M\u00e0rquez. 2007. Linguis- tic features for automatic evaluation of heterogenous MT systems. In Proceedings of the Second Work- shop on Statistical Machine Translation, WMT '07, pages 256-264, Prague, Czech Republic.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Using discourse structure improves machine translation evaluation",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics, ACL '14",
"volume": "",
"issue": "",
"pages": "687--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Guzm\u00e1n, Shafiq Joty, Llu\u00eds M\u00e0rquez, and Preslav Nakov. 2014. Using discourse structure improves machine translation evaluation. In Pro- ceedings of 52nd Annual Meeting of the Association for Computational Linguistics, ACL '14, pages 687- 698, Baltimore, Maryland, USA.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Novel Discriminative Framework for Sentence-Level Discourse Analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "12",
"issue": "",
"pages": "904--915",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond Ng. 2012. A Novel Discriminative Framework for Sentence-Level Discourse Analysis. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 904-915, Jeju Island, Korea.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining Intra-and Multi-sentential Rhetorical Parsing for Documentlevel Discourse Analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL '13",
"volume": "",
"issue": "",
"pages": "486--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining Intra-and Multi-sentential Rhetorical Parsing for Document- level Discourse Analysis. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics, ACL '13, pages 486-496, Sofia, Bulgaria.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "DiscoTK: Using discourse structure for machine translation evaluation",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT '14",
"volume": "",
"issue": "",
"pages": "402--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Francisco Guzm\u00e1n, Llu\u00eds M\u00e0rquez, and Preslav Nakov. 2014. DiscoTK: Using discourse structure for machine translation evaluation. In Pro- ceedings of the Ninth Workshop on Statistical Ma- chine Translation, WMT '14, pages 402-408, Balti- more, Maryland, USA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The ME-TEOR metric for automatic evaluation of machine translation",
"authors": [
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine Translation",
"volume": "23",
"issue": "2-3",
"pages": "105--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alon Lavie and Michael Denkowski. 2009. The ME- TEOR metric for automatic evaluation of machine translation. Machine Translation, 23(2-3):105-115.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Syntactic features for evaluation of machine translation",
"authors": [
{
"first": "Ding",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ding Liu and Daniel Gildea. 2005. Syntactic fea- tures for evaluation of machine translation. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 25-32, Ann Ar- bor, Michigan, USA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Fully automatic semantic MT evaluation",
"authors": [
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Anand",
"middle": [],
"last": "Karthik Tumuluru",
"suffix": ""
},
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation, WMT '12",
"volume": "",
"issue": "",
"pages": "243--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi-kiu Lo, Anand Karthik Tumuluru, and Dekai Wu. 2012. Fully automatic semantic MT evaluation. In Proceedings of the Seventh Workshop on Statisti- cal Machine Translation, WMT '12, pages 243-252, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exploiting syntactic and shallow semantic kernels for question answer classification",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Quarteroni",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, ACL '07",
"volume": "",
"issue": "",
"pages": "776--783",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti, Silvia Quarteroni, Roberto Basili, and Suresh Manandhar. 2007. Exploiting syntactic and shallow semantic kernels for ques- tion answer classification. In Proceedings of the 45th Annual Meeting of the Association of Computa- tional Linguistics, ACL '07, pages 776-783, Prague, Czech Republic.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Efficient convolution kernels for dependency and constituent syntactic trees",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of 17th European Conference on Machine Learning and the 10th European Conference on Principles and Practice of Knowledge Discovery in Databases, ECML/PKDD '06",
"volume": "",
"issue": "",
"pages": "318--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2006. Efficient convolution ker- nels for dependency and constituent syntactic trees. In Proceedings of 17th European Conference on Ma- chine Learning and the 10th European Conference on Principles and Practice of Knowledge Discovery in Databases, ECML/PKDD '06, pages 318-329, Berlin, Germany.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Kernel methods, syntax and semantics for relational text categorization",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM '08",
"volume": "",
"issue": "",
"pages": "253--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2008. Kernel methods, syn- tax and semantics for relational text categorization. In Proceedings of the 17th ACM Conference on In- formation and Knowledge Management, CIKM '08, pages 253-262, Napa Valley, California, USA.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meting of the Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meting of the Association for Com- putational Linguistics, ACL '02, pages 311-318, Philadelphia, Pennsylvania, USA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Word error rates: Decomposition over POS classes and applications for error analysis",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation, WMT '07",
"volume": "",
"issue": "",
"pages": "48--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107 and Hermann Ney. 2007. Word error rates: Decomposition over POS classes and applica- tions for error analysis. In Proceedings of the Sec- ond Workshop on Statistical Machine Translation, WMT '07, pages 48-55, Prague, Czech Republic.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The use of classifiers in sequential inference",
"authors": [
{
"first": "Vasin",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems 14, NIPS '01",
"volume": "",
"issue": "",
"pages": "995--1001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasin Punyakanok and Dan Roth. 2001. The use of classifiers in sequential inference. In Advances in Neural Information Processing Systems 14, NIPS '01, pages 995-1001, Vancouver, Canada.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A study of using syntactic and semantic structures for concept segmentation and labeling",
"authors": [
{
"first": "Iman",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Cyphers",
"suffix": ""
},
{
"first": "Jim",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics, COLING '14",
"volume": "",
"issue": "",
"pages": "193--202",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iman Saleh, Scott Cyphers, Jim Glass, Shafiq Joty, Llu\u00eds M\u00e0rquez, Alessandro Moschitti, and Preslav Nakov. 2014a. A study of using syntactic and se- mantic structures for concept segmentation and la- beling. In Proceedings of the 25th International Conference on Computational Linguistics, COLING '14, pages 193-202, Dublin, Ireland.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semantic kernels for semantic parsing",
"authors": [
{
"first": "Iman",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "14",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iman Saleh, Alessandro Moschitti, Preslav Nakov, Llu\u00eds M\u00e0rquez, and Shafiq Joty. 2014b. Semantic kernels for semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP '14, Doha, Qatar.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Structural relationships for large-scale learning of answer re-ranking",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '12",
"volume": "",
"issue": "",
"pages": "741--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2012. Structural relationships for large-scale learning of answer re-ranking. In Proceedings of the 35th Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '12, pages 741-750, Portland, Oregon, USA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning adaptable patterns for passage reranking",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Nicosia",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning, CoNLL '13",
"volume": "",
"issue": "",
"pages": "75--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn, Massimo Nicosia, and Alessandro Moschitti. 2013a. Learning adaptable patterns for passage reranking. In Proceedings of the Seven- teenth Conference on Computational Natural Lan- guage Learning, CoNLL '13, pages 75-83, Sofia, Bulgaria.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning semantic textual similarity with structural representations",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Nicosia",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "714--718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn, Massimo Nicosia, and Alessandro Moschitti. 2013b. Learning semantic textual sim- ilarity with structural representations. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), ACL '13, pages 714-718, Sofia, Bulgaria.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas, AMTA '06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Ameri- cas, AMTA '06, Cambridge, Massachusetts, USA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Regression and ranking based optimisation for sentence-level MT evaluation",
"authors": [
{
"first": "Xingyi",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT '11",
"volume": "",
"issue": "",
"pages": "123--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xingyi Song and Trevor Cohn. 2011. Regression and ranking based optimisation for sentence-level MT evaluation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT '11, pages 123-129, Edinburgh, Scotland, UK.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology -Volume 1, HLT-NAACL '03, pages 173-180, Ed- monton, Canada.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Extending machine translation evaluation metrics with lexical cohesion to document level",
"authors": [
{
"first": "Billy",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "12",
"issue": "",
"pages": "1060--1068",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Billy Wong and Chunyu Kit. 2012. Extending ma- chine translation evaluation metrics with lexical co- hesion to document level. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 1060-1068, Jeju Island, Korea.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automatic learning of textual entailments with cross-pair similarities",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Massimo Zanzotto",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, COLING-ACL '06",
"volume": "",
"issue": "",
"pages": "401--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabio Massimo Zanzotto and Alessandro Moschitti. 2006. Automatic learning of textual entailments with cross-pair similarities. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Associ- ation for Computational Linguistics, COLING-ACL '06, pages 401-408, Sydney, Australia.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Hypothesis and reference trees combining discourse, shallow syntax and POS.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Figure 1 shows two example trees combining discourse, shallow syntax and POS: one for a translation hypothesis (top) and the other one for the reference (bottom). To build such structures, we used the Stanford POS tagger (Toutanova et al., 2003), the Illinois chunker (Punyakanok and Roth, 2001), and the discourse parser 1 of (Joty et al., 2012; Joty et al., 2013). The lexical items constitute the leaves of the tree. The words are connected to their respective POS tags, which are in turn grouped into chunks. Then, the chunks are grouped into elementary discourse units (EDU), to which the nuclearity status is attached (i.e., NUCLEUS or SATELLITE). Finally, EDUs and higher-order discourse units are connected by discourse relations (e.g., DIS:ELABORATION).",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Testing Train cs-en de-en es-en fr-en all 1 cs-en 0.210 0.204 0.217 0.204 0.209 2 de-en 0.196 0.251 0.203 0.202 0.213 3 es-en 0.218 0.204 0.240 0.223 0",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"content": "<table><tr><td colspan=\"3\">a) Hypothesis</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"3\">DIS:ELABORATION</td><td/></tr><tr><td>RB not</td><td>VP TO-REL to</td><td colspan=\"3\">EDU:NUCLEUS NP-REL VB-REL PRP-REL give them</td><td>DT the</td><td>NP</td><td>NN-REL time</td><td colspan=\"3\">EDU:SATELLITE-REL VP-REL o-REL TO-REL VB-REL .-REL to think .</td><td>o-REL ''-REL \"</td><td>relation propagation direction</td></tr><tr><td/><td/><td/><td/><td colspan=\"4\">Bag-of-words relations</td><td/><td/></tr><tr><td/><td>to</td><td>\"</td><td>give</td><td colspan=\"2\">them</td><td>no</td><td>time</td><td>to</td><td>think</td><td>.</td><td>\"</td></tr><tr><td/><td colspan=\"3\">TO-REL``VB-REL</td><td colspan=\"3\">PRP-REL DT</td><td colspan=\"2\">NN-REL TO-REL</td><td>VB-REL</td><td>.-REL</td><td>''-REL</td></tr><tr><td/><td/><td>VP</td><td/><td colspan=\"2\">NP-REL</td><td/><td>NP</td><td colspan=\"2\">VP-REL</td><td>o-REL</td><td>o-REL</td></tr><tr><td/><td/><td/><td colspan=\"2\">EDU:NUCLEUS</td><td/><td/><td/><td/><td colspan=\"2\">EDU:SATELLITE-REL</td></tr><tr><td colspan=\"2\">b) Reference</td><td/><td/><td/><td/><td/><td colspan=\"2\">DIS:ELABORATION</td><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ")."
},
"TABREF1": {
"content": "<table><tr><td/><td>Similarity</td><td/><td/><td colspan=\"2\">Structured Kernel Learning</td></tr><tr><td colspan=\"2\">Structure cs-en de-en es-en</td><td>fr-en</td><td>all</td><td>cs-en de-en es-en</td><td>fr-en</td><td>all</td></tr><tr><td>1 SYN</td><td colspan=\"6\">0.169 0.188 0.203 0.222 0.195 0.190 0.244 0.198 0.158 0.198</td></tr><tr><td>2 DIS</td><td colspan=\"6\">0.130 0.174 0.188 0.169 0.165 0.176 0.235 0.166 0.160 0.184</td></tr><tr><td>3 DIS+POS</td><td colspan=\"6\">0.135 0.186 0.190 0.178 0.172 0.167 0.232 0.202 0.133 0.183</td></tr><tr><td>4</td><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "DIS+SYN 0.156 0.205 0.206 0.203 0.192 0.210 0.251 0.240 0.223 0.231"
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Kendall's (\u03c4 ) correlation with human judgements on WMT12 for each language pair."
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Kendall's (\u03c4 ) on WMT12 for crosslanguage training with DIS+SYN."
}
}
}
}