{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:52:44.690043Z"
},
"title": "Sentiment-Aware Measure (SAM) for Evaluating Sentiment Transfer by Machine Translation Systems",
"authors": [
{
"first": "Hadeel",
"middle": [],
"last": "Saadany",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RGCL University of Wolverhampton Wolverhampton",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Constantin",
"middle": [],
"last": "Or\u0203san",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Surrey Guildford",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Emad",
"middle": [],
"last": "Mohamed",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "RGCL University of Wolverhampton",
"location": {
"settlement": "Wolverhampton",
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Ashraf",
"middle": [],
"last": "Tantavy",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In translating text where sentiment is the main message, human translators give particular attention to sentiment-carrying words. The reason is that an incorrect translation of such words would miss the fundamental aspect of the source text, i.e. the author's sentiment. In the online world, MT systems are extensively used to translate User-Generated Content (UGC) such as reviews, tweets, and social media posts, where the main message is often the author's positive or negative attitude towards the topic of the text. It is important in such scenarios to accurately measure how far an MT system can be a reliable real-life utility in transferring the correct affect message. This paper tackles an under-recognised problem in the field of machine translation evaluation which is judging to what extent automatic metrics concur with the gold standard of human evaluation for a correct translation of sentiment. We evaluate the efficacy of conventional quality metrics in spotting a mistranslation of sentiment, especially when it is the sole error in the MT output. We propose a numerical 'sentiment-closeness' measure appropriate for assessing the accuracy of a translated affect message in UGC text by an MT system. We will show that incorporating this sentimentaware measure can significantly enhance the correlation of some available quality metrics with the human judgement of an accurate translation of sentiment.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In translating text where sentiment is the main message, human translators give particular attention to sentiment-carrying words. The reason is that an incorrect translation of such words would miss the fundamental aspect of the source text, i.e. the author's sentiment. In the online world, MT systems are extensively used to translate User-Generated Content (UGC) such as reviews, tweets, and social media posts, where the main message is often the author's positive or negative attitude towards the topic of the text. It is important in such scenarios to accurately measure how far an MT system can be a reliable real-life utility in transferring the correct affect message. This paper tackles an under-recognised problem in the field of machine translation evaluation which is judging to what extent automatic metrics concur with the gold standard of human evaluation for a correct translation of sentiment. We evaluate the efficacy of conventional quality metrics in spotting a mistranslation of sentiment, especially when it is the sole error in the MT output. We propose a numerical 'sentiment-closeness' measure appropriate for assessing the accuracy of a translated affect message in UGC text by an MT system. We will show that incorporating this sentimentaware measure can significantly enhance the correlation of some available quality metrics with the human judgement of an accurate translation of sentiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Standard quality measures for assessing the performance of machine translation systems, such as BLEU (Papineni et al., 2002) , are domain agnos-tic; they evaluate the translation accuracy regardless of the semantic domain or linguistic peculiarities of the source text. Consequently, they give equal penalty weight to inaccurate translation of n-grams, which may lead to performance overestimation (or underestimation). For example, the Arabic high-rated Goodreads book review showing the reviewers overall satisfaction of a novel: '",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "('The novel is great, its only flaw is the last part') is mistranslated by the available online translation tool which wrongly outputs a negative sentiment ('The novel is terrible, its only flaw is the last part'). Despite the distortion of the affect message, this translation receives an equally high score as a correct translation (The story is great, its only flaw is the last part), but which uses story instead of novel. This is because BLEU mildly penalises the wrong translation as it swaps only one uni-gram ('great') with its opposite ('terrible'). Yet, the mistranslation of this particular uni-gram is a critical error as it is most pivotal in transferring the sentiment, and hence the MT performance is over-estimated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been numerous efforts to address the common pitfalls of n-gram-based evaluation metrics by incorporating semantic and contextual features. However, despite research evidence of its analytical limitations, BLEU is still the de facto standard for MT performance evaluation (Mathur et al., 2020a; Reiter, 2018) . Moreover, although the introduction of more semantically-oriented metrics showed a better correlation with human judgement, still the estimation of sentiment preservation in UGC has not yet been investigated. This is par-tially due to the fact that the domain and linguistic style of the WMTs datasets typically used for metric evaluation (e.g. newstest2020, Canadian Parliament, Wikipedia, UN corpus) is quite different than the non-standard noisy UGC where sentiment is the main content of a telegraphic message (Ma et al., 2018; Barrault et al., 2019; Mathur et al., 2020b) . Assessed over these WMT datasets, some metrics manifest an almost perfect correlation with human evaluation on the segment-level (e.g. WMT20 participating metrics record results of up to 0.97 Pearson correlation on the newstest2020) (Mathur et al., 2020b) . However, research has also shown that metrics usually report weaker correlation with low human assessment score ranges (Takahashi et al., 2020a,b) .",
"cite_spans": [
{
"start": 282,
"end": 304,
"text": "(Mathur et al., 2020a;",
"ref_id": "BIBREF11"
},
{
"start": 305,
"end": 318,
"text": "Reiter, 2018)",
"ref_id": "BIBREF20"
},
{
"start": 835,
"end": 852,
"text": "(Ma et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 853,
"end": 875,
"text": "Barrault et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 876,
"end": 897,
"text": "Mathur et al., 2020b)",
"ref_id": null
},
{
"start": 1133,
"end": 1155,
"text": "(Mathur et al., 2020b)",
"ref_id": null
},
{
"start": 1277,
"end": 1304,
"text": "(Takahashi et al., 2020a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we argue that the high correlation of some metrics may not be replicated with a different domain such as sentiment-oriented UGC, specifically when there is a mistranslation of sentiment-critical word(s). For this reason, we propose a 'sentiment-closeness' measure that can accommodate for a better evaluation of the MT system's ability to capture the correct sentiment of the source text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To present our sentiment quality measure, we first briefly explain in Section 2 why metrics commonly used for MT quality estimation may not always be efficient in assessing sentiment-critical translation errors. In Section 3, we present an experiment to quantitatively assess the divergence of the analysed metrics from a human judgement of a correct/incorrect translation of an affect message. Section 4 presents our proposed solution for finetuning quality metrics for a better correlation with human judgement. Section 5 presents the results of incorporating our sentiment-measure in different quality metrics. In Section 6 we conduct an error analysis of our empirical approach and discuss its limitations. Finally, Section 7 presents a conclusion on our conducted experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Automatic evaluation metrics usually take the output of an MT system (hypothesis) and compare it to one or several translations produced by human translator(s) (reference). Based on their matching methods, the most commonly used automatic metrics can be broadly categorised into: surface ngram matching and embedding matching. Surface n-gram methods work by calculating exact match-ing, heuristic matching or an edit distance between the aligned n-grams of the reference and hypothesis translation(s). The embedding methods, on the other hand, calculate a similarity score between learned token representations, such as contextual embedding vectors, with or without the aid of external linguistic tools. In the following sections, we briefly explain the methods behind three canonical metrics as representative of each category. We illustrate why the theoretical foundation of each metric may not be optimum for evaluating the translation of sentiment-oriented UGC.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Available Metrics",
"sec_num": "2"
},
{
"text": "BLEU The standard metric for assessing empirical improvement of MT systems is BLEU (Papineni et al., 2002) . Simply stated, the objective of BLEU is to compare n-grams of the candidate translation with n-grams of the reference translation and count the number of matches; the more the matches, the better the candidate translation. The final score is calculated using a modified n-gram precision multiplied by a brevity penalty so that a good candidate translation would match the reference translation in length, in word choice, and in word order. The disadvantage of the BLEU metric which is relevant to our present study is that it treats all n-grams equally. Due to its restrictive surface n-gram matching, it does not account for the semantic importance of an n-gram in the context of a text. Accordingly, BLEU would incorrectly give a high score to an MT output if it scores exact match with the reference except for one uni-gram, even if this uni-gram completely changes the sentiment of a text (e.g. 'terrible' and 'great' as in the Goodreads example above). Online built-in MT tools have been shown to frequently transfer the exact opposite sentiment word for some dialectical expressions in UGC translated into English (Saadany and Orasan, 2020) . Therefore, the BLEU evaluation of an MT performance would be misleadingly over-permissive in such cases where only one or two sentiment-critical words are mistranslated.",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF16"
},
{
"start": 1229,
"end": 1255,
"text": "(Saadany and Orasan, 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
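{
"text": "The over-permissiveness described above can be checked directly with the Sacrebleu implementation of BLEU used for the experiments in Section 3. The following minimal sketch is ours rather than part of the paper's released code: both hypotheses differ from the reference by exactly one token, so sentence-level BLEU scores them identically, although only the first one flips the sentiment.\n\nimport sacrebleu\n\nreference = 'The novel is great, its only flaw is the last part'\nhyp_sentiment_flip = 'The novel is terrible, its only flaw is the last part'\nhyp_harmless_swap = 'The story is great, its only flaw is the last part'\n\n# Sentence-level BLEU against a single reference: the two hypotheses each swap\n# one uni-gram, so they receive the same score despite the fact that only the\n# first swap distorts the affect message.\nfor name, hyp in [('sentiment flip', hyp_sentiment_flip), ('harmless swap', hyp_harmless_swap)]:\n    print(name, round(sacrebleu.sentence_bleu(hyp, [reference]).score, 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},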
{
"text": "METEOR METEOR (Banerjee and Lavie, 2005) incorporates semantic information as it evaluates translation by calculating either exact match, stem match, or synonymy match. For synonym matching, it utilises WordNet synsets (Pedersen et al., 2004) . More recent versions (METEOR 1.5 and METEOR++2.0) apply also importance weighting by giving smaller weight to function words (Denkowski and Lavie, 2014; Guo and Hu, 2019) . The METEOR score ranges from 0 (worst translation) to 1 (best translation). There are two shortcomings to the METEOR metric which do not make it a robust solution for evaluating sentiment transfer. First, the synonym matching is limited to checking whether the two words belong to the same synset in WordNet. However, WordNet synonymy classification is different than regular thesauruses. For example, 'glad' is a synset of 'happy' and hence considered a synonym, whereas 'cheerful' is not a direct synset to 'happy' and hence would be considered a mismatch by METEOR. The following two examples illustrate further the limitations of using WordNet synonymy by METEOR:",
"cite_spans": [
{
"start": 14,
"end": 40,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF1"
},
{
"start": 219,
"end": 242,
"text": "(Pedersen et al., 2004)",
"ref_id": "BIBREF17"
},
{
"start": 370,
"end": 397,
"text": "(Denkowski and Lavie, 2014;",
"ref_id": "BIBREF3"
},
{
"start": 398,
"end": 415,
"text": "Guo and Hu, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
{
"text": "Example 1 Scores: [METEOR: 0.46]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
{
"text": "\u2022 Hypothesis: \"The weather is sunny, what a happy day\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
{
"text": "\u2022 Reference: \"The sun is shining, what a cheerful day\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
{
"text": "Example 2 Scores: [METEOR: 0.48]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
{
"text": "\u2022 Hypothesis: \"I'm not sure why, but I feel so happy today\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
{
"text": "\u2022 Reference: \"I don't get it, but I feel so sad today\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
{
"text": "The METEOR scores for Examples 1 and 2 clearly diverge from a human evaluation of a good translation. In the first example, although the translation conveys the correct emotion ('happiness'), it receives a similar METEOR score to the hypothesis in the second example which gives the exact opposite emotion of the source ('happiness' instead of 'sadness'). The inadequate scoring is a result of the WordNet taxonomy which causes the metric to equally treat both pairs ('sad', 'happy') and ('cheerful', 'happy') as non-synonym synsets and hence unmatched chunks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
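{
"text": "The synonym-matching behaviour discussed above can be probed with a small sketch of our own (not part of the paper): it checks whether two words share at least one WordNet synset, which is the kind of lookup METEOR's synonymy stage relies on. The exact outcome for any given pair depends on the installed WordNet version.\n\nfrom nltk.corpus import wordnet as wn  # requires a one-off nltk.download('wordnet')\n\ndef share_synset(word_a, word_b):\n    # True only if the two words occur together in at least one WordNet synset,\n    # i.e. the kind of match that METEOR's synonym module would accept.\n    return bool(set(wn.synsets(word_a)) & set(wn.synsets(word_b)))\n\nfor pair in [('happy', 'glad'), ('happy', 'cheerful'), ('happy', 'sad')]:\n    print(pair, share_synset(*pair))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},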
{
"text": "The second problem with METEOR which may affect its efficacy in evaluating sentiment transfer relates to its weighting schema for function and non-function words. The following example clarifies the gravity of this problem:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Surface N-gram Matching Metrics",
"sec_num": "2.1"
},
{
"text": "Scores: [METEOR: 0.92]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "\u2022 Hypothesis: \"If he had blown himself up in your country, God would forgive him\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "\u2022 Reference: \"If he had blown himself up in your country, God would not forgive\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "The hypothesis in Example 3 is the translation output of Twitter's built-in MT system for an Arabic tweet commenting on a terrorist attack 1 . The MT failure to translate the negation marker flips the sentiment of the author from 'anger' against the terrorist to 'sympathy'. Despite this, the METEOR score is 0.92 which is within the highest upper bound, ranking it as a good translation. On the other hand, the METEOR score for a correct translation of sentiment with a negation marker is 0.93. The main culprit for this inaccurate scoring is the lexical weighting which causes the metric not to penalise the missing of a negation marker which produces a sentiment-critical error. Due to the grave consequences of such mistranslations, it becomes critical to have a sentiment-sensitive metric that is capable of spotting similar errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 3",
"sec_num": null
},
{
"text": "BERTScore Recently embedding-based metrics have proven to achieve the highest performance in recent WMT shared tasks for quality metrics (e.g. Sellam et al. (2020) ; Lo (2020); Mukherjee et al. (2020)). We take BERTScore as a representative metric for this approach (Zhang et al., 2019) . BERTScore computes a score based on a pair wise cosine similarity between the BERT contextual embeddings of the individual tokens for the hypothesis and the reference (Devlin et al., 2018) . Accordingly, a BERTScore close to 1 indicates proximity in vector space and hence a good translation. The main problem with embedding-based metrics, such as BERTScore, is that antonyms contain similar distributional information since they usually occur in similar contexts. Example 4 illustrates this point:",
"cite_spans": [
{
"start": 143,
"end": 163,
"text": "Sellam et al. (2020)",
"ref_id": null
},
{
"start": 266,
"end": 286,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 456,
"end": 477,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based Metrics",
"sec_num": "2.2"
},
{
"text": "Scores: [BERTScore: 0.85]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4",
"sec_num": null
},
{
"text": "\u2022 Hypothesis: \"What is this amount of anger, I don't understand!\" 1220",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4",
"sec_num": null
},
{
"text": "\u2022 Reference: \"What is this amount of happiness, I don't understand!\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4",
"sec_num": null
},
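{
"text": "Example 4 can be reproduced approximately with the bert-score package; this sketch is ours, and the exact value depends on the underlying pretrained model and on whether baseline rescaling is enabled.\n\nfrom bert_score import score\n\n# The hypothesis flips 'happiness' into 'anger', yet the token-level embedding\n# similarity remains high, so the aggregate score stays close to 1.\nhypothesis = [\"What is this amount of anger, I don't understand!\"]\nreference = [\"What is this amount of happiness, I don't understand!\"]\nP, R, F1 = score(hypothesis, reference, lang='en')\nprint('BERTScore F1:', round(float(F1[0]), 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4",
"sec_num": null
},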
{
"text": "Example 4 shows the mistranslation produced by Twitter's Translate Tweet tab of an Arabic tweet. Although the sentiment polarity is flipped in the candidate translation above, the hypothesis receives a BERTScore of 0.85 which indicates a high cosine similarity to the reference in vector space and hence a good translation. Clearly, the metric score is not comparable to a human perception of the emotion reflected by the source. Figure 1 illustrates the reason behind this misleadingly high score. Figure 1 is a 2-D visualisation of BERT's contextual embedding vectors for the hypothesis translation (in blue) and the reference (in red). Both sentences are very close in the embedding space due to the exact match of their individual tokens. The only mismatch is between the antonyms 'happiness' and 'anger'. As shown in the figure, the pre-trained embedding vectors of the opposite polarity nouns are also quite close because of their common contextual information. An embedding metric such as BERTScore, therefore, may not penalise antonyms which typically occur in similar contexts. Recently, there have been different approaches to overcome some distributional problems of contextual embeddings. Reimers and Gurevych (2019) , for example, introduce SBert, a modification of the pretrained BERT network, which should mitigate the antonymy problem. They use Siamese network structures where the embeddings of similar sentence pairs are independently learned via two parallel transformer architectures. We measured how far this technique could solve the opposite sentiment problem by measuring the SBert sentence similarity of the hypothesis and reference in Example 4. The cosine similarity of the SBert sentence embedding vectors for the hypothesis and reference in Example 4 reached 0.61. A correct translation of the reference, however, such as 'What is all this cheerfulness, I don't understand' has a cosine similarity score of 0.79. This small similarity difference (0.18) would be misleading if taken as an evaluation of how far the sentiment poles in the two hypotheses are different. A more sentiment-targeted measure is needed for assessing mistranslations due to this antonymy problem.",
"cite_spans": [
{
"start": 1201,
"end": 1228,
"text": "Reimers and Gurevych (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 430,
"end": 438,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 499,
"end": 507,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Example 4",
"sec_num": null
},
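{
"text": "The SBert comparison reported above can be approximated with the sentence-transformers library. The sketch below is ours; the checkpoint name is an assumption rather than the one used in the paper, so the similarity values will differ somewhat, but the small gap between the mistranslation and a correct translation is the point being illustrated.\n\nfrom sentence_transformers import SentenceTransformer, util\n\n# Any pretrained SBert checkpoint serves the illustration; this model name is an\n# assumption, not necessarily the checkpoint used for the figures in the paper.\nmodel = SentenceTransformer('all-MiniLM-L6-v2')\n\nreference = \"What is this amount of happiness, I don't understand!\"\nmistranslation = \"What is this amount of anger, I don't understand!\"\ngood_translation = \"What is all this cheerfulness, I don't understand\"\n\nembeddings = model.encode([reference, mistranslation, good_translation], convert_to_tensor=True)\nprint('reference vs mistranslation  :', float(util.cos_sim(embeddings[0], embeddings[1])))\nprint('reference vs good translation:', float(util.cos_sim(embeddings[0], embeddings[2])))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4",
"sec_num": null
},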
{
"text": "In the following section, we conduct an experiment to quantify the divergence of the above mentioned metrics from the human perception of a proper sentiment transfer in a translated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example 4",
"sec_num": null
},
{
"text": "We have shown in the previous section that three canonical MT evaluation metrics do not give a penalty proportional to sentiment-critical errors on segment-level by an MT online tool. In order to quantify how far the aforementioned quality metrics diverge or correlate with human judgement of sentiment transfer, we measure the performance of each metric on a dataset of tweets that had sentiment-critical translation errors. To compile this data, first we used the Twitter built-in translation system (Google API) to translate a dataset of tweets annotated for sentiment 2 . The source dataset amounted to \u22487,000 tweets in three languages: English, Arabic and Spanish. We translated the Arabic and Spanish into English and the English was translated into Spanish and Arabic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "To extract instances where the MT system failed to translate the sentiment correctly, we built an English sentiment-detection classifier by fine-tuning a Roberta XML model (Liu et al., 2019) on an English dataset of 23,000 tweets annotated for sentiment 3 . The English classifier was used to predict the sentiment of the Google API output for the translation of the Arabic and Spanish tweets into English and the English back-translation of the English tweets translated into Spanish and Arabic. The classifier's predicted sentiment was compared to the gold standard emotion of the source text, and instances of discrepancy were extracted as potential mistranslations of sentiment. Finally, from the extracted instances, we manually built a translation quality evaluation dataset.",
"cite_spans": [
{
"start": 172,
"end": 190,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
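{
"text": "Schematically, the extraction step described above can be written as follows; this is our own sketch, and the two callables stand in for the Google API and the fine-tuned sentiment classifier rather than reproducing their actual interfaces.\n\ndef extract_candidate_mistranslations(tweets, translate, predict_sentiment):\n    # tweets: iterable of (source_text, gold_sentiment_label) pairs\n    # translate: callable mapping a source tweet to its (back-)translation into English\n    # predict_sentiment: callable mapping English text to a sentiment label\n    candidates = []\n    for source, gold_label in tweets:\n        hypothesis = translate(source)\n        if predict_sentiment(hypothesis) != gold_label:\n            # A disagreement with the gold sentiment of the source flags a\n            # potential mistranslation of sentiment, to be verified manually.\n            candidates.append((source, hypothesis, gold_label))\n    return candidates",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},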
{
"text": "The quality evaluation dataset, henceforth QE, consisted of target tweets where the error is exclusively a mistranslation of the sentiment-carrying lexicon. In these tweets, the mistranslations either completely flip the sentiment polarity of the source tweet, similar to Examples 3 and 4 above, or transfer the same polarity but with a mitigated sentiment tone. The tweets with exclusive sentiment translation errors amounted to 300 tweets. We also added 100 tweet/translation pairs where the MT system transfers the correct sentiment. Reference translations of the QE dataset were created by native speakers of the respective source languages. Essentially, the reference translations aimed at correcting chunks that caused a distortion of the affect message and retained as many of the hypothesis n-grams as possible to detect how far each metric is sensitive to sentiment mismatching and not to the mismatch in other non-sentiment carrying words. The translators were also asked to assign a score to each pair of source-hypothesis tweet, where 1 is the poorest sentiment transfer and 10 is best sentiment transfer. The average scores of annotators were taken as the final human score 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "To quantify the ability of the three metrics explained in Section 2 to assess the transfer of sentiment in the QE dataset, we compared their scores of the translation hypotheses with the human judgement scores for sentiment transfer on the segmentlevel. We followed the WMT standard methods for evaluating quality metrics and used absolute Pearson correlation coefficient r and the Kendall correlation coefficient |\u03c4 | to evaluate each metric's performance against the human judgement. Figures 2 and 3 show heatmaps visualising the Pearson and Kendall correlation coefficients for the studied metrics and the human scores, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 486,
"end": 502,
"text": "Figures 2 and 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
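{
"text": "A minimal sketch (ours) of the two correlation statistics, computed with SciPy over parallel lists of segment-level metric scores and human scores:\n\nfrom scipy.stats import pearsonr, kendalltau\n\ndef correlate_with_human(metric_scores, human_scores):\n    # Absolute Pearson r and Kendall tau between a metric's segment-level scores\n    # and the human judgements of sentiment transfer, as in the WMT protocol.\n    r, _ = pearsonr(metric_scores, human_scores)\n    tau, _ = kendalltau(metric_scores, human_scores)\n    return abs(r), abs(tau)\n\n# toy illustration with made-up numbers\nprint(correlate_with_human([0.92, 0.46, 0.85, 0.20], [3.0, 8.0, 2.0, 9.0]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},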
{
"text": "As seen from Figures 2 and 3 , both BERTScore and METEOR achieve a better correlation with the human judgement than BLEU 5 which achieves only 0.16 and 0.12 Pearson and Kendall correlations, respectively. However, the relatively overall low correlations (max r = 0.39 and max |\u03c4 | = 0.27) raise important doubts as to the reliability of these 4 Annotators were computational linguists working on MT research. 5 We use the Sacrebleu implementation of the BLEU score for all the experiments (Post, 2018) . accepted metrics for ranking MT systems which translate sentiments. Bearing in mind that 75% of the segments in the QE dataset have critical translation errors that seriously distort the sentiment, the low correlation results highlight the need for a sentiment-targeted measure that can improve a metric's efficacy in capturing mistranslated sentiment by an MT system in real-life scenarios.",
"cite_spans": [
{
"start": 409,
"end": 410,
"text": "5",
"ref_id": null
},
{
"start": 489,
"end": 501,
"text": "(Post, 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 13,
"end": 28,
"text": "Figures 2 and 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "4 Sentiment-Aware Measure (SAM) for Machine Translated UGC",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "In this section, we propose a new measure for assessing MT performance that takes into account the sentiment similarity between the MT system translation and the reference. This sentiment measure should be used as a fine-tuning tool to adjust a quality score in cases where it is used to assess the translation quality of sentiment-oriented text. The SAM score is calculated by using the SentiWord dictionary of prior polarities (Gatti et al., 2015) . SentiWord is a sentiment lexicon that combines the high precision of manual lexica and the high coverage of automatic ones (covering 155,000 words). It is based on assigning a 'prior polarity' score for each lemma-POS in both SentiWordNet and a number of human-annotated sentiment lexica (Baccianella et al., 2010; Warriner et al., 2013) . The prior polarity is the out-of-context positive or negative score which a lemma-POS evokes. It is reached via an ensemble learning framework that combines several formulae where each lemma-POS is given the score that receives the highest number of votes from the different formulae. SentiWord prior polarity scores have been proven to achieve the highest correlation with human scores in sentiment analysis regression and classification SemEval tasks (Gatti et al., 2015) . We assume that our sentiment adjustment factor, SAM, is proportional to the distance between the sentiment scores of the unmatched words in the system translation and the reference, the higher the distance the greater the SAM adjustment. To calculate the SAM score, we designate the number of remaining mismatched words in the system translation and reference translation by m and n, respectively. We calculate the total SentiWord sentiment score for the lemma-POS of the mismatched words in the translation and reference sentences using a weighted average of the sentiment score of each mismatched lemma-POS. The weight of a hypothesis mismatched word w h and a reference mismatched word w r is calculated based on the sentiment score of its lemma-POS, s, as follows:",
"cite_spans": [
{
"start": 429,
"end": 449,
"text": "(Gatti et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 740,
"end": 766,
"text": "(Baccianella et al., 2010;",
"ref_id": "BIBREF0"
},
{
"start": 767,
"end": 789,
"text": "Warriner et al., 2013)",
"ref_id": "BIBREF25"
},
{
"start": 1245,
"end": 1265,
"text": "(Gatti et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "w i h = |s i | i = 1, 2, . . . , m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w i r = |s i | i = 1, 2, . . . , n.",
"eq_num": "(2)"
}
],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "Then the total sentiment score for hypothesis S h and reference S r is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "S h = m i=1 \u03b1 i s i , \u03b1 i = w i h m i=1 w i h (3) S r = n i=1 \u03b2 i s i , \u03b2 i = w i r n i=1 w i r (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "The normalised SAM adjustment is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p = |S r \u2212 S h | 2",
"eq_num": "(5)"
}
],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "and the translation final score will be given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Score = C hr (1 \u2212 p)",
"eq_num": "(6)"
}
],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "where C hr can be any metric's matching score between a translation hypothesis and a reference segment. For this experiment, we will measure C hr as the BLEU, METEOR and BERTScore scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
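{
"text": "A compact reference implementation of Equations (1)-(6), written by us from the description above; the prior-polarity inputs are assumed to come from a SentiWord-style lookup of the mismatched lemma-POS items, and the fallback for the case where no mismatched word carries sentiment is our own choice, as the paper does not spell it out.\n\ndef sam_adjusted_score(base_score, hyp_mismatch_polarities, ref_mismatch_polarities):\n    # base_score: C_hr, any metric's hypothesis-reference score (BLEU, METEOR, BERTScore, ...)\n    # *_mismatch_polarities: SentiWord prior polarities in [-1, 1] for the lemma-POS\n    # of the words left unmatched in the hypothesis and in the reference.\n    def weighted_sentiment(polarities):\n        # Equations (1)-(4): the weights are the absolute polarities, so strongly\n        # polarised mismatches dominate the aggregate sentiment of each side.\n        weights = [abs(s) for s in polarities]\n        total = sum(weights)\n        if total == 0:\n            return 0.0  # no sentiment-bearing mismatch (our assumption, see lead-in)\n        return sum(w * s for w, s in zip(weights, polarities)) / total\n    s_h = weighted_sentiment(hyp_mismatch_polarities)\n    s_r = weighted_sentiment(ref_mismatch_polarities)\n    p = abs(s_r - s_h) / 2           # Equation (5): normalised sentiment distance\n    return base_score * (1 - p)      # Equation (6): SAM-adjusted metric score\n\n# Example 4 above: the mismatched pair 'anger#n' (-0.669) vs 'happiness#n' (0.856)\n# pulls BERTScore down from 0.85 to about 0.20, matching Table 1.\nprint(round(sam_adjusted_score(0.85, [-0.669], [0.856]), 2))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},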
{
"text": "To illustrate how SAM score adjusts a metric score with respect to the transfer of sentiment , table 1 shows the SAM score adjustment for Examples 3 and 4 above. As can be seen from Table 1 , the metric score C hr (here METEOR and BERTScore, respectively) is significantly reduced due to the sentiment distance between the mismatched words (w h , w r ) as well as their sentiment weights (S h , S r ). The reduced scores are more representative of the distortion of sentiment produced by the MT system in Examples 3 and 4. The SAM adjustment, therefore, is a targeted-measure that can fine-tune a metric according to the 'sentiment-closeness' of the translation and the reference 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 102,
"text": ", table 1",
"ref_id": "TABREF0"
},
{
"start": 182,
"end": 189,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluation of Mistranslated Sentiment",
"sec_num": "3"
},
{
"text": "To check how far the SAM measure can improve the evaluation of sentiment, we assessed the performance of the three chosen metrics on the QE dataset utilised in the experiment in Section 3 with the SAM adjustment. Figures 4 and 5 show heatmaps of the Pearson and Kendall correlations of the SAM adjusted metrics with the human judgement in the QE dataset. Compared to metric scores without the SAM adjustment, Figures 4 and 5 show that the combination of SAM with the three metrics consistently leads to an improvement in each metric's correlation with the human judgement of a correct sentiment transfer in our dataset. Overall, BERTScore records the highest Pearson correlation coefficients and Kendall rank dependence (0.59 and 0.44, respectively). This means that it is better able to penalise critical translation errors in our QE dataset. Moreover, compared to their scores without SAM, 1223 both BERTScore and METEOR record 20% and 18% higher correlation, respectively. It is also worthy of notice, that although the Pearson correlation of BLEU improves from 0.16 to 0.33 with the SAM adjustment, the correlation is still relatively small. Knowing that BLEU score is usually the gate-keeping tool for accepting improvement in MT research, the results cast doubt on the efficacy of this non-semantic method to penalise sentimentcritical translation errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 228,
"text": "Figures 4 and 5",
"ref_id": "FIGREF3"
},
{
"start": 409,
"end": 424,
"text": "Figures 4 and 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Assessing SAM performance",
"sec_num": "5"
},
{
"text": "Human sentiment can be expressed in intricately subtle ways so that the mistranslation of the affect message is not necessarily reflected in divergence of polarity scores. We conducted an error analysis on instances where the SAM adjustment scores were not able to capture the MT's failure to transfer the correct sentiment due to different linguistic phenomena. The first phenomenon we identified is related to structure shifting. For example, the sentiment distance between the source tweet 'I was saddened by him' and its mistranslation 'I made him sad' is very small despite the flipping of sentiment direction in the translation. The three metrics with and without the SAM adjustment failed to penalise this type of distortion in the QE dataset. Second, some words in the lexicon did not have a score representative of their sentiment weight. For example, most prepositions in the SentiWord lex-icon are neutral, yet by checking the data, it was found out that a mistranslation of a preposition can distort the affect message. For example, the source tweet 'What is the benefit of me in this world' was mistranslated by the MT system as 'What is the benefit for me in this world'; the wrong preposition causes the translation to lose the sad tone in the source tweet. Again, similar instances were not adequately measured with the three metrics with and without the SAM adjustment. Third, some nuanced sentiment-carrying words specific for the informal style of tweets caused a mistranslation of sentiment which was not captured by our approach. For instance, a one-word tweet referring to a political figure as 'Prick' was mistranslated as 'Sting'. The source is a slang word used to refer to a mean, contemptible man. The translation wrongly received a high score since both 'prick' and 'sting' had very similar negative sentiment weights. The translation, however, fails to reflect the aggressive sentiment in the source tweet. The BLEU metric, on the other hand, succeeded in giving a penalty to similar short mistranslated tweets without the SAM adjustment. One other limitation to the current approach for assessing the translation of sentiment is that it relies on an English sentiment lexicon. The applicability of this approach to other languages depends on the availability of a similar high-precision and highcoverage sentiment lexicon. We have overcome this problem by using the English backtranslations of the Arabic and Spanish tweets in the QE dataset. There are, however, multilingual translations of English sentiment lexica that are commonly used in the sentiment-analysis of non-English text. It remains to be tested how far a translated sentiment lexicon is capable of measuring sentiment transfer among different language pairs by the SAM scoring approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis and Limitations",
"sec_num": "6"
},
{
"text": "The most frequent scenario in which an MT system is used to translate sentiment-oriented text is for the translation of online UGC such as tweets, reviews or social media posts. The users of the MT online tool take these translations at face-value as there is no human intervention for accuracy checking. It is important, therefore, to ensure the reliability of the MT system to accurately transfer the author's affect message before it is used as an online tool. The reliability of an MT system with such big data is commonly measured by automatic metrics. We have shown that conventionally accepted metrics may not always be an optimum solution for assessing the translation of sentiment-oriented UGC. We presented an empirical approach to quantify the notion of sentiment transfer by an MT system and, more concretely, to modify automatic metrics such that its MT ranking comes closer to a human judgement of a poor or good translation of sentiment. Despite limitations to our approach, the SAM adjustment serves as a proxy to the complicated task of manually evaluating the translation of sentiment across different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://twitter.com/gaston810/status/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset is collected from SemEval 2018 shared sentiment detection Task (Mohammad et al., 2018).3 The English dataset is collected from different sentiment detection tasks(Lo, 2019;Mohammad and Bravo-Marquez, 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research work has been supported in part by the TranSent project at the Centre for Translation Studies (CTS) of the University of Surrey.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Appendix A Examples for Mistranslated Tweets Measured with and without SAM adjustment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Sentiwordnet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining",
"authors": [
{
"first": "Stefano",
"middle": [],
"last": "Baccianella",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Esuli",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
}
],
"year": 2010,
"venue": "Lrec",
"volume": "10",
"issue": "",
"pages": "2200--2204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebas- tiani. 2010. Sentiwordnet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Min- ing. In Lrec, volume 10, pages 2200-2204.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Findings of the 2019 conference on machine translation (WMT19)",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "1--61",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5301"
]
},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine transla- tion (WMT19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Meteor universal: Language specific translation evaluation for any target language",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "376--380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Alon Lavie. 2014. Meteor uni- versal: Language specific translation evaluation for any target language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 376-380.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sentiwords: Deriving a high precision and high coverage lexicon for sentiment analysis",
"authors": [
{
"first": "Lorenzo",
"middle": [],
"last": "Gatti",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Guerini",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Transactions on Affective Computing",
"volume": "7",
"issue": "4",
"pages": "409--421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lorenzo Gatti, Marco Guerini, and Marco Turchi. 2015. Sentiwords: Deriving a high precision and high cov- erage lexicon for sentiment analysis. IEEE Transac- tions on Affective Computing, 7(4):409-421.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "METEOR++ 2.0: Adopt Syntactic Level Paraphrase Knowledge into Machine Translation Evaluation",
"authors": [
{
"first": "Yinuo",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Junfeng",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "501--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinuo Guo and Junfeng Hu. 2019. METEOR++ 2.0: Adopt Syntactic Level Paraphrase Knowledge into Machine Translation Evaluation. In Proceedings of the Fourth Conference on Machine Translation (Vol- ume 2: Shared Task Papers, Day 1), pages 501-506.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "YiSi-A unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources",
"authors": [
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "507--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi-kiu Lo. 2019. YiSi-A unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 2: Shared Task Papers, Day 1), pages 507-513.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Extended study on using pretrained language models and YiSi-1 for machine translation evaluation",
"authors": [
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "895--902",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi-kiu Lo. 2020. Extended study on using pretrained language models and YiSi-1 for machine translation evaluation. In Proceedings of the Fifth Conference on Machine Translation, pages 895-902.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Results of the WMT18 metrics shared task: Both characters and embeddings achieve good performance",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "671--688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Ond\u0159ej Bojar, and Yvette Graham. 2018. Results of the WMT18 metrics shared task: Both characters and embeddings achieve good perfor- mance. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 671-688.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics",
"authors": [
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.06264"
]
},
"num": null,
"urls": [],
"raw_text": "Nitika Mathur, Tim Baldwin, and Trevor Cohn. 2020a. Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics. arXiv preprint arXiv:2006.06264.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Qingsong Ma, and Ond\u0159ej Bojar. 2020b. Results of the WMT20 Metrics Shared Task",
"authors": [
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Johnny",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "688--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitika Mathur, Johnny Wei, Markus Freitag, Qing- song Ma, and Ond\u0159ej Bojar. 2020b. Results of the WMT20 Metrics Shared Task. In Proceedings of the Fifth Conference on Machine Translation, pages 688-725.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "WASSA-2017 shared task on emotion intensity",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis (WASSA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad and Felipe Bravo-Marquez. 2017. WASSA-2017 shared task on emotion intensity. In Proceedings of the Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Me- dia Analysis (WASSA), Copenhagen, Denmark.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semeval-2018 Task 1: Affect in tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalua- tion (SemEval-2018), New Orleans, LA, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mee: An automatic metric for evaluation using embeddings for machine translation",
"authors": [
{
"first": "Ananva",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Hema",
"middle": [],
"last": "Ala",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Dipti Misra",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA)",
"volume": "",
"issue": "",
"pages": "292--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ananva Mukherjee, Hema Ala, Manish Shrivastava, and Dipti Misra Sharma. 2020. Mee: An automatic metric for evaluation using embeddings for machine translation. In 2020 IEEE 7th International Con- ference on Data Science and Advanced Analytics (DSAA), pages 292-299. IEEE.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Wordnet: Similarity-measuring the relatedness of concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "AAAI",
"volume": "4",
"issue": "",
"pages": "25--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Siddharth Patwardhan, Jason Michelizzi, et al. 2004. Wordnet: Similarity-measuring the re- latedness of concepts. In AAAI, volume 4, pages 25- 29.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.08771"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10084"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. arXiv preprint arXiv:1908.10084.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A Structured Review of the Validity of BLEU",
"authors": [
{
"first": "Ehud",
"middle": [],
"last": "Reiter",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "3",
"pages": "393--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ehud Reiter. 2018. A Structured Review of the Validity of BLEU. Computational Linguistics, 44(3):393- 401.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Is it great or terrible? preserving sentiment in neural machine translation of Arabic reviews",
"authors": [
{
"first": "Hadeel",
"middle": [],
"last": "Saadany",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "24--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hadeel Saadany and Constantin Orasan. 2020. Is it great or terrible? preserving sentiment in neural ma- chine translation of Arabic reviews. In Proceedings of the Fifth Arabic Natural Language Processing Workshop, pages 24-37, Barcelona, Spain (Online). Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Qijun Tan, Markus Freitag, Dipanjan Das, and Ankur P. Parikh. 2020. Learning to evaluate translation beyond english: Bleurt submissions to the wmt metrics 2020 shared task",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Pu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hyung Won",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gehrmann",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Amy Pu, Hyung Won Chung, Sebas- tian Gehrmann, Qijun Tan, Markus Freitag, Dipan- jan Das, and Ankur P. Parikh. 2020. Learning to evaluate translation beyond english: Bleurt submis- sions to the wmt metrics 2020 shared task.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Automatic machine translation evaluation using source language inputs and cross-lingual language model",
"authors": [
{
"first": "Kosuke",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3553--3558",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.327"
]
},
"num": null,
"urls": [],
"raw_text": "Kosuke Takahashi, Katsuhito Sudoh, and Satoshi Naka- mura. 2020a. Automatic machine translation evalu- ation using source language inputs and cross-lingual language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 3553-3558, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Grammatical error correction using pseudo learner corpus considering learner's error tendency",
"authors": [
{
"first": "Yujin",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Satoru",
"middle": [],
"last": "Katsumata",
"suffix": ""
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "27--32",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-srw.5"
]
},
"num": null,
"urls": [],
"raw_text": "Yujin Takahashi, Satoru Katsumata, and Mamoru Ko- machi. 2020b. Grammatical error correction us- ing pseudo learner corpus considering learner's er- ror tendency. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics: Student Research Workshop, pages 27-32, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior research methods",
"authors": [
{
"first": "Amy",
"middle": [
"Beth"
],
"last": "Warriner",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Kuperman",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Brysbaert",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "45",
"issue": "",
"pages": "1191--1207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amy Beth Warriner, Victor Kuperman, and Marc Brys- baert. 2013. Norms of valence, arousal, and dom- inance for 13,915 English lemmas. Behavior re- search methods, 45(4):1191-1207.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "BERTScore: Evaluating text generation with Bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Kilian",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09675"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with Bert. arXiv preprint arXiv:1904.09675.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Visualisation of Contextual Embeddings of Sentences in Example 4",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Absolute Pearson correlations with segment-level human judgements for the QE dataset",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Kendall correlations with segment-level human judgements for the QE dataset",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Absolute Pearson correlations with SAM Adjustment for the QE dataset",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td colspan=\"2\">Example w h</td><td>w r</td><td>S h</td><td>S r</td><td>p</td><td>C hr Score+SAM</td></tr><tr><td>3</td><td>him#a</td><td>not#r</td><td>0</td><td>-1.0</td><td>0.5</td><td>0.92 0.46</td></tr><tr><td>4</td><td colspan=\"6\">anger#n happiness#n -0.669 0.856 0.762 0.85 0.20</td></tr><tr><td colspan=\"4\">Figure 5: Kendall correlations with SAM adjustment for the</td><td/><td/><td/></tr><tr><td>QE dataset</td><td/><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"text": "Calculating the SAM adjustment for Examples 3 and 4",
"num": null,
"type_str": "table"
}
}
}
}