{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:30:58.474122Z"
},
"title": "Language Invariant Properties in Natural Language Processing",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University",
"location": {
"addrLine": "Via Sarfatti 25",
"settlement": "Milan",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University",
"location": {
"addrLine": "Via Sarfatti 25",
"settlement": "Milan",
"country": "Italy"
}
},
"email": "[email protected]"
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bocconi University",
"location": {
"addrLine": "Via Sarfatti 25",
"settlement": "Milan",
"country": "Italy"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Meaning is context-dependent, but many properties of language (should) remain the same even if we transform the context. For example, sentiment or speaker properties should be the same in a translation and original of a text. We introduce language invariant properties: i.e., properties that should not change when we transform text, and how they can be used to quantitatively evaluate the robustness of transformation algorithms. Language invariant properties can be used to define novel benchmarks to evaluate text transformation methods. In our work we use translation and paraphrasing as examples, but our findings apply more broadly to any transformation. Our results indicate that many NLP transformations change properties. We additionally release a tool as a proof of concept to evaluate the invariance of transformation applications.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Meaning is context-dependent, but many properties of language (should) remain the same even if we transform the context. For example, sentiment or speaker properties should be the same in a translation and original of a text. We introduce language invariant properties: i.e., properties that should not change when we transform text, and how they can be used to quantitatively evaluate the robustness of transformation algorithms. Language invariant properties can be used to define novel benchmarks to evaluate text transformation methods. In our work we use translation and paraphrasing as examples, but our findings apply more broadly to any transformation. Our results indicate that many NLP transformations change properties. We additionally release a tool as a proof of concept to evaluate the invariance of transformation applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The progress in Natural Language Processing has bloomed in recent years, with novel neural models being able to beat the score of different benchmarks. However, current evaluation benchmarks often do not look at how properties of language vary when text is transformed or influenced by a change in context. For example, the meaning of a sentence is influenced by a host of factors, among them who says it and when: \"That was a sick performance\" changes meaning depending on whether a 16-yearold says it at a concert or a 76-year-old after the opera. 1 However, there are several properties of language that do (or should) not change when we transform a text (i.e., change the surface form of it to another text, see also Section 2). If the text was written by a 25-year-old female, it should not be perceived as written by an old man after we apply a paraphrasing algorithm. The same goes for other properties, like sentiment: A positive message like \"good morning!\", posted on social media, should be perceived as a positive message, even when it is translated into another language. 2 We refer to these properties that are unaffected by transformations as Language Invariant Properties (LIPs). LIPs preserve the semantics and pragmatic components of language. I.e., these properties are not affected by transformations applied to the text. For example, we do not expect a summary to change the topic of a sentence.",
"cite_spans": [
{
"start": 1085,
"end": 1086,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Paraphrasing, summarization, style transfer, and machine translation are all NLP transformation tasks that should respect LIPs. If they do not, it is a strong indication that the system is picking up on spurious signals and needs to be recalibrated. For example, machine translation should not change speaker demographics or sentiment, and paraphrasing should not change entailment or topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "But what happens if a transformation does violate invariants? Violating invariants is similar to breaking the cooperative principle (Grice, 1975 ): if we do it deliberately, we might want to achieve an effect. For example, Reddy and Knight (2016) showed how words can be replaced to obfuscate author gender, thereby protecting their identity. Style transfer can therefore be construed as a deliberate violation of LIPs. In most cases, though, violating a LIP will result in an unintended outcome or interpretation of the transformed text: for example, violating LIPs on sentiment will generate misunderstanding in the interpretation of messages. Any such violation might be a signal that models are not ready for production .",
"cite_spans": [
{
"start": 132,
"end": 144,
"text": "(Grice, 1975",
"ref_id": "BIBREF10"
},
{
"start": 223,
"end": 246,
"text": "Reddy and Knight (2016)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we suggest a novel type of evaluation benchmark based on LIPs. We release a tool as a proof of concept of how this methodology can be introduced into evaluation pipelines: we define the concept of LIPs, but also integrate Figure 1 : Author age is a Language Invariant Property (LIP). Translation system 1 fails to account for this and provides a translation that can give the wrong interpretation to the sentence. Translation system 2 is instead providing a more correct interpretation.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 245,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "insights from Hovy et al. (2020) , defining an initial benchmark to study LIPs in two of the most wellknown transformation tasks: machine translation and paraphrasing. We apply those principles more broadly to transformations in NLP as a whole.",
"cite_spans": [
{
"start": 14,
"end": 32,
"text": "Hovy et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Contributions. We introduce LIPs: properties of language that should not change during a transformation. Our contribution also focuses on the proposal of an evaluation methodology for LIPs and the release of a Python application that can be used to test how well systems can preserve LIPs. 3 We believe that this contribution can help the community to work on benchmarking and understanding how properties change when text is transformed.",
"cite_spans": [
{
"start": 290,
"end": 291,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To use the concept of LIPs, we first need to make clear what we mean by it. We formally define LIPs and transformations below. Assume the existence of a set S of all the possible utterable sentences. Let us define A and B as subsets of S. These can be in the same or different languages. Now, let's define a mapping function t : A \u2192 B i.e., t(\u2022) is a transformation that changes the surface form of the text A into B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
{
"text": "A language property p is a function that maps elements of S to a set P of property values. p is invariant if and only if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
{
"text": "\u2200a \u2208 A p(a) = p(t(a)) = p(b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
{
"text": "where b \u2208 B, and t(a) = b. I.e., if applying p(\u2022) to both an utterance and its transformation still maps to the same property. We do not provide an exhaustive list of these properties, but suggest to include at least meaning, topic, sentiment, speaker demographics, and logical entailment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
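To make the definition concrete, here is a minimal sketch in Python (the released tool is a Python application); `transform` and `property_fn` are hypothetical callables standing in for t(·) and p(·), not names from the tool:

```python
from typing import Callable, Iterable

def is_invariant(
    texts: Iterable[str],
    transform: Callable[[str], str],    # t : A -> B, e.g., a translation system
    property_fn: Callable[[str], str],  # p : S -> P, e.g., a sentiment classifier
) -> bool:
    """Empirically check the invariance condition p(a) = p(t(a)) over a sample."""
    return all(property_fn(a) == property_fn(transform(a)) for a in texts)
```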
{
"text": "LIPs are thus based on the concept of text transformations. Machine translation (MT) is a salient example of a transformation and a prime example of a task for which LIPs are important. MT can be viewed as a transformation between two languages where the main fundamental LIP that should not be broken is meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
{
"text": "However, LIPs are not restricted to MT but have broader applicability, e.g., in style transfer. In that case, though, some context has to be defined. When applying a formal to polite transfer, this function is by definition not invariant anymore. Nonetheless, many other properties should not be influenced by this transformation. Finally, for paraphrasing, we have only one language, but we have the additional constraint that t(a) \u0338 = a. For summarization, the constraint instead is that len(t(a)) < len(a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
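As a small illustrative sketch (with assumed helper names, not from the released tool), the task-specific constraints above can be written as predicates over an input a and its transformation b = t(a):

```python
def valid_paraphrase(a: str, b: str) -> bool:
    # Paraphrasing stays within one language and requires t(a) != a.
    return b != a

def valid_summary(a: str, b: str) -> bool:
    # Summarization additionally requires len(t(a)) < len(a).
    return len(b) < len(a)
```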
{
"text": "LIPs are also what make some tasks in language more difficult than others: for example, data augmentation (Feng et al., 2021) cannot be as easily implemented in text data as in image processing, since even subtle changes to a sentence can affect meaning and style. Changing the slant or skew of a photo will still show the same object, but for example word replacement easily breaks LIPs, since the final meaning of the final sentence and the perceived characteristics can differ. Even replacing a word with one that is similar can affect LIPs. For example, consider machine translation with a parallel corpus: \"the dogs are running\" can be paired with the translation \"I cani stanno correndo\" in Italian. If we were to do augmentation, replacing dogs with its hyperonym animals does not corrupt the overall meaning, as the new English sentence still entails all that is entailed by the old one. However, the Italian example is no longer a correct translation of the new sentence, since cani is not the word for animals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
{
"text": "LIPs are also part of the communication between speakers. The information encoded in a sentence uttered by one speaker contains LIPs that are important for efficient communication, as misunderstanding a positive comment as a negative one can create issues between communication partners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
{
"text": "Note that we are not interested in evaluating the quality of the transformation (e.g., the translation or paraphrase). There are many different metrics and evaluation benchmarks for that (BLEU, ROUGE, BERTscore etc.: Papineni et al., 2002; Lin, 2004; Zhang et al., 2020b) . Our analysis concerns another aspect of communication for which we wish to propose an initial benchmark.",
"cite_spans": [
{
"start": 217,
"end": 239,
"text": "Papineni et al., 2002;",
"ref_id": "BIBREF23"
},
{
"start": 240,
"end": 250,
"text": "Lin, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 251,
"end": 271,
"text": "Zhang et al., 2020b)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Invariant Properties",
"sec_num": "2"
},
{
"text": "There have been different works in NLP that have investigated issues arising from language technology (Hovy and Spruit, 2016; Blodgett et al., 2020; Bolukbasi et al., 2016; Gonen and Goldberg, 2019; Lauscher et al., 2020; Bianchi et al., 2021a; Dev et al., 2020; Sheng et al., 2019; . In our paper, we focus on issues that can arise from the usage of text transformation algorithms (for example, we will see examples of gender bias in transformation, inspired by (Hovy et al., 2020) , in Section 5) and we describe a method that can allow us to analyze them.",
"cite_spans": [
{
"start": 102,
"end": 125,
"text": "(Hovy and Spruit, 2016;",
"ref_id": "BIBREF13"
},
{
"start": 126,
"end": 148,
"text": "Blodgett et al., 2020;",
"ref_id": "BIBREF5"
},
{
"start": 149,
"end": 172,
"text": "Bolukbasi et al., 2016;",
"ref_id": "BIBREF6"
},
{
"start": 173,
"end": 198,
"text": "Gonen and Goldberg, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 199,
"end": 221,
"text": "Lauscher et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 222,
"end": 244,
"text": "Bianchi et al., 2021a;",
"ref_id": "BIBREF2"
},
{
"start": 245,
"end": 262,
"text": "Dev et al., 2020;",
"ref_id": "BIBREF7"
},
{
"start": 263,
"end": 282,
"text": "Sheng et al., 2019;",
"ref_id": "BIBREF28"
},
{
"start": 463,
"end": 482,
"text": "(Hovy et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "The idea that drives LIPs have spawned across different work in the NLP literature; For example, Poncelas et al. (2020) discuss the effect that machine translation can have on sentiment classifiers. At the same time, ideas of conserving meaning during style transfer are also presented in the work by Hu et al. (2020) . We propose LIPs as a term to give a unified view on the problem of preserving these properties during transformation.",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "Poncelas et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 301,
"end": 317,
"text": "Hu et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "LIPs share some notions with the checklist by Ribeiro et al. (2020) and the adversarial reliability checks by Tan et al. (2021) . However, LIPs evalu-ate how well fundamental properties of discourse are preserved in a transformation, the checklist is made to guide users in a fine-grained analysis of the model performance to better understand bugs in the applications with the use of templates. As we will show later, LIPs can be quickly tested to any new annotated dataset. Some of the checklist's tests, like Replace neutral words with other neutral words, can be seen as LIPs. The general idea of adversarial attacks, meanwhile, also requires LIPs to hold in order to work. Nonetheless, we think the frameworks are complementary.",
"cite_spans": [
{
"start": 110,
"end": 127,
"text": "Tan et al. (2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "For ease of reading, we will use translation as an example of a transformation in the following. However, the concept can be applied to any of the transformations we mentioned above. We start with a set of original texts A to translate into a set of texts B. 4 We thus need a translation model t from the source language of A to a target language of B. To test the transformation wrt a LIP, A should be annotated with that language property of interest, this is our ground truth and we are going to refer to this asp(A). We also need a classifier for the LIP of interest, which serves as language property function p. For example, a LIP classifier could be a gender classifier that, given an input text, returns the inferred gender of the speaker. Here, we need one cross-lingual classifier, or two classifiers, one in the source and one in the target language. 5 Once we apply the translation, we can use the LIP classifier on the original data A and the new set of translated data B obtaining respectively, p(A) and p(B).",
"cite_spans": [
{
"start": 259,
"end": 260,
"text": "4",
"ref_id": null
},
{
"start": 862,
"end": 863,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarking Transformation Invariance",
"sec_num": "4"
},
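A minimal sketch of this procedure, assuming scikit-learn-style classifiers that accept raw text (e.g., Pipelines) and a `translate` callable; all names here are illustrative, not the released tool's API:

```python
def run_lip_benchmark(texts_A, p_hat_A, translate, clf_src, clf_tgt):
    """Return the gold labels and the two predicted label lists to compare.

    texts_A:   original texts, annotated with the gold LIP labels p_hat_A.
    translate: the transformation t (here, a translation system).
    clf_src/clf_tgt: LIP classifiers for the source and target language
    (or the same cross-lingual classifier passed twice).
    """
    texts_B = [translate(a) for a in texts_A]  # apply the transformation t
    p_A = clf_src.predict(texts_A)             # predictions on the originals
    p_B = clf_tgt.predict(texts_B)             # predictions on the transformations
    return p_hat_A, p_A, p_B
```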
{
"text": "We can then compare the difference between the distribution of the LIP in the original data and either prediction. I.e., we compare the differences in distribution ofp(A) \u2212 p(A) top(A) \u2212 p(B) to understand the effect of the transformations. We show a visual explanation on how to benchmark LIPs in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 306,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Benchmarking Transformation Invariance",
"sec_num": "4"
},
{
"text": "Note that we are not interested in the actual performance of the classifiers, but in the difference in performance on the two datasets. We observe two possible phenomena (as in Hovy et al. (2020) previously trained LIP classifiers. Trained on data coming from a similar distribution (i.e., we can also split the dataset to get this data) 1) If there is a classifier bias, both the predictions based on the original language and the predictions based on the translations should be skewed in the same direction with respect to the distribution in A. E.g., for gender classification, both classifiers predict a higher rate of male authors in the original and the translated text. 2) Instead, if there is a transformation bias, then the distribution of the translated predictions should be skewed in a different direction than the one based on the original language. E.g., the gender distribution in the original language should be less skewed than the gender ratio in the translation. Note that we assume that the LIP classifiers used for the source and one in the target language have similar biases; if this were not true and the classifiers had different biases phenomena 1) could be caused both by the bias in translations or bias in the models. This mostly depends on the quality of the classifiers, that has to be assessed before the evaluation of the LIPs.",
"cite_spans": [
{
"start": 177,
"end": 195,
"text": "Hovy et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmarking Transformation Invariance",
"sec_num": "4"
},
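One way to operationalize this distinction is to compare how the predicted class shares deviate from the gold distribution; the sketch below (hypothetical helper names) flags classifier bias when both deviations point the same way and transformation bias otherwise:

```python
from collections import Counter

def class_share(labels, cls):
    """Fraction of predictions assigned to one class."""
    return Counter(labels)[cls] / len(labels)

def diagnose_bias(p_hat_A, p_A, p_B, cls="male"):
    skew_src = class_share(p_A, cls) - class_share(p_hat_A, cls)
    skew_tgt = class_share(p_B, cls) - class_share(p_hat_A, cls)
    if skew_src * skew_tgt > 0:
        return "classifier bias: both predictions skew in the same direction"
    return "transformation bias: the transformed predictions skew differently"
```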
{
"text": "Here, we evaluate machine translation and paraphrasing as transformation tasks. Our first release of this benchmark tool contains the datasets from Hovy et al. (2020) , annotated with gender 6 and age categories, and the SemEval dataset from Mohammad et al. (2018) annotated with emotion recognition. Moreover, we include the English dataset from HatEval (Basile et al., 2019) contain-ing tweets for hate speech detection. These datasets come with training and test splits and we use the training data to train the LIP classifiers.",
"cite_spans": [
{
"start": 148,
"end": 166,
"text": "Hovy et al. (2020)",
"ref_id": "BIBREF11"
},
{
"start": 242,
"end": 264,
"text": "Mohammad et al. (2018)",
"ref_id": "BIBREF18"
},
{
"start": 355,
"end": 376,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Nonetheless, our benchmark can be easily extended with new datasets encoding other LIPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We use a subset of the dataset by Hovy et al. (2015) . It contains TrustPilot reviews in English, Italian, German, French, and Dutch with demographic information about the user's age and gender. Training data for the different languages consists of 5k samples (balanced for gender) and can be used to build the LIP classifiers. The dataset can be used to evaluate the LIPs AUTHOR-GENDER and AUTHOR-AGE.",
"cite_spans": [
{
"start": 34,
"end": 52,
"text": "Hovy et al. (2015)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TrustPilot",
"sec_num": "4.2"
},
{
"text": "We use the English tweet data from HatEval (Basile et al., 2019) . We take the training and test set (3k examples). Each tweet comes with a value that indicates if the tweet contains hate speech. The dataset can be used to evaluate the LIP HATEFULNESS.",
"cite_spans": [
{
"start": 43,
"end": 64,
"text": "(Basile et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "HatEval",
"sec_num": "4.3"
},
{
"text": "We use the Affect in Tweets dataset (AiT) (Mohammad et al., 2018) , which contains tweets annotated with emotions. We reduce the number of possible classes by only keeping emotions in the set {fear, joy, anger, sadness} to allow for future comparisons with other datasets. We then map joy to positive and the other emotions to negative for deriving the sentiment following Bianchi et al. (2021b . The data we collected comes in English (train: 4, 257, test: 2, 149) and Spanish (train: ",
"cite_spans": [
{
"start": 42,
"end": 65,
"text": "(Mohammad et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 373,
"end": 394,
"text": "Bianchi et al. (2021b",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Affects in Tweets (AiT)",
"sec_num": "4.4"
},
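The class reduction and emotion-to-sentiment mapping described above amount to something like the following sketch (the toy rows are invented for illustration):

```python
# Toy (text, emotion) rows standing in for AiT examples.
rows = [("what a great day", "joy"), ("this is awful", "anger"), ("wow", "surprise")]

KEPT_EMOTIONS = {"fear", "joy", "anger", "sadness"}

# joy -> positive; fear, anger, sadness -> negative; other emotions are dropped.
sentiment_rows = [
    (text, "positive" if emotion == "joy" else "negative")
    for text, emotion in rows
    if emotion in KEPT_EMOTIONS
]
```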
{
"text": "Method L1 L2 KL A,p(A) KL B,p(B) Distp(A) Dist p(A) Dist p(B) SE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Affects in Tweets (AiT)",
"sec_num": "4.4"
},
{
"text": "Classifiers As default classifier we use L2regularized Logistic Regression models over 2-6 TF-IDF character-grams (Hovy et al., 2020) . Due to the great recent results of pre-trained language models (Nozza et al., 2020) , we also use SBERT (Reimers and Gurevych, 2019) to generate sentence embeddings and use these representations as input to a logistic regression (L2 regularization and balance weights). The two classification methods are referred to as TF (TF-IDF) and SE (Sentence Embeddings). Our framework supports the use of any classifiers. The advantage of this setup is that it is generally fast to set up, but interested user can also use more complex transformer models. The replicability details appear in the Appendix.",
"cite_spans": [
{
"start": 114,
"end": 133,
"text": "(Hovy et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 199,
"end": 219,
"text": "(Nozza et al., 2020)",
"ref_id": null
},
{
"start": 240,
"end": 268,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "4.5"
},
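A minimal sketch of the two default classifiers, assuming scikit-learn and the sentence-transformers package, with `train_texts`/`train_labels` standing in for one of the training splits (exact hyperparameters are in the Appendix):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sentence_transformers import SentenceTransformer

# TF: 2-6 character-gram TF-IDF features into an L2-regularized logistic regression.
tf_clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 6), sublinear_tf=True),
    LogisticRegression(penalty="l2", class_weight="balanced", max_iter=1000),
)
tf_clf.fit(train_texts, train_labels)

# SE: SBERT sentence embeddings into the same kind of logistic regression.
encoder = SentenceTransformer("paraphrase-distilroberta-base-v2")
se_clf = LogisticRegression(penalty="l2", class_weight="balanced", max_iter=1000)
se_clf.fit(encoder.encode(train_texts), train_labels)
```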
{
"text": "Scoring Standard metrics for classification evaluation can be used to assess how much LIPs are preserved during a transformation. Following Hovy et al. (2020) we use the KL divergence to compute the distance -in terms of the distribution divergence -between the two predicted distributions. The benchmark also outputs the X 2 test to assess if there is a significant difference in the predicted distributions. It is also possible to look at the plots of the distribution to understand the effects of the transformations (see following examples in Figures 3, 4 and 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 547,
"end": 566,
"text": "Figures 3, 4 and 5)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Methods",
"sec_num": "4.5"
},
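A sketch of the scoring step using SciPy, reusing the `p_hat_A` and `p_B` label lists from the earlier sketch (the exact statistics the released tool reports may differ):

```python
import numpy as np
from collections import Counter
from scipy.stats import chisquare, entropy

def distribution(labels, classes):
    """Turn a list of predicted labels into a normalized class distribution."""
    counts = Counter(labels)
    freqs = np.array([counts[c] for c in classes], dtype=float)
    return freqs / freqs.sum()

classes = ["negative", "positive"]      # example LIP classes
gold = distribution(p_hat_A, classes)
pred = distribution(p_B, classes)
kl = entropy(pred, gold)                # KL(pred || gold)
# Chi-squared test on raw counts (both label lists cover the same test set).
chi2, p_value = chisquare(
    f_obs=[Counter(p_B)[c] for c in classes],
    f_exp=[Counter(p_hat_A)[c] for c in classes],
)
```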
{
"text": "We evaluate four tasks, i.e., combinations of transformations and LIPs; the combination is determined by the availability of the particular property in the respective dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "We use the TrustPilot dataset to study the authorgender LIP during translation. We use the Google translated documents provided by the authors. We are essentially recomputing the results that appear in the work by Hovy et al. (2020) . As shown in Table 1 , our experiments with both TF and SE confirm the one in the paper: it is easy to see that the translations from both Italian and German into English are more likely to be predicted as male (this can be seen by the change in the distribution), breaking the LIP AUTHOR-GENDER. We use the AiT dataset to test the sentiment LIP during translation. We translate the tweets from Spanish to English using DeepL. We use SE as our embedding method. As shown in Figure 3 , SENTIMENT is a LIP that seems to be easily kept during translations. This is expected, as sentiment is a fundamental part of the meaning of a sentence and has to be translated accordingly.",
"cite_spans": [
{
"start": 214,
"end": 232,
"text": "Hovy et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 1",
"ref_id": null
},
{
"start": 708,
"end": 716,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "TrustPilot Translation -LIP: AUTHOR-GENDER",
"sec_num": "5.1"
},
{
"text": "When we apply paraphrasing on the TrustPilot data, the SE classifier on the transformed data predicts more samples as male (see Figure 4 that plots the distribution). KL A,p(B) = 0.018, difference sig- ",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 136,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "TrustPilot Paraphrasing -LIP: AUTHOR-GENDER",
"sec_num": "5.3"
},
{
"text": "We use the HatEval data to study the hatefulness LIP after paraphrasing. We use SE as our embedding method. Figure 5 shows that the SE classifier predicted a high amount of hateful tweets in p(A) (a problem due to the differences between the training and the test in HatEval (Basile et al., 2019; Nozza, 2021) ), this number is drastically reduced in p(B), suggesting that paraphrasing reduces hatefulness, breaking the LIP. As an example of paraphrased text, Savage Indians living up to their reputation was transformed to Indians are living up to their reputation. While the message still internalizes bias, removing the term Savage has reduced its strength. It is important to remark that we are not currently evaluating the quality of the transformation-that is another task. The results we obtain are in part due to the paraphrasing tool we used, 7 but they still indicate a limit in the model capabilities.",
"cite_spans": [
{
"start": 275,
"end": 296,
"text": "(Basile et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 297,
"end": 309,
"text": "Nozza, 2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "HatEval Paraphrasing -LIP: HATEFULNESS",
"sec_num": "5.4"
},
{
"text": "We release an extensible benchmark tool 8 that can be used to quickly assess a model's capability to handle LIPs. The benchmark has been designed to provide a high-level API that can be integrated into any transformation pipeline. Users can access the dataset text, transform, and score it (see Figure 6) . Thus, this pipeline should be very easy to use. The tool allows the users to run the experiments multiple time to also understand the variations that depends on the model themselves. Figure 6 : The benchmark has been designed to provide a high-level API that can be integrated in any transformation pipeline. Users can access the dataset text, transform, and score it.",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 304,
"text": "Figure 6)",
"ref_id": null
},
{
"start": 490,
"end": 498,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Benchmark Tool",
"sec_num": "6"
},
{
"text": "We hope this benchmark tool can be of help, even as an initial prototype, in designing evaluation pipelines meant at studying how LIPs are preserved in text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark Tool",
"sec_num": "6"
},
{
"text": "This paper introduces the concept of Language Invariant Properties, properties in language that should not change during transformations. We also describe a possible evaluation pipeline for LIPs showcasing that some of the language transformation technologies we use suffer from limitations and that they cannot often preserve important LIPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We believe that the study of LIPs can improve the performance of different NLP tasks and to provide better support in this direction we release a benchmark that can help researchers and practitioners understand how well their models handle LIPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The tool we implemented comes with some limitations. We cannot completely remove the learned bias in the classifiers and we always assume that when there are two classifiers, these two perform reliably well on both languages so that we can compare the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "8"
},
{
"text": "To reduce one of the possible sources of bias, the classifiers are currently trained with data coming from a similar distribution to the one used at test time, ideally from the same collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "8"
},
{
"text": "We are aware that our work assumes that it is easy to classify text in different languages even when considering cultural differences and we do not aim to ignore this. We know that cultural differences can make it more difficult to preserve LIPs (Hovy and Yang, 2021): it might not be possible to effectively translate a positive message into a language that does not share the same appreciation/valence for the same things. However, we also believe this is a more general limitation of machine translation. The speaker's intentions are to keep the message consistent -in terms of LIPs -even when translated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical Considerations",
"sec_num": null
},
{
"text": "Example due to Veronica Lynn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://gu.com/technology/2017/oct/ 24/facebook-palestine-israel-translate s-good-morning-attack-them-arrest",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/MilaNLProc/langua ge-invariant-properties",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We slightly abuse of notation here and interpret A has the set of original texts instead of the set of the possible utterances.5 For all other transformations, which stay in the same language, we only need one classifier. (Paraphrasing or summarization can be viewed as a transformation from English to English).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The dataset comes with binary gender, but this is not an indication of our views or the capabilities of the tool.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/tuner007/pega sus_paraphrase 8 This will be a link to a GitHub Repo",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://deepl.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Giovanni Cassani and Amanda Curry for the comments on an early draft of this work. This project has partially received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR), and by Fondazione Cariplo (grant No. 2020-4288, MONICA). The authors are members of the Data and Marketing Insights Unit of the Bocconi Institute for Data Science and Analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "We use a 5 fold cross-validation on the training data to select the best parameters of the logisitic regression. Class weights are balanced and we use L2 Regularization. We search the best C value in [5.0, 2.0, 1.0, 0.5, 0.1]. The solver used is saga.When using TF-IDF we use the following parameters: ngram range=(2, 6), sublinear tf=True, min df=0.001, max df=0.8.Nevertheless, the tool we will share will contain all the parameters (the tool is versioned, so it is easy to track the changes and check which parameters have been used to run the experiments).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Logistic Regression Setup",
"sec_num": null
},
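Under these settings, the model selection step might look like the following scikit-learn sketch (`train_texts`/`train_labels` assumed; this is an illustration, not the tool's exact code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char", ngram_range=(2, 6),
                              sublinear_tf=True, min_df=0.001, max_df=0.8)),
    ("clf", LogisticRegression(penalty="l2", class_weight="balanced",
                               solver="saga", max_iter=5000)),
])
# 5-fold cross-validation over the stated C grid.
search = GridSearchCV(pipe, {"clf__C": [5.0, 2.0, 1.0, 0.5, 0.1]}, cv=5)
search.fit(train_texts, train_labels)
best_clf = search.best_estimator_
```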
{
"text": "We use the same classifier for the original and the transformed text. We generate the representations with SBERT. The model used is paraphrasedistilroberta-base-v2. 9As paraphrase model, we use a fine-tuned Pegasus (Zhang et al., 2020a) model, pegasus paraphrase, 10 that at the time of writing is one of the most downloaded on the HuggingFace Hub. 9 https://sbert.net 10 https://huggingface.co/tuner007/pega sus_paraphrase",
"cite_spans": [
{
"start": 215,
"end": 236,
"text": "(Zhang et al., 2020a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 TrustPilot Paraphrase",
"sec_num": null
},
{
"text": "We translated the tweets using the DeepL APIs. 11 As classifiers we use the cross-lingual model for both languages, each language has its languagespecific classifier. The cross-lingual sentence embedding method used is paraphrase-multilingualmpnet-base-v2, from the SBERT package.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 AiT Translation",
"sec_num": null
},
{
"text": "As translation we use the already translated sentences from the TrustPilot dataset provided by Hovy et al. (2020) . We use both the TF-IDF based and the cross-lingual classifier, as shown in Table 1 , each language has its own languagespecific classifier. The cross-lingual sentence embedding method used is paraphrase-multilingualmpnet-base-v2, from the SBERT package.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "Hovy et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "B.3 TrustPilot Translation",
"sec_num": null
},
{
"text": "We use the same classifier for the original and the transformed text. We generate the representations with SBERT. The model used is paraphrasedistilroberta-base-v2. Users are replaced with @user, hashtags are removed.As paraphrase model, we use a fine-tuned Pegasus (Zhang et al., 2020a) model, pegasus paraphrase, that at the time of writing is one of the most downloaded on the HuggingFace Hub.",
"cite_spans": [
{
"start": 266,
"end": 287,
"text": "(Zhang et al., 2020a)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "B.4 HatEval Paraphrasing",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter",
"authors": [
{
"first": "Valerio",
"middle": [],
"last": "Basile",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Francisco Manuel Rangel",
"middle": [],
"last": "Pardo",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Sanguinetti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 13th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {
"DOI": [
"10.18653/v1/S19-2007"
]
},
"num": null,
"urls": [],
"raw_text": "Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 54-63, Min- neapolis, Minnesota, USA. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the gap between adoption and understanding in NLP",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "3895--3901",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.340"
]
},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi and Dirk Hovy. 2021. On the gap be- tween adoption and understanding in NLP. In Find- ings of the Association for Computational Linguis- tics: ACL-IJCNLP 2021, pages 3895-3901, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SWEAT: Scoring polarization of topics across different corpora",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Nicoli",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Palmonari",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "10065--10072",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.788"
]
},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Marco Marelli, Paolo Nicoli, and Mat- teo Palmonari. 2021a. SWEAT: Scoring polarization of topics across different corpora. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10065-10072, Online and Punta Cana, Dominican Republic. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "FEEL-IT: Emotion and sentiment classification for the Italian language",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Debora Nozza, and Dirk Hovy. 2021b. FEEL-IT: Emotion and sentiment classification for the Italian language. In Proceedings of the Eleventh Workshop on Computational Approaches to Subjec- tivity, Sentiment and Social Media Analysis, pages 76-83, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "XLM-EMO: Multilingual Emotion Prediction in Social Media Text",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Debora Nozza, and Dirk Hovy. 2022. XLM-EMO: Multilingual Emotion Prediction in So- cial Media Text. In Proceedings of the 12th Work- shop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Language (technology) is power: A critical survey of \"bias\" in NLP",
"authors": [
{
"first": "",
"middle": [],
"last": "Su Lin",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5454--5476",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.485"
]
},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in NLP. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5454- 5476, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam Tauman",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "4349--4357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam Tauman Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Ad- vances in Neural Information Processing Systems 29: Annual Conference on Neural Information Process- ing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 4349-4357.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On measuring and mitigating biased inferences of word embeddings",
"authors": [
{
"first": "Sunipa",
"middle": [],
"last": "Dev",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jeff",
"middle": [
"M"
],
"last": "Phillips",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference",
"volume": "2020",
"issue": "",
"pages": "7659--7666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunipa Dev, Tao Li, Jeff M. Phillips, and Vivek Sriku- mar. 2020. On measuring and mitigating biased in- ferences of word embeddings. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7659-7666. AAAI Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Teruko Mitamura, and Eduard Hovy. 2021. A survey of data augmentation approaches for NLP",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Steven",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Gangal",
"suffix": ""
},
{
"first": "Sarath",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Soroush",
"middle": [],
"last": "Chandar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vosoughi",
"suffix": ""
}
],
"year": null,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "968--988",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.84"
]
},
"num": null,
"urls": [],
"raw_text": "Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chan- dar, Soroush Vosoughi, Teruko Mitamura, and Ed- uard Hovy. 2021. A survey of data augmentation approaches for NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 968-988, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Workshop on Widening NLP",
"volume": "",
"issue": "",
"pages": "60--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Workshop on Widening NLP, pages 60-63, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Logic and conversation",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Grice",
"suffix": ""
}
],
"year": 1975,
"venue": "Syntax and semantics. 3: Speech acts",
"volume": "",
"issue": "",
"pages": "41--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Grice. 1975. Logic and conversation. In Syntax and semantics. 3: Speech acts, pages 41-58. New York: Academic Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "you sound just like your father\" commercial machine translation systems include stylistic biases",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Fornaciari",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1686--1690",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.154"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. \"you sound just like your father\" commer- cial machine translation systems include stylistic bi- ases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1686-1690, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "User review sites as a resource for large-scale sociolinguistic studies",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Johannsen",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th International Conference on World Wide Web, WWW '15",
"volume": "",
"issue": "",
"pages": "452--461",
"other_ids": {
"DOI": [
"10.1145/2736277.2741141"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Anders Johannsen, and Anders S\u00f8gaard. 2015. User review sites as a resource for large-scale sociolinguistic studies. In Proceedings of the 24th International Conference on World Wide Web, WWW '15, page 452-461. International World Wide Web Conferences Steering Committee.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The social impact of natural language processing",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Shannon",
"middle": [
"L"
],
"last": "Spruit",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "591--598",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2096"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 591-598, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The importance of modeling social factors of language: Theory and practice",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "588--602",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.49"
]
},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy and Diyi Yang. 2021. The importance of modeling social factors of language: Theory and practice. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 588-602, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text style transfer: A review and experimental evaluation",
"authors": [
{
"first": "Zhiqiang",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Roy Ka-Wei",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Charu",
"suffix": ""
},
{
"first": "Aston",
"middle": [],
"last": "Aggarwal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.12742"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiqiang Hu, Roy Ka-Wei Lee, Charu C Aggarwal, and Aston Zhang. 2020. Text style transfer: A re- view and experimental evaluation. arXiv preprint arXiv:2010.12742.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A general framework for implicit and explicit debiasing of distributional word vector spaces",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "8131--8138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Lauscher, Goran Glava\u0161, Simone Paolo Ponzetto, and Ivan Vuli\u0107. 2020. A general framework for im- plicit and explicit debiasing of distributional word vector spaces. In Proceedings of the AAAI Confer- ence on Artificial Intelligence, pages 8131-8138.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "SemEval-2018 task 1: Affect in tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of The 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "1--17",
"other_ids": {
"DOI": [
"10.18653/v1/S18-1001"
]
},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval- 2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1-17, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exposing the limits of zero-shot cross-lingual hate speech detection",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "907--914",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-short.114"
]
},
"num": null,
"urls": [],
"raw_text": "Debora Nozza. 2021. Exposing the limits of zero-shot cross-lingual hate speech detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 907-914, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Making Sense of Language-Specific BERT Models",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.02912"
]
},
"num": null,
"urls": [],
"raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2020. What the [MASK]? Making Sense of Language-Specific BERT Models. arXiv preprint arXiv:2003.02912.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "HONEST: Measuring hurtful sentence completion in language models",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2398--2406",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.191"
]
},
"num": null,
"urls": [],
"raw_text": "Debora Nozza, Federico Bianchi, and Dirk Hovy. 2021. \"HONEST: Measuring hurtful sentence completion in language models\". In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2398-2406, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Measuring Harmful Sentence Completion in Language Models for LGBTQIA+ Individuals",
"authors": [
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2022,
"venue": "Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Debora Nozza, Federico Bianchi, Anne Lauscher, and Dirk Hovy. 2022. Measuring Harmful Sentence Com- pletion in Language Models for LGBTQIA+ Individ- uals. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and In- clusion. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalu- ation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The impact of indirect machine translation on sentiment classification",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Pintu",
"middle": [],
"last": "Lohar",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hadley",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th Conference of the Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "78--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Poncelas, Pintu Lohar, James Hadley, and Andy Way. 2020. The impact of indirect machine transla- tion on sentiment classification. In Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 78-88, Virtual. Association for Ma- chine Translation in the Americas.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Obfuscating gender in social media writing",
"authors": [
{
"first": "Sravana",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Workshop on NLP and Computational Social Science",
"volume": "",
"issue": "",
"pages": "17--26",
"other_ids": {
"DOI": [
"10.18653/v1/W16-5603"
]
},
"num": null,
"urls": [],
"raw_text": "Sravana Reddy and Kevin Knight. 2016. Obfuscating gender in social media writing. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 17-26, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentence-BERT: Sentence embeddings using Siamese BERTnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Tongshuang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.442"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The woman worked as a babysitter: On biases in language generation",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Premkumar",
"middle": [],
"last": "Natarajan",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3407--3412",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1339"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3407- 3412, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Reliability testing for natural language processing systems",
"authors": [
{
"first": "Samson",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Kathy",
"middle": [],
"last": "Baxter",
"suffix": ""
},
{
"first": "Araz",
"middle": [],
"last": "Taeihagh",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"A"
],
"last": "Bennett",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "4153--4169",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.321"
]
},
"num": null,
"urls": [],
"raw_text": "Samson Tan, Shafiq Joty, Kathy Baxter, Araz Taeihagh, Gregory A. Bennett, and Min-Yen Kan. 2021. Relia- bility testing for natural language processing systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 4153-4169, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "PEGASUS: pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 37th International Conference on Machine Learning",
"volume": "2020",
"issue": "",
"pages": "11328--11339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020a. PEGASUS: pre-training with extracted gap-sentences for abstractive summariza- tion. In Proceedings of the 37th International Con- ference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119, pages 11328- 11339. PMLR.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Bertscore: Evaluating text generation with BERT",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Eval- uating text generation with BERT. In 8th Inter- national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "A visual explanation on how to benchmark LIPs."
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Translation ES-EN on AiT sentiment analysis. Translation respects the LIP SENTIMENT"
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Paraphrasing on HatEval English data. Paraphrasing breaks the LIP HATEFULNESS nificant for X 2 with p < 0.01, resulting in a break of the LIP HATEFULNESS."
}
}
}
}