{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:08:08.996888Z"
},
"title": "Guideline Bias in Wizard-of-Oz Dialogues",
"authors": [
{
"first": "Victor",
"middle": [
"Petr\u00e9n",
"Bach"
],
"last": "Hansen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"country": "Denmark"
}
},
"email": "[email protected]"
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"country": "Denmark"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "NLP models struggle with generalization due to sampling and annotator bias. This paper focuses on a different kind of bias that has received very little attention: guideline bias, i.e., the bias introduced by how our annotator guidelines are formulated. We examine two recently introduced dialogue datasets, CCPE-M and Taskmaster-1, both collected by trained assistants in a Wizard-of-Oz setup. For CCPE-M, we show how a simple lexical bias for the word like in the guidelines biases the data collection. This bias, in effect, leads to poor performance on data without this bias: a preference elicitation architecture based on BERT suffers a 5.3% absolute drop in performance, when like is replaced with a synonymous phrase, and a 13.2% drop in performance when evaluated on out-of-sample data. For Taskmaster-1, we show how the order in which instructions are presented, biases the data collection.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "NLP models struggle with generalization due to sampling and annotator bias. This paper focuses on a different kind of bias that has received very little attention: guideline bias, i.e., the bias introduced by how our annotator guidelines are formulated. We examine two recently introduced dialogue datasets, CCPE-M and Taskmaster-1, both collected by trained assistants in a Wizard-of-Oz setup. For CCPE-M, we show how a simple lexical bias for the word like in the guidelines biases the data collection. This bias, in effect, leads to poor performance on data without this bias: a preference elicitation architecture based on BERT suffers a 5.3% absolute drop in performance, when like is replaced with a synonymous phrase, and a 13.2% drop in performance when evaluated on out-of-sample data. For Taskmaster-1, we show how the order in which instructions are presented, biases the data collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sample bias is a well-known problem in NLP -discussed from Marcus (1982) to Barrett et al. (2019) -and annotator bias has been discussed as far back as Ratnaparkhi (1996) . This paper focuses on a different kind of bias that has received very little attention: guideline bias, i.e., the bias introduced by how our annotator guidelines are formulated.",
"cite_spans": [
{
"start": 59,
"end": 72,
"text": "Marcus (1982)",
"ref_id": "BIBREF16"
},
{
"start": 76,
"end": 97,
"text": "Barrett et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 152,
"end": 170,
"text": "Ratnaparkhi (1996)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Annotation guidelines are used to train annotators, and guidelines are therefore in some sense intended to and designed to prime annotators. What we will refer to in our discussion of guideline bias, is rather the unintended biases that result from how guidelines are formulated, and the examples used in those guidelines. If a treebank annotation guideline focuses overly on parasitic gap constructions, for example, inter-annotator agreement may be higher on those, and annotators may be biased to annotate similar phenomena by analogy with parasitic gaps. Figure 1: The percentage of sentences with the word like in the CCPE-M annotation guidelines (Guidelines), the suggested questions to ask users, in the guidelines (Suggestions), (c) the actual first turns by the assistants (1st turn), and (d) the actual replies by the users (2nd turn). In all cases, more than half of the sentences contain the word like.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on two recently introduced datasets, the Coached Conversational Preference Elicitation corpus (CCPE-M) from Radlinski et al. (2019) , related to the task of conversational recommendation (Christakopoulou et al., 2016; Li et al., 2018) , and Taskmaster-1 , which is a multipurpose, multi-domain dialogue dataset. CCPE-M consists of conversations about movie preferences, and the part of Taskmaster-1, we focus on here, conversations about theatre ticket reservations. Both corpora were collected by having a team of assistants interact with users in a Wizard-of-Oz (WoZ) set-up, i.e. a human plays the role of a digital assistant which engages a user in a conversation about their movie preferences. The assistants were given a set of guidelines in advance, as part of their training, and it is these guidelines that induce biases. In CCPE-M, it is the overwhelming use of the verb like (see Figure 5 ) and its trickle-down effects, we focus on; in Taskmaster-1, the order of the instructions. In fact, the CCPE-M guidelines consist of 324 words, of which 20 (6%) are inflections or derivations of the lemma like: As shown in Figure 5 in the Appendix, more than 50% of the sentences in the guidelines include forms of like! This very strong bias in the guidelines has a clear downstream effect on the assistants that are collecting the data. In their first dialogue turn, the assistants use the word like in 72% of the dialogues. This again biases the users responding to the assistants in the WoZ set-up: In 58% of their first turns, given that the assistant uses a form of the word like, they also use the verb like. We show that this bias leads to overly optimistic estimates of performance. Additionally, we also demonstrate how the guideline affects the user responses through a controlled priming experiment. For Taskmaster-1, we show a similar effect of the guidelines on the collected dialogues.",
"cite_spans": [
{
"start": 117,
"end": 140,
"text": "Radlinski et al. (2019)",
"ref_id": "BIBREF18"
},
{
"start": 196,
"end": 226,
"text": "(Christakopoulou et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 227,
"end": 243,
"text": "Li et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 900,
"end": 908,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1134,
"end": 1142,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
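{
"text": "These statistics can be reproduced with a simple counting script. The following is a minimal sketch, not the authors' code; the field names 'utterances', 'speaker' and 'text', and the ASSISTANT/USER speaker tags, are assumptions about the released CCPE-M JSON schema.\n\nimport json\nimport re\n\ndef first_turn_like_stats(path):\n    # Fraction of dialogues whose first assistant turn contains a form of 'like',\n    # and fraction of user replies that echo 'like' given that the assistant used it.\n    dialogues = json.load(open(path))\n    assistant_like, user_echo = 0, 0\n    for dialogue in dialogues:\n        turns = dialogue['utterances']\n        first_assistant = next((t for t in turns if t['speaker'] == 'ASSISTANT'), None)\n        if first_assistant and re.search('lik(e|es|ed|ing)', first_assistant['text'].lower()):\n            assistant_like += 1\n            reply = next((t for t in turns if t['speaker'] == 'USER'), None)\n            if reply and 'like' in reply['text'].lower():\n                user_echo += 1\n    return assistant_like / len(dialogues), user_echo / max(assistant_like, 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},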
{
"text": "Contributions We introduce the notion of guideline bias and present a detailed analysis of guideline bias in two recently introduced dialogue corpora (CCPE-M and Taskmaster-1). Our main experiments focus on CCPE-M: We show how a simple bias toward the verb like easily leads us to overestimate performance in the wild by showing performance drops on semantically innocent perturbations of the test data, as well as on a new sample of movie preference elicitations that we collected from Reddit for the purpose of this paper. We also show that debiasing the data, improves performance. The CCPE-M provides a very clear example of guideline bias, but other examples can be found, e.g., in Taskmaster-1, which we discuss in \u00a73. We discuss more examples in \u00a74.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first examine the CCPE-M dataset of spoken dialogues about movie preferences. The dialogues in CCPE-M are generated in a Wizard-of-Oz set-up, where the assistants type their input, which is then translated into speech using text-to-speech technologies, at which point users respond by speech. The dialogues were transcribed and annotated by the authors of Radlinski et al. (2019) .",
"cite_spans": [
{
"start": 359,
"end": 382,
"text": "Radlinski et al. (2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in CCPE-M",
"sec_num": "2"
},
{
"text": "We frame the CCPE-M movie preference detection problem as a sentencelevel classification task. If a sentence contains a labeled span, we let this label percolate to the sentence level and be a label of the entire sentence. If a sentence contains multiple unique label spans the sentence is assigned the leftmost label. A sentencelevel label should therefore be interpreted as saying in this sentence, the user elicits a movie or genre preference. Our resulting sentence classification dataset contains five different preference labels, including a NONE label. We shuffle the data at the dialogue-level and divide the dialogues into training/development/test splits using a 80/10/10 ratio, ensuring sentences from the same dialogue will not end up in both training and test data. As the assistants utterances rarely express any preferences, we only include the user utterances to balance the number of negative labels. See Table 2 for statistics regarding the label distribution.",
"cite_spans": [],
"ref_spans": [
{
"start": 922,
"end": 929,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},
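{
"text": "The preprocessing described above can be sketched as follows. This is not the authors' code, and the CCPE-M field names ('utterances', 'speaker', 'segments', 'annotations', 'annotationType', 'start_index') are assumptions about the released JSON schema; splitting utterances into individual sentences is omitted for brevity.\n\nimport random\n\ndef utterance_label(utterance):\n    # Percolate span labels upward; with multiple distinct labelled spans,\n    # keep the leftmost one. Unlabelled utterances become NONE.\n    spans = sorted(utterance.get('segments', []), key=lambda s: s['start_index'])\n    if not spans:\n        return 'NONE'\n    return spans[0]['annotations'][0]['annotationType']\n\ndef build_splits(dialogues, seed=42):\n    # Shuffle and split at the dialogue level (80/10/10), so that sentences from\n    # the same dialogue never end up in both training and test data.\n    random.Random(seed).shuffle(dialogues)\n    n = len(dialogues)\n    splits = dialogues[:int(0.8 * n)], dialogues[int(0.8 * n):int(0.9 * n)], dialogues[int(0.9 * n):]\n    def to_examples(split):\n        # Only user utterances are kept, to balance the number of NONE labels.\n        return [(u['text'], utterance_label(u))\n                for d in split for u in d['utterances'] if u['speaker'] == 'USER']\n    return tuple(to_examples(s) for s in splits)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},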
{
"text": "Perturbations of test data In order to analyse the effects of guideline bias in the CCPE-M dataset, we introduce perturbations of the instances in the test set where like occurs, replacing like with a synonymous word, e.g. love, or paraphrase, e.g. holds dearly. We experiment with four different replacements for like: (i) love, (ii) was incredibly affected by, (iii) have as my all time favorite movie and (iv) am out of this world passionate about. See Figure 2 for an example sentence and its perturbed variants. The perturbations occasionally, but rarely, lead to grammatically incorrect input. 1 We emphasize that even though we increase the length of the sentence, the phrases we replace like with should signal an even stronger statement of preference, which models should be able to pick up on. Since our data consists of informal speech it includes adverbial uses of like; we only replace verb occurrences, relying on SpaCy's POS tagger. 2 We replace 219 instances of the verb like throughout the test set.",
"cite_spans": [
{
"start": 600,
"end": 601,
"text": "1",
"ref_id": null
},
{
"start": 948,
"end": 949,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 456,
"end": 464,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},
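{
"text": "The verb-only replacement can be sketched with spaCy as follows; this is a sketch rather than the authors' script, and the spaCy model name is an assumption.\n\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\nREPLACEMENTS = ['love', 'was incredibly affected by',\n                'have as my all time favorite movie',\n                'am out of this world passionate about']\n\ndef perturb(sentence, replacement):\n    # Replace only verb occurrences of 'like', leaving adverbial and filler uses intact.\n    doc = nlp(sentence)\n    out = []\n    for token in doc:\n        if token.lemma_ == 'like' and token.pos_ == 'VERB':\n            out.append(replacement)\n        else:\n            out.append(token.text)\n        out.append(token.whitespace_)\n    return ''.join(out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},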
{
"text": "Perturbations of train data We also augment the training data to create a less biased resource. Here we adopt a slightly different strategy, also to evaluate a model trained on the debiased training data to the above perturbed test data: We use six paraphrases of the verb like listed in a publicly available thesaurus, 3 none of which overlap with the words used to perturb the test data, and randomly replace verbal like with a probability of 20%. The paraphrases are sampled from a uniform distribution. A total of 401 instances are replaced in the training data using this approach. This is not intended as a solution to guideline bias, but in our experiments below, we show that a model trained on this simple, debiased dataset generalizes better to out of sample data, showing that the bias toward like was in fact one of the reasons that our baseline classifier performed poorly in this domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},
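{
"text": "A sketch of this augmentation, reusing the same spaCy-based verb detection (again not the authors' implementation):\n\nimport random\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\nTHESAURUS = ['derive pleasure from', 'get a kick out of', 'appreciate',\n             'take an interest in', 'cherish', 'find appealing']\n\ndef debias(sentence, rng, p=0.2):\n    # With probability p, replace each verbal 'like' by a paraphrase drawn\n    # uniformly from the thesaurus list; otherwise leave the token unchanged.\n    doc = nlp(sentence)\n    out = []\n    for token in doc:\n        if token.lemma_ == 'like' and token.pos_ == 'VERB' and rng.random() < p:\n            out.append(rng.choice(THESAURUS))\n        else:\n            out.append(token.text)\n        out.append(token.whitespace_)\n    return ''.join(out)\n\n# Usage: rng = random.Random(42); debiased = [debias(s, rng) for s in train_sentences]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},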
{
"text": "Reddit movie preference dataset In addition to the perturbed CCPE-M dataset, we also collect and annotate a challenge dataset from Reddit threads discussing movies for the purpose of preference elicitation. The comments are scraped from Reddit threads with titles such as 'Here's A Simple Question. What's Your Favorite Movie Genre And Why?' or 'What's a movie that you love that everyone else hates?' and mostly consist of top-level comments. These top-level comments typically respond directly the question posed by the thread, and explicitly state preferences. We also include some random samples from discussion trees that contain no preferences, to balance the label distribution slightly. In this data, we observe the word like, but less frequently: The verb like occurred in 15/211 examples. The data is annotated at the sentence level, as described previously, and we follow the methodology described by Radlinski et al. (2019) and identify anchor items such as names of movies or series, genres or categories and then label each sentence according to the preference statements describing said item, if any. The dataset contains roughly 100 comments, that when divided into individual sentences resulting in 211 datapoints. The statistics can be found in the final column of Table 2 . We make the data publicly available. 4",
"cite_spans": [
{
"start": 912,
"end": 935,
"text": "Radlinski et al. (2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 1283,
"end": 1291,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},
{
"text": "Results We evaluate the performance on two different models on the original and perturbed CCPE-M, as well as on our Reddit data: (i) a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) sentence classifier, trained only on CCPE-M, including the embeddings, and (ii) a fine-tuned BERT sentence classification model (Devlin et al., 2018) . For (i), we use two BiLSTM layers (d = 128), randomly initialized embeddings (d = 64), and a dropout rate of 0.5. The model is trained for 45 epochs. For (ii), we use the base, uncased BERT model with the default parameters and finetune for 3 epochs. Model selection is conducted based on performance on the development set. Performance is measured using class-weighted F 1 score. We report results in Table 1 on the various perturbation test sets as well as the Reddit data, when (i) the models are trained on the unchanged CCPE-M data, and (ii) the models are trained on the debiased version CCPE-M thesaurus .",
"cite_spans": [
{
"start": 154,
"end": 188,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF12"
},
{
"start": 317,
"end": 338,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 743,
"end": 750,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},
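{
"text": "A minimal sketch of the BERT fine-tuning set-up, using the Hugging Face transformers library as one possible implementation; the paper does not specify the toolkit, so the API choice, the learning rate and the batching are assumptions beyond what is stated (base uncased model, 3 epochs, 5 labels).\n\nimport torch\nfrom torch.optim import AdamW\nfrom transformers import BertTokenizer, BertForSequenceClassification\n\n# Five sentence-level labels: NONE, MOVIE OR SERIES, MOVIE GENRE OR CATEGORY, PERSON, SOMETHING ELSE.\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=5)\noptimizer = AdamW(model.parameters(), lr=2e-5)\n\ndef finetune(examples, epochs=3):\n    # examples: list of (sentence, label_index) pairs from the training split.\n    model.train()\n    for _ in range(epochs):\n        for text, label in examples:\n            batch = tokenizer(text, return_tensors='pt', truncation=True)\n            loss = model(**batch, labels=torch.tensor([label])).loss\n            loss.backward()\n            optimizer.step()\n            optimizer.zero_grad()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},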
{
"text": "On the original dataset, BERT performs slightly better than the BiLSTM architecture, but the differences are relatively small. Both BiLSTM and BERT suffer a drop in performance, when examples are perturbed and the word like is replaced with synonymous words or phrases. Note how longer substitutions result in a larger drop in performance, e.g. love vs. am out of this world passionate about. We see the drops follow the same pattern for both architectures, while BiLSTM seems a bit more sensitive to our test permutations. Both models do even worse on our newly collected Reddit data. Here, we clearly see the sensitivity of the BiLSTM architecture, which suffers a 30% absolute drop in F 1 ; but even BERT suffers a bit performance drop of more than 13%, when evaluated on a new sample of data. When training on CCPE-M thesaurus , both models become more invariant to our perturbations,with up to 4.5 F 1 improvements for BERT model and 3 F 1 improvements for the BiLSTM, without any loss of performance on the original test set. We also observe improvements on our collected Reddit data, suggesting that the initial drop in performance can be partially explained by guideline bias and not only domain differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},
{
"text": "Controlled priming experiment To establish the priming effect of guidelines in a more controlled setting, we set up a small crowdsourced experiment. We asked turkers to respond to a hypothetical question about movie preferences. For example, turkers were asked to imagine they are in a situation in which they 'are asked what movies' they 'like', and that they like a specific movie, say Harry Potter. The turker may then respond: I've always liked Harry Potter. We collected 40 user responses for each of the priming verbs like, love and prefer, 120 total, and for each of the verbs used to prime the turkers, we compute a probability distribution over most of the verbs in the response vocabulary that are likely to be used to describe a general preference towards something. Figure 3 shows the results of the crowdsourced priming experiments. We can observe that when a specific priming word, such as like, is used, there is a significantly higher probability that the response from the user will contain that same word, illustrating that when keywords in guidelines are heavily overrepresented, the collected data will also reflect this bias. Figure 3: Probability that a verb that describes a preference towards a movie is mentioned, given a priming word by the annotator is mentioned.",
"cite_spans": [],
"ref_spans": [
{
"start": 778,
"end": 787,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},
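{
"text": "The per-prime response distributions can be computed with a simple count, for example as below; the set of preference verbs considered is illustrative, not the full list used in the experiment.\n\nfrom collections import Counter\n\nPREFERENCE_VERBS = ['like', 'love', 'prefer', 'enjoy', 'adore']\n\ndef response_distribution(responses):\n    # responses: list of (priming_verb, response_text) pairs collected from turkers.\n    counts = {}\n    for prime, text in responses:\n        tokens = text.lower().split()\n        mentioned = [v for v in PREFERENCE_VERBS if any(t.startswith(v) for t in tokens)]\n        counts.setdefault(prime, Counter()).update(mentioned)\n    # Normalise counts into a probability distribution per priming verb.\n    return {prime: {verb: c[verb] / sum(c.values()) for verb in c}\n            for prime, c in counts.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence classification",
"sec_num": null
},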
{
"text": "The order in which the goals of the conversation is described to annotators in the guidelines can also bias the order in which these goals are pursued in conversation. Taskmaster-1 contains conversations between a user and an agent where the user seeks to accomplish a goal by, e.g., booking tickets to a movie, which is the domain we focus on. When booking tickets to go see a movie, we can specify the movie title before the theatre, or vice versa, but models may not become robust to such variation if exposed to very biased examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in Taskmaster-1",
"sec_num": "3"
},
{
"text": "Unlike CCPE-M, the Taskmaster-1 dataset was (wisely) collected using two different sets of guidelines to reduce bias, and we can therefore investigate the downstream effects of of the bias induced by the two sets of guidelines. To quantify the guideline bias, we compute the probability that a goal x 1 is mentioned before another one x 2 in an dialogue, given that x 1 precedes x 2 in the guidelines. We only consider dialogues where all goals are mentioned at least once, i.e., \u223c 900 in total; the conversations are then divided into two, based on the guideline that was used. Figure 4 shows the heat map of these relative probabilities. The guidelines have a clear influence on the final structure of the conversation, i.e. if the movie title (x 1 ) is mentioned before the city (x 2 ) in the guideline, there is a high probability (0.75) that the same is true in the dialogues. If they are not, the probability is much lower (0.57).",
"cite_spans": [],
"ref_spans": [
{
"start": 579,
"end": 587,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Bias in Taskmaster-1",
"sec_num": "3"
},
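{
"text": "The order statistic can be computed as follows. This is a sketch under the assumption that each dialogue has been reduced to the sequence of goal slots (e.g. 'name.movie', 'name.theater') in the order they are first mentioned; the slot names are illustrative.\n\ndef order_probability(dialogues, x1, x2):\n    # P(x1 is first mentioned before x2), over dialogues mentioning both goals.\n    before, total = 0, 0\n    for goal_sequence in dialogues:\n        if x1 in goal_sequence and x2 in goal_sequence:\n            total += 1\n            if goal_sequence.index(x1) < goal_sequence.index(x2):\n                before += 1\n    return before / total if total else 0.0\n\n# Compare the two guideline groups, e.g.:\n# order_probability(dialogues_guideline_a, 'name.movie', 'name.city')\n# order_probability(dialogues_guideline_b, 'name.movie', 'name.city')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias in Taskmaster-1",
"sec_num": "3"
},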
{
"text": "Plank et al. (2014) present an approach to correcting for adjudicator biases. Bender and Friedman (2018) raise the possibility of (demographic) bias in annotation guidelines, but do not provide a means for detecting such biases or show any existing datasets to be biased in this way. Amidei et al. (2018) also discuss the possibility, but in a footnote. Geva et al. (2019) investigates how crowdsourcing practices can introduce annotator biases in NLU datasets and therefore result in models overestimating confidence on samples from annotators that have contributed to both the training and test sets. Liu et al. (2018) , on the other hand, discuss a case in which annotation guidelines are biased by being developed for a particular domain and not easily applicable to another. Cohn and Specia (2013) explores how models can learn from annotator bias in a somewhat opposite scenario from ours, e.g. when annotators deviate from annotation guidelines and inject their own bias into the data, and by using multi-task learning to train annotator specific models, they improve performance by leveraging annotation (dis)agreements. There are, to the best of our knowledge, relatively few examples of researchers identifying concrete guideline-related bias in benchmark datasets: Dickinson (2003) suggest that POS annotation in the English Penn Treebank is biased by the vagueness of the annotation guidelines in some respects. Friedrich et al. (2015) report a similar guideline-induced bias in the ACE datasets. Dandapat et al. (2009) discuss an interesting bias in a Bangla/Hindi POS-annotated corpus arising from a decision in the annotation guidelines to include two labels for when annotators were uncertain, but not specifying in detail how these labels were to be used. Goldberg and Elhadad (2010) define structural bias for dependency parsing and how it can be attributed to bias in individual datasets, among other factors, originating from their annotation schemes. Ibanez and Ohtani (2014) report a similar case, where ambiguity in how special categories were defined, led to bias in a corpus of Spanish learner errors.",
"cite_spans": [
{
"start": 284,
"end": 304,
"text": "Amidei et al. (2018)",
"ref_id": "BIBREF0"
},
{
"start": 354,
"end": 372,
"text": "Geva et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 603,
"end": 620,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 780,
"end": 802,
"text": "Cohn and Specia (2013)",
"ref_id": "BIBREF5"
},
{
"start": 1276,
"end": 1292,
"text": "Dickinson (2003)",
"ref_id": "BIBREF8"
},
{
"start": 1424,
"end": 1447,
"text": "Friedrich et al. (2015)",
"ref_id": "BIBREF9"
},
{
"start": 1509,
"end": 1531,
"text": "Dandapat et al. (2009)",
"ref_id": "BIBREF6"
},
{
"start": 1773,
"end": 1800,
"text": "Goldberg and Elhadad (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "In this work, we examined guideline bias in two newly presented WoZ style dialogue corpora: We showed how a lexical bias for the word like in the annotation guidelines of CCPE-M, through a controlled priming experiment leads to a bias for this word in the dialogues, and that models trained on this corpus are sensitive to the absence of this verb. We provided a new test dataset for this task, collected from Reddit, and show how a debiased model performs better on this dataset, suggesting the 13% drop is in part the result of guideline bias. We showed a similar bias in Taskmaster-1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion & Conclusion",
"sec_num": "5"
},
{
"text": "Our models are generally robust to such variation, and, as we will see in our experiments below, the perturbations are less harmful than collecting a new sample of evaluation data and evaluating your model on this sample.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://spacy.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://thesaurus.com. The paraphrases consists of: (1) derive pleasure from, (2) get a kick out of, (3) appreciate, (4) take an interest in, (5) cherish, (6) find appealing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/vpetren/guideline_ bias",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was funded by the Innovation Fund Denmark and Topdanmark.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "General Instructions The goal of this type of dialogue is for you to get the users to explain their movie preferences: The KIND of movies they like and dislike and WHY. We really want to end up finding out WHY they like what they like movie AND why the DON'T like what they don't like. We want them to take lots of turns to explain these things to you.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Appendices",
"sec_num": null
},
{
"text": "We want users to discuss likes and dislikes for kinds of movies rather than just about specific movies. (But we trigger these more general preferences based on remembering certain titles.) You may bring up particular movie titles in order to get them thinking about why they like or dislike that kind of thing. Do not bring up particular directors, actors, or genres. For each session do the following steps:1. Start with a normal introduction: Hello. I'd like to discuss your movie preferences.2. Ask them what kind of movies they like and why they generally like that kind of movie.3. Ask them for a particular movie name they liked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Important",
"sec_num": null
},
{
"text": "Ask them what about that KIND of movie they liked.(get a couple of reasons at least -let them go on if they choose)5. Ask them to name a particular movie they did not like.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "4.",
"sec_num": null
},
{
"text": ". Ask them what about that movie they did not like. (get a couple of reasons at least or let them go on if they choose)7. Now choose a movies using the movie generator link below. Ask them if they liked that movie (if they haven't seen it: (a) ask if they have heard of it. If so, ask if they would see it (b) then choose another that they have seen to ask about). Once you find a movie from the list they have seen, ask them why they liked or disliked that kind of movie (get a couple of reasons).8. Finally, end the conversation gracefully ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "6",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rethinking the agreement in human evaluation tasks",
"authors": [
{
"first": "Jacopo",
"middle": [],
"last": "Amidei",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Piwek",
"suffix": ""
},
{
"first": "Alistair",
"middle": [],
"last": "Willis",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacopo Amidei, Paul Piwek, and Alistair Willis. 2018. Rethinking the agreement in human evaluation tasks. In COLING.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Adversarial removal of demographic attributes revisited",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "Yanai",
"middle": [],
"last": "Elazar",
"suffix": ""
},
{
"first": "Desmond",
"middle": [],
"last": "Elliott",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Barrett, Yova Kementchedjhieva, Yanai Elazar, Desmond Elliott, and Anders S\u00f8gaard. 2019. Adver- sarial removal of demographic attributes revisited. In EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Data statements for natural language processing: Toward mitigating system bias and enabling better science",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Batya",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2018,
"venue": "TACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Bender and Batya Friedman. 2018. Data state- ments for natural language processing: Toward mit- igating system bias and enabling better science. In TACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Taskmaster-1: Toward a realistic and diverse dialog dataset",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Byrne",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Krishnamoorthi",
"suffix": ""
},
{
"first": "Chinnadhurai",
"middle": [],
"last": "Sankar",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Goodrich",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Duckworth",
"suffix": ""
},
{
"first": "Semih",
"middle": [],
"last": "Yavuz",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Dubey",
"suffix": ""
},
{
"first": "Kyu-Young",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Cedilnik",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4516--4525",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1459"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Ben Goodrich, Daniel Duckworth, Semih Yavuz, Amit Dubey, Kyu-Young Kim, and Andy Cedilnik. 2019. Taskmaster-1: To- ward a realistic and diverse dialog dataset. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4516- 4525, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards conversational recommender systems",
"authors": [
{
"first": "Konstantina",
"middle": [],
"last": "Christakopoulou",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Radlinski",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16",
"volume": "",
"issue": "",
"pages": "815--824",
"other_ids": {
"DOI": [
"10.1145/2939672.2939746"
]
},
"num": null,
"urls": [],
"raw_text": "Konstantina Christakopoulou, Filip Radlinski, and Katja Hofmann. 2016. Towards conversational rec- ommender systems. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '16, pages 815-824, New York, NY, USA. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Modelling annotator bias with multi-task Gaussian processes: An application to machine translation quality estimation",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "32--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn and Lucia Specia. 2013. Modelling an- notator bias with multi-task Gaussian processes: An application to machine translation quality estimation. In Proceedings of the 51st Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 32-42, Sofia, Bulgaria. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Complex linguistic annotation -no easy way out! a case from bangla and hindi pos labeling tasks",
"authors": [
{
"first": "Sandipan",
"middle": [],
"last": "Dandapat",
"suffix": ""
},
{
"first": "Priyanka",
"middle": [],
"last": "Biswas",
"suffix": ""
}
],
"year": 2009,
"venue": "LAW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandipan Dandapat, Priyanka Biswas, Monojit Choud- hury, and Kalika Bali. 2009. Complex linguistic an- notation -no easy way out! a case from bangla and hindi pos labeling tasks. In LAW.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Detecting errors in part-ofspeech annotation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dickinson",
"suffix": ""
}
],
"year": 2003,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dickinson. 2003. Detecting errors in part-of- speech annotation. In EACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Annotating genericity: a survey, a scheme, and a corpus",
"authors": [
{
"first": "Annemarie",
"middle": [],
"last": "Friedrich",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Melissa",
"middle": [
"Peate"
],
"last": "S\u00f8rensen",
"suffix": ""
},
{
"first": "Manfred",
"middle": [],
"last": "Pinkal",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annemarie Friedrich, Alexis Palmer, Melissa Peate S\u00f8rensen, and Manfred Pinkal. 2015. Annotating genericity: a survey, a scheme, and a corpus. In LAW.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Are we modeling the task or the annotator? an investigation of annotator bias in natural language understanding datasets",
"authors": [
{
"first": "Mor",
"middle": [],
"last": "Geva",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1161--1166",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1107"
]
},
"num": null,
"urls": [],
"raw_text": "Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an inves- tigation of annotator bias in natural language under- standing datasets. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1161-1166, Hong Kong, China. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Inspecting the structural biases of dependency parsing algorithms",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "234--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Goldberg and Michael Elhadad. 2010. Inspect- ing the structural biases of dependency parsing al- gorithms. In Proceedings of the Fourteenth Confer- ence on Computational Natural Language Learning, pages 234-242, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735- 1780.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Annotating article errors in spanish learner texts: design and evaluation of an annotation scheme",
"authors": [],
"year": 2014,
"venue": "PACLIC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Del Pilar Valverde Ibanez and Akira Ohtani. 2014. Annotating article errors in spanish learner texts: design and evaluation of an annotation scheme. In PACLIC.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Towards deep conversational recommendations",
"authors": [
{
"first": "Raymond",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Samira",
"middle": [
"Ebrahimi"
],
"last": "Kahou",
"suffix": ""
},
{
"first": "Hannes",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michalski",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Pal",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31",
"volume": "",
"issue": "",
"pages": "9725--9735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommenda- tions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, ed- itors, Advances in Neural Information Processing Systems 31, pages 9725-9735. Curran Associates, Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parsing tweets into universal dependencies",
"authors": [
{
"first": "Yijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, and Noah Smith. 2018. Parsing tweets into universal dependencies. In NAACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Building non-normative systems -the search for robustness",
"authors": [
{
"first": "Mitch",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1982,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitch Marcus. 1982. Building non-normative systems -the search for robustness. In ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning part-of-speech taggers with inter-annotator agreement loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2014,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In EACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Coached conversational preference elicitation: A case study in understanding movie preferences",
"authors": [
{
"first": "Filip",
"middle": [],
"last": "Radlinski",
"suffix": ""
},
{
"first": "Krisztian",
"middle": [],
"last": "Balog",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Byrne",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Krishnamoorthi",
"suffix": ""
}
],
"year": 2019,
"venue": "SigDial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filip Radlinski, Krisztian Balog, Bill Byrne, and Karthik Krishnamoorthi. 2019. Coached conversa- tional preference elicitation: A case study in under- standing movie preferences. In SigDial.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A maximum entropy model for part-of-speech tagging",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "incredibly affected by] Terminator 2 I [have as my all \ufffdme favorite movie] Terminator 2 I [am out of this world passionate about] Terminator 2 Original Perturbed Example of test sentence permutations.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "mention given priming word:",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": "Probability that a guideline goal x 1 is mentioned before another one x 2 in an actual dialogue, given that x 1 comes before x 2 in the agent's guideline.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>Label</td><td colspan=\"2\">train dev</td><td colspan=\"2\">test Reddit</td></tr><tr><td>NONE</td><td colspan=\"2\">4508 535</td><td>545</td><td>60</td></tr><tr><td>MOVIE OR SERIES</td><td colspan=\"2\">2736 346</td><td>313</td><td>119</td></tr><tr><td colspan=\"3\">MOVIE GENRE OR CATEGORY 1274 169</td><td>166</td><td>20</td></tr><tr><td>PERSON</td><td>66</td><td>6</td><td>9</td><td>11</td></tr><tr><td>SOMETHING ELSE</td><td>21</td><td>0</td><td>0</td><td>1</td></tr><tr><td>total</td><td colspan=\"3\">8605 1056 1033</td><td>211</td></tr></table>",
"num": null,
"text": "Comparison of in-sample F 1 performance, performance on the same data with like replaced with phrases with similar meaning, and performance on Reddit data. Results are reported for training models on biased CCPE-M as well as a debiased CCPE-M thesaurus which improves model performance in almost all cases."
},
"TABREF2": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": ""
}
}
}
}