{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:21:31.726788Z"
},
"title": "Identifying and Resolving Annotation Changes for Natural Language Understanding",
"authors": [
{
"first": "Jose",
"middle": [
"Garrido"
],
"last": "Ramas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Amazon Alexa AI",
"location": {
"country": "Germany"
}
},
"email": "[email protected]"
},
{
"first": "Giorgio",
"middle": [],
"last": "Pessot",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Abdalghani",
"middle": [],
"last": "Abujabal",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Martin",
"middle": [],
"last": "Rajman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "EPFL",
"location": {
"settlement": "Lausanne",
"country": "Switzerland"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Annotation conflict resolution is crucial towards building machine learning models with acceptable performance. Past work on annotation conflict resolution had assumed that data is collected at once, with a fixed set of annotators and fixed annotation guidelines. Moreover, previous work dealt with atomic labeling tasks. In this paper, we address annotation conflict resolution for Natural Language Understanding (NLU), a structured prediction task, in a real-world setting of commercial voice-controlled personal assistants, where (1) regular data collections are needed to support new and existing functionalities, (2) annotation guidelines evolve over time, and (3) the pool of annotators changes across data collections. We devise an approach combining information-theoretic measures and a supervised neural model to resolve conflicts in data annotation. We evaluate our approach both intrinsically and extrinsically on a real-world dataset with 3.5M utterances of a commercial dialog system in German. Our approach leads to dramatic improvements over a majority baseline especially in contentious cases. On the NLU task, our approach achieves 2.75% error reduction over a no-resolution baseline.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Annotation conflict resolution is crucial towards building machine learning models with acceptable performance. Past work on annotation conflict resolution had assumed that data is collected at once, with a fixed set of annotators and fixed annotation guidelines. Moreover, previous work dealt with atomic labeling tasks. In this paper, we address annotation conflict resolution for Natural Language Understanding (NLU), a structured prediction task, in a real-world setting of commercial voice-controlled personal assistants, where (1) regular data collections are needed to support new and existing functionalities, (2) annotation guidelines evolve over time, and (3) the pool of annotators changes across data collections. We devise an approach combining information-theoretic measures and a supervised neural model to resolve conflicts in data annotation. We evaluate our approach both intrinsically and extrinsically on a real-world dataset with 3.5M utterances of a commercial dialog system in German. Our approach leads to dramatic improvements over a majority baseline especially in contentious cases. On the NLU task, our approach achieves 2.75% error reduction over a no-resolution baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Supervised learning is ubiquitous as a form of learning in NLP Finkel et al., 2005; Rajpurkar et al., 2016) , but supervised models require access to high-quality and manually annotated data so that they perform reasonably. It is often assumed that (1) such annotated data is collected once and then used to train and test various models, (2) the pool of annotators is fixed, and (3) annotation guidelines are fixed (Benikova et al., 2014; Manning, 2011; Poesio and Artstein, 2005; Versley, 2006) . In real-world NLP applications e.g., voice-controlled assistants such as Google Home or Amazon Alexa, such assumptions are unrealistic. The assistant is continuously evolving and extended with new functionalities, and hence, changes to annotation guidelines are frequent. The assistant also needs to adapt to language variations over time, both lexical and semantic. Therefore, annotated data needs to be collected regularly i.e., new collections of data at different time points, where the same utterance text can be re-annotated over time. Additionally, the set of annotators might change across collections. In this work, we tackle the problem of resolving annotation conflicts in a real-world scenario of a commercial personal assistant.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "Finkel et al., 2005;",
"ref_id": "BIBREF8"
},
{
"start": 84,
"end": 107,
"text": "Rajpurkar et al., 2016)",
"ref_id": "BIBREF23"
},
{
"start": 416,
"end": 439,
"text": "(Benikova et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 440,
"end": 454,
"text": "Manning, 2011;",
"ref_id": "BIBREF17"
},
{
"start": 455,
"end": 481,
"text": "Poesio and Artstein, 2005;",
"ref_id": "BIBREF21"
},
{
"start": 482,
"end": 496,
"text": "Versley, 2006)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To minimize annotation conflicts, the same data point is often labeled by multiple annotators and the annotation with unanimous agreement, or the one with majority votes is deemed correct (Benikova et al., 2014; Bobicev and Sokolova, 2017; Brants, 2000) . While such measures ensure the quality of annotations within the same batch, they cannot ensure it across batches at different time points, particularly when the same data point is present in different batches with inevitable changes to annotation guidelines. For detecting and resolving conflicts, two main methodologies have been explored; Bayesian modeling and training a supervised classification model (Hovy et al., 2013; Plank et al., 2014; Snow et al., 2008; Versley and Steen, 2016; Volokh and Neumann, 2011) . Both methodologies make certain assumptions about the setting, for example, annotation guidelines and the pool of annotators are fixed, which is not the case for our use case. Additionally, while Bayesian modeling is reasonably efficient for small datasets, it is prohibitively expensive for large-scale datasets with millions of utterances. We adopt a combination of information-theoretic measures and a classification neural model to detect and resolve conflicts.",
"cite_spans": [
{
"start": 188,
"end": 211,
"text": "(Benikova et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 212,
"end": 239,
"text": "Bobicev and Sokolova, 2017;",
"ref_id": "BIBREF3"
},
{
"start": 240,
"end": 253,
"text": "Brants, 2000)",
"ref_id": "BIBREF4"
},
{
"start": 663,
"end": 682,
"text": "(Hovy et al., 2013;",
"ref_id": "BIBREF12"
},
{
"start": 683,
"end": 702,
"text": "Plank et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 703,
"end": 721,
"text": "Snow et al., 2008;",
"ref_id": "BIBREF26"
},
{
"start": 722,
"end": 746,
"text": "Versley and Steen, 2016;",
"ref_id": "BIBREF29"
},
{
"start": 747,
"end": 772,
"text": "Volokh and Neumann, 2011)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NLU is a key component in language-based applications, and is defined as the combination of: (1) An Intent Classifier (IC), which classifies an utterance into one of N intent labels (e.g. PlayMusic), and (2) A slot labeling (SL) model, which classifies Figure 1 : An example utterance with two conflicting annotations, a 1 and a 2 . The phrase turn on has two conflicting slot labels. AT stands for ActionTrigger. Non-entities are labeled with O (i.e., Other).",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "tokens into slot types, out of a predefined set (e.g. SongName) (Goo et al., 2018; Jolly et al., 2020) . An example utterance is shown in Figure 1 , with two conflicting annotations. In this paper, we consider the task of NLU for personal assistants and assume that utterances arrive at different points in time, and that the annotation guideline evolves over time. The same utterance text, e.g., the one shown in Figure 1 , often occurs multiple times across collections, which gives the opportunity to conflicting annotations. Moreover, changes to the annotation guidelines over time lead to more conflicts.",
"cite_spans": [
{
"start": 64,
"end": 82,
"text": "(Goo et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 83,
"end": 102,
"text": "Jolly et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 138,
"end": 146,
"text": "Figure 1",
"ref_id": null
},
{
"start": 414,
"end": 422,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given an NLU dataset with utterances having multiple, possibly conflicting annotations (IC and SL), our goal is to find the right annotation for each such utterance. To this end, we first detect guideline changes using a maximum information gain cut (Section 3.3). Then we compute the normalized entropy of the remaining annotations after dropping the ones before a guideline change. In case this entropy is low, we simply use majority voting, otherwise, we rely on a classifier neuralbased model to resolve the conflict (Section 3.4). Our approach is depicted in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 564,
"end": 572,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our approach both intrinsically and extrinsically, and show improved performance over baselines including random resolution or no resolution in six domains, as detailed in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Annotation conflicts could emerge due to different reasons, be it imprecision in the annotation guideline (Manning, 2011; van Deemter and Kibble, 2000) , vagueness in the meaning of the underlying text (Poesio and Artstein, 2005; Recasens et al., 2011 Recasens et al., , 2010 Versley, 2006) , or annotators being careless or inexperienced (Manning, 2011; Hovy et al., 2013) . Manning et al. (2011) report, on the WSJ Part-of-Speech (POS) corpus, that 28.0% of POS tagging errors stem from imprecise annotation guideline that caused inconsistent annotations, while 15.5% of the errors are due to wrong gold standard, which could be attributed to careless or inexperienced annotators. In our case, conflicts could occur due to changes to the annotation guidelines and having different, possibly inexperienced, annotators within and across data collections.",
"cite_spans": [
{
"start": 106,
"end": 121,
"text": "(Manning, 2011;",
"ref_id": "BIBREF17"
},
{
"start": 122,
"end": 151,
"text": "van Deemter and Kibble, 2000)",
"ref_id": "BIBREF27"
},
{
"start": 202,
"end": 229,
"text": "(Poesio and Artstein, 2005;",
"ref_id": "BIBREF21"
},
{
"start": 230,
"end": 251,
"text": "Recasens et al., 2011",
"ref_id": "BIBREF25"
},
{
"start": 252,
"end": 275,
"text": "Recasens et al., , 2010",
"ref_id": "BIBREF24"
},
{
"start": 276,
"end": 290,
"text": "Versley, 2006)",
"ref_id": "BIBREF28"
},
{
"start": 339,
"end": 354,
"text": "(Manning, 2011;",
"ref_id": "BIBREF17"
},
{
"start": 355,
"end": 373,
"text": "Hovy et al., 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Past work on conflict resolution has assumed that data is collected once and then used for model training and testing. Consequently, the proposed methods to detect and resolve conflicts are geared towards this setting (Benikova et al., 2014; Manning, 2011; Poesio and Artstein, 2005; Recasens et al., 2011 Recasens et al., , 2010 van Deemter and Kibble, 2000; Versley, 2006) . In our scenario, we deal with an ever-growing data which is collected across different data collections at different time points. This increases the likelihood of conflicts especially with frequent changes to the annotation guideline. In Dickinson and Meurers (2003) , an approach is proposed to automatically detect annotation errors in gold standard annotations for POS tagging using n-gram tag variation i.e., looking at n-grams occurring in the corpus with multiple tagging.",
"cite_spans": [
{
"start": 218,
"end": 241,
"text": "(Benikova et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 242,
"end": 256,
"text": "Manning, 2011;",
"ref_id": "BIBREF17"
},
{
"start": 257,
"end": 283,
"text": "Poesio and Artstein, 2005;",
"ref_id": "BIBREF21"
},
{
"start": 284,
"end": 305,
"text": "Recasens et al., 2011",
"ref_id": "BIBREF25"
},
{
"start": 306,
"end": 329,
"text": "Recasens et al., , 2010",
"ref_id": "BIBREF24"
},
{
"start": 330,
"end": 359,
"text": "van Deemter and Kibble, 2000;",
"ref_id": "BIBREF27"
},
{
"start": 360,
"end": 374,
"text": "Versley, 2006)",
"ref_id": "BIBREF28"
},
{
"start": 615,
"end": 643,
"text": "Dickinson and Meurers (2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Bayesian modeling is often used to model how reliable each annotator is and to correct/resolve wrong annotations (Hovy et al., 2013; Snow et al., 2008) . In Hovy et al. (2013) , they propose MACE, an item-response based model, to identify spammer annotators and to predict the correct underlying labels. Applying such models is prohibitively expensive in our case due to the large amount of utterances we deal with. Additionally, our annotator pool changes over time. A different line of work has explored resolving conflicts in a supervised classification setting, similar to our approach for resolving high normalized entropy conflicts. Volokh and Neumann (2011) use an ensemble of two off-the-shelf parsers that re-annotate the training set to detect and resolve conflicts in dependency treebanks. Versley et al. (2016) use a similar approach on out-of-domain treebanks. Finally, Plank et al. (2014) introduce the inter-annotator agreement loss to ensure consistent annotations for POS tagging.",
"cite_spans": [
{
"start": 113,
"end": 132,
"text": "(Hovy et al., 2013;",
"ref_id": "BIBREF12"
},
{
"start": 133,
"end": 151,
"text": "Snow et al., 2008)",
"ref_id": "BIBREF26"
},
{
"start": 157,
"end": 175,
"text": "Hovy et al. (2013)",
"ref_id": "BIBREF12"
},
{
"start": 639,
"end": 664,
"text": "Volokh and Neumann (2011)",
"ref_id": "BIBREF30"
},
{
"start": 801,
"end": 822,
"text": "Versley et al. (2016)",
"ref_id": "BIBREF29"
},
{
"start": 883,
"end": 902,
"text": "Plank et al. (2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Intent classification and slot labeling are two fundamental tasks in spoken language understanding, dating back to early 90's (Price, 1990) . With the rise of task-oriented personal assistants, the two tasks got more attention and progress has been made by applying various deep learning techniques (Abujabal and Gaspers, 2019; Goo et al., 2018; Conflicting annotations Max IG Cut NH",
"cite_spans": [
{
"start": 126,
"end": 139,
"text": "(Price, 1990)",
"ref_id": "BIBREF22"
},
{
"start": 299,
"end": 327,
"text": "(Abujabal and Gaspers, 2019;",
"ref_id": "BIBREF0"
},
{
"start": 328,
"end": 345,
"text": "Goo et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 346,
"end": 346,
"text": "",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Majority Voting LSTM-based model Figure 2 : Our approach for conflict resolution. Given conflicting annotations, we first use the Max Information Gain (IG) Cut to detect changes in annotation guidelines. Then, low entropy conflicts are resolved using majority voting. High entropy conflicts are resolved using a classifier LSTM-based model. Jolly et al., 2020; Mesnil et al., 2013; Zhang and Wang, 2016 ). While we focus on resolving annotation conflicts for NLU with linear labeling i.e., intent and slot labels, our approach can be still used for other more complex tree-based labeling e.g., labeling dependency parses or ontology trees (Chen and Manning, 2014) , with the minor change of replacing the task-specific neural LSTM-based classification model. We plan to investigate this in the future.",
"cite_spans": [
{
"start": 341,
"end": 360,
"text": "Jolly et al., 2020;",
"ref_id": "BIBREF13"
},
{
"start": 361,
"end": 381,
"text": "Mesnil et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 382,
"end": 402,
"text": "Zhang and Wang, 2016",
"ref_id": "BIBREF33"
},
{
"start": 639,
"end": 663,
"text": "(Chen and Manning, 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 33,
"end": 41,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "High",
"sec_num": null
},
{
"text": "3 Annotation Conflict Resolution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High",
"sec_num": null
},
{
"text": "Given multiple conflicting annotations of an utterance, our goal is to find the right annotation. We assume that annotations arrive at different points in time and that the same utterance can be reannotated over time. Moreover, we assume that annotators might differ both within and across data collections, that each annotation is time stamped, and that there is always one correct annotation. Our pipeline for conflict resolution is depicted in Figure 2 . Given an utterance with conflicting annotations, we first detect guideline changes using a maximum information gain cut. Then we compute the normalized entropy of the remaining annotations i.e., without the annotations before a guideline change. In case this entropy is low, we simply use majority voting, otherwise, we rely on a classifier model to resolve the conflict. A natural choice to easily resolving annotation conflicts is to use majority voting. However, we argue that this is not sufficient for our use case, where (1) regular data collection and annotation are required at different time points, and (2) changes to annotation guideline are frequent. We use the normalized entropy to detect whether there is high or low disagreement among annotations. In the extreme case where the normalized entropy is 1, majority voting gives a random output and any model that performs better than random will be better than majority voting in resolving conflicts. In our experiments we show that, for high normalized entropy values, the classifier model significantly outperforms majority voting.",
"cite_spans": [],
"ref_spans": [
{
"start": 447,
"end": 455,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "Note that our conflict resolution pipeline does not drop utterances with wrong annotations, but rather replaces the wrong annotations with the correct ones. We do so to avoid changing the data distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "We apply our pipeline to training data only. The test set is of higher quality compared to the train set as each collection of test set data is annotated multiple times and we use the most recent test set collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
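{
"text": "As an illustration only (not code from the paper), the following minimal Python sketch strings the three steps together; max_ig_cut, normalized_entropy, and model_score are hypothetical helpers standing in for Sections 3.2-3.4, and the thresholds match the values reported in Section 4.2:

```python
from collections import Counter

IG_0, NH_0 = 0.4, 0.75  # thresholds reported in Section 4.2

def resolve(annotations, max_ig_cut, normalized_entropy, model_score):
    # Resolve one utterance's conflicting annotations (illustrative sketch).
    # `annotations` is a date-ordered list of (date, label) pairs; the three
    # helper functions are hypothetical stand-ins for Sections 3.2-3.4.
    # 1) Detect a guideline change and drop annotations made before the cut date.
    ig, cut_date = max_ig_cut(annotations)
    if ig >= IG_0:
        annotations = [(d, a) for d, a in annotations if d > cut_date]
    labels = [a for _, a in annotations]
    # 2) Low-entropy conflicts: resolve by majority voting.
    if normalized_entropy(labels) < NH_0:
        return Counter(labels).most_common(1)[0][0]
    # 3) High-entropy conflicts: pick the annotation the trained model scores highest.
    return max(set(labels), key=model_score)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},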
{
"text": "Entropy measures the uncertainty of a probability distribution (Yang and Qiu, 2014) . Given an utterance present N times in the dataset and annotated in K distinct ways, each occurring n i times such that K i=1 n i = N , we define the normalized empirical entropy of the list of conflicting annotations A, N H(A) as:",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "(Yang and Qiu, 2014)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Entropy",
"sec_num": "3.2"
},
{
"text": "N H(A) = \u2212 K i=1 n i N * log ( n i N ) log K , f or K > 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Entropy",
"sec_num": "3.2"
},
{
"text": "For example, assume an utterance u with three distinct annotations; a 1 , a 2 and a 3 . Then, the list A corresponds to {a 1 , a 2 , a 3 }, K = 3, and p i of each annotation corresponds to its relative frequency in the dataset ( n i N ) (Mahendra et al., 2014) . In this work, we harness normalized entropy (NH) to determine whether majority voting should be used. NH is a value between 0 and 1, where the higher it is, the harder the conflict. In the edge case of a uniform distribution, where NH is 1, majority voting gives a random output. Therefore, in such cases, we do not rely on majority voting for conflict resolution but rather on a classification model. We use the normalized entropy over entropy as the latter increases as K increases when the distribution is uniform. For example, assume K = 3 and distribution is uniform, then entropy is H = log 3, and N H = 1. If K = 2 and distribution is uniform, then H = log 2 and N H = 1, and so on. When the distribution is uniform (and thus majority voting will be outperformed by a model regardless of K), NH takes its maximum value of 1, while H increases as K increases (Kv\u00e5lseth, 2017).",
"cite_spans": [
{
"start": 237,
"end": 260,
"text": "(Mahendra et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Entropy",
"sec_num": "3.2"
},
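{
"text": "For concreteness, a minimal Python sketch of NH (our illustration, not the authors' code), treating an annotation simply as a hashable label:

```python
import math
from collections import Counter

def normalized_entropy(labels):
    # Normalized empirical entropy NH(A) of a list of conflicting annotations
    # (Section 3.2). Illustrative sketch; returns 0.0 when there is no conflict (K == 1).
    counts = Counter(labels)
    n_total = len(labels)
    k = len(counts)
    if k <= 1:
        return 0.0
    entropy = -sum((n / n_total) * math.log(n / n_total) for n in counts.values())
    return entropy / math.log(k)

# A uniform distribution gives NH = 1 regardless of K (labels here are just ids).
assert abs(normalized_entropy([1, 2, 3]) - 1.0) < 1e-9  # K = 3, H = log 3
assert abs(normalized_entropy([1, 2]) - 1.0) < 1e-9     # K = 2, H = log 2
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized Entropy",
"sec_num": "3.2"
},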
{
"text": "We rely on max information gain cut to find out if there was a change in the annotation scheme that caused a conflict, and to identify the exact date d of the change. Let us assume the relatively common case that there is exactly one relevant change in the guideline. Then, we aim to split the annotations of an utterance to two lists; one list containing annotations prior to the change, and the other one containing annotations after the change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changes in Annotation Guideline: Max Information Gain Cut",
"sec_num": "3.3"
},
{
"text": "Inspired by methods used for splitting on a feature in decision trees (Mahendra et al., 2014) , we harness information gain (IG) to determine the date to split at. Concretely, given a list B of chronologically ordered annotations for the same utterance, and their corresponding annotation dates, we choose the date d that maximizes IG. If the value of IG is larger than a threshold IG 0 , we deem the annotations prior to d incorrect. The higher the IG is, the more probable the annotations prior to d to be incorrect. We define a boolean variable D which is true if the date of an annotation comes after d, and false otherwise. It divides the list of annotations B to two sublists, B b of size N b of annotations before date d, and B a of size N a of annotations after date d. We compute IG as follows:",
"cite_spans": [
{
"start": 70,
"end": 93,
"text": "(Mahendra et al., 2014)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Changes in Annotation Guideline: Max Information Gain Cut",
"sec_num": "3.3"
},
{
"text": "IG(B, D) = N H(B) \u2212 N H(B|D), where N H(B|D) = N b * N H(B b ) + N a * N H(B a ) N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changes in Annotation Guideline: Max Information Gain Cut",
"sec_num": "3.3"
},
{
"text": "We use the normalized entropy (N H) for IG computation, as shown in the equation above. As a result, IG is no longer strictly positive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changes in Annotation Guideline: Max Information Gain Cut",
"sec_num": "3.3"
},
{
"text": "In the case of changes in the annotation guideline, there will be high disagreement among annotations before and after the change, and thus, N H(B) will be high. Moreover, annotations before the change will agree among each other, and similarly, for annotations after the change. Therefore, N H(B|D) will be low. Then IG(B, D) takes its maximum value at the date of the guideline change, and annotations after this date, which belong to the latest guideline, are correct. For example, for the following date-ordered annotations; {a 1 (03-2019), a 1 (07-2019), a 1 (08-2019), a 2 (10-2019), a 2 (11-2019), a 3 (12-2019), a 2 (01-2020), a 2 (02-2020)}, spliting at d = (08-2019) yields the highest IG value, as shown in Figure 3 . This indicates that there was a change in the annotation of this utterance on 08-2019. Hence, a 1 annotation is deemed wrong. In Section 4.2, we empirically prove that for high IG values, a large percentage of annotations occurring in the first half of the Max IG Cut split is incorrect, whereas a large percentage of annotations in the second half is correct. After the split, N H is computed for the remaining annotations i.e., annotations after d. If N H is less than a threshold N H 0 , we assign the utterance the annotation with maximum frequency (i.e., majority voting). In the example above, N H is low after the split, and the conflict is resolved by changing all annotations (i.e., a 1 and a 3 ) to a 2 . Our reasoning is that, when N H is high, majority voting will likely be outperformed by an alternative model (LSTM-based method, explained next) as there is high disagreement between the annotators. Note that we do not drop any utterances, we replace wrong annotations with the correct ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 718,
"end": 726,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Changes in Annotation Guideline: Max Information Gain Cut",
"sec_num": "3.3"
},
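{
"text": "A minimal sketch of the max IG cut (ours, not the authors' code), reusing the normalized_entropy function sketched in Section 3.2 and assuming annotations are given as date-ordered (date, label) pairs:

```python
import math

def max_ig_cut(annotations):
    # Find the split date d maximizing IG(B, D) = NH(B) - NH(B|D) (Section 3.3).
    # Returns (best_ig, best_date); annotations dated up to best_date are the
    # candidate pre-guideline-change annotations. Illustrative sketch only.
    labels = [a for _, a in annotations]
    n = len(labels)
    nh_all = normalized_entropy(labels)
    best_ig, best_date = -math.inf, None
    for i in range(1, n):  # candidate cut between positions i-1 and i
        before, after = labels[:i], labels[i:]
        nh_cond = (len(before) * normalized_entropy(before)
                   + len(after) * normalized_entropy(after)) / n
        ig = nh_all - nh_cond
        if ig > best_ig:
            best_ig, best_date = ig, annotations[i - 1][0]
    return best_ig, best_date
```

On the date-ordered example above, the cut with the highest IG falls after the last a_1 annotation (08-2019), matching the guideline change described in the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Changes in Annotation Guideline: Max Information Gain Cut",
"sec_num": "3.3"
},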
{
"text": "To make classification in the ambiguous high NH cases, we use a supervised classifier trained on the unambiguous examples from our data, in this case an LSTM-based neural model (Hochreiter and Schmidhuber, 1997) . For the following list of annotations, {a 1 , a 2 , a 3 , a 2 , a 1 , a 3 }, no split with IG greater than a threshold can be found, and N H = 1. For such utterances, we rely on a neural model to estimate the probability of each annotation i.e., a 1 , a 2 , and a 3 . Then we assign the annotation with highest probability to the utterance. Concretely, we use the model of Chiu et al. (2016), a bidirectional word-level LSTM model with a character-based CNN layer. A softmax layer is used on top of the output of the bidirectional LSTM, which computes a probability distribution over the output slot labels for a given input token. We extend the model to a multi-task setting to support IC by concatenating the last hidden states of the Bi-LSTM, and passing them to a softmax layer, similar to Yang et al. (2016) . We harness the probabilities of the output of the softmax layer and compute the final probability of the annotation by multiplying the probability of each of its slots and of the intent.",
"cite_spans": [
{
"start": 177,
"end": 211,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
},
{
"start": 1008,
"end": 1026,
"text": "Yang et al. (2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 253,
"end": 281,
"text": "{a 1 , a 2 , a 3 , a 2 , a 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "High Entropy Conflicts: LSTM",
"sec_num": "3.4"
},
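{
"text": "As an illustration of how the softmax outputs could be combined into an annotation score (our sketch; the data structures holding the model outputs are hypothetical):

```python
import math

def annotation_log_prob(intent_probs, slot_probs, annotation):
    # Score one candidate annotation from the softmax outputs (Section 3.4).
    # `intent_probs` maps intent label -> probability (IC softmax); `slot_probs[t]`
    # maps slot label -> probability for token position t (SL softmax); `annotation`
    # is a pair (intent, slots) with one slot label per token. Summing log
    # probabilities is equivalent to multiplying the probabilities, as in the text.
    intent, slots = annotation
    score = math.log(intent_probs[intent])
    for t, slot in enumerate(slots):
        score += math.log(slot_probs[t][slot])
    return score

def resolve_high_nh(intent_probs, slot_probs, candidates):
    # Assign the candidate annotation with the highest model probability.
    return max(candidates, key=lambda a: annotation_log_prob(intent_probs, slot_probs, a))
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "High Entropy Conflicts: LSTM",
"sec_num": "3.4"
},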
{
"text": "In this section we evaluate our method both intrinsically and extrinsically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Data. We use a real-world dataset of a commercial dialog system in German, belonging to six different domains covering different, macro-purposes like, for instance, musical or movies requests. For the purpose of IC and SL, domains are treated as separate datasets. Utterances were manually transcribed and annotated with domain, intent and slot labels across many different batches at different points of time. In total we have 3.5M and 560K training and testing utterances, respectively. The percentage of conflicts in the training data varies across domains, ranging from 4.9% to 10.9%. Most conflicts are of high entropy, as shown in Figure 4 . The test set is of higher quality compared to the train set as each collection of test set data is annotated twice. Generally, the test set has lower number of conflicts compared to the train set. We do not resolve the conflicts in the test data to avoid artificial inflation of results. LSTM model. For high entropy conflicts, we use a single layer network for the forward and the back- Figure 5 : Accuracy of the rule change detection method described in Section 3.3. For high IG values, the accuracy of annotations after a date d, at which there is a guideline change, is 90%, while the accuracy of annotations before d is over 80%.",
"cite_spans": [],
"ref_spans": [
{
"start": 637,
"end": 645,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 1036,
"end": 1044,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "ward LSTMs whose dimensions are set to 256. We use Glove pretrained German word embeddings (Pennington et al., 2014) with 300 dimensions. For the CNN layer, character embeddings were initialized randomly with 25 dimensions. We used a mini-batch Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001. We tried different optimizers with different learning rates (e.g., stochastic gradient descent), however, they performed worse than Adam. We also applied Dropout of 0.5 to each LSTM output (Hinton et al., 2012) . For training, we use the data described above (i.e., 3.5M utterances) after applying the Max IG Cut and majority voting to resolve low entropy conflicts, as described in Section 3.3. Highentropy conflicts are left unresolved. After 10 epochs, training is terminated. After training is done, the model is used for conflict resolution for high entropy cases.",
"cite_spans": [
{
"start": 91,
"end": 116,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 502,
"end": 523,
"text": "(Hinton et al., 2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "To asses the quality of our method, an expert linguist is asked to resolve 490 conflicts in two different domains e.g., Music. The linguist is asked to use the latest annotation guideline. On average, we have 12.6 utterances per conflict, with a total number of 6173 utterances for the 490 conflicts. The maximum number of utterances of a conflict is 181. On the annotation side, the maximum number of unique annotations of a conflict is 8, while the average number is 2.35 (Table 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 474,
"end": 483,
"text": "(Table 1)",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "We used our pipeline to resolve the 490 conflicts that were resolved by the linguist, where 229 conflicts out of the 490 were resolved with the LSTM model, which means that 46.7% of the conflicts were of high normalized entropy (\u2265 N H 0 = 0.75). The remaining 261 conflicts were resolved with majority voting. 120 out of the 490 conflicts had at least one guideline change (Table 2) . Max IG cut. For those conflicts with guideline changes we evaluate, after splitting the list of annotations at date d, whether the annotations after d are correct (a i af ter ), and whether the annotations before d are incorrect (a i bef ore ). To this end, for each conflict with IG \u2265 0.2, we compare each annotation after and before d with the ground-truth annotation (a gt ) provided by the linguist. a i af ter annotations should be correct, therefore, accuracy is 1 if a i af ter agrees with a gt , and 0 otherwise. On the other hand, a i bef ore annotations should be incorrect, and hence, accuracy is 1 if a i bef ore does not agree with a gt , and 0 otherwise. We compute the average accuracy over a i af ter annotations and the average accuracy over a i bef ore annotations for each conflict. We also compute the average across those conflicts with the same IG value.",
"cite_spans": [],
"ref_spans": [
{
"start": 373,
"end": 382,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "We depicted the results in Figure 5 . For high IG values, high accuracies are achieved for annotations after and before a split at a date d. For example, at IG = 0.9, the accuracy of annotations before d is almost 0.83, while the accuracy of annotations after d is 0.90. This shows that our max IG cut method was able to identify the right date d to split the list of annotations at for the majority of conflicts with guideline changes. We set IG 0 to 0.4. Majority Voting vs. LSTM. We evaluate the resolution of the 490 conflicts with the LSTM-based model and majority voting at different levels of NH. For each conflict, we apply the max IG cut and then Figure 6 : Accuracy with majority voting (orange) and with the LSTM-based method (blue) on the 490 conflicts with respect to ground-truth resolution provided by the linguist. For high values of NH, the LSTMbased model performs better than majority voting. resolve it using both methods of majority voting and LSTM. We then compare the final annotation each method delivers as correct with that delivered by the linguist. If both agree, then accuracy is 1, and 0 otherwise. For each N H value, we compute the average accuracy of the set of 50 conflicts with closest N H.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 35,
"text": "Figure 5",
"ref_id": null
},
{
"start": 656,
"end": 664,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "As expected, the accuracy with majority voting significantly drops with high entropy conflicts, as shown in Figure 6 . The LSTM-based model becomes more accurate as NH increases, reaching the highest accuracy in the case where N H = 1. In the training data, 29.3% of conflicts have N H = 1. As seen in the figure, accuracy diverges at N H = 0.75, which we use as N H 0 . That is, if N H \u2265 0.75, we use the LSTM-based model, and majority voting otherwise. For N H below 0.75, both majority voting and the LSTM-based model behave similarly, however, we use majority voting for low entropies as it is more intuitive.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intrinsic Evaluation",
"sec_num": "4.2"
},
{
"text": "To evaluate our method extrinsically on the downstream task of NLU, we trained a multi-task LSTMbased neural model for intent classification and slot labeling on the 3.5M utterances after resolving annotation conflicts using our proposed method (Figure 2) . Architecture-wise, the model is similar to the one we use for conflict resolution, described in Section 3.4. We compared this model with two baseline models trained as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 245,
"end": 255,
"text": "(Figure 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect on NLU",
"sec_num": "4.3"
},
{
"text": "1. NoResolution: this model was trained on the full training data without conflict resolution (i.e., 3.5M utterances). Table 3 : Results on the NLU task. Our pipeline achieved 2.75% relative change in error rate with respect to the NoResolution baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect on NLU",
"sec_num": "4.3"
},
{
"text": "2. Rand: We trained this model with conflicts resolved by choosing one annotation randomly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect on NLU",
"sec_num": "4.3"
},
{
"text": "The three models were tested on the same test set described above (560K utterances). We report the relative change in error rate with respect to the NoResolution model. The error rate is defined as the fraction of utterances in which there is at least an error either in IC or in SL. Results are shown in Table 3 . Overall, random conflict resolution slightly reduced the error rate with 0.55% relative change on average across domains, while our method achieved 2.75% error reduction. For each of the six domains, resolving conflicts with our method improves performance over random resolution and over no resolution. In one domain, a reduction in error rate of 4.7% is observed. For five domains, the difference in performance passes a two-sided paired t-test for statistical significance at 95% confidence level.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect on NLU",
"sec_num": "4.3"
},
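{
"text": "For clarity, a small sketch of this error rate computation (our illustration; predictions and references are assumed to be parallel lists of (intent, slot sequence) pairs):

```python
def nlu_error_rate(predictions, references):
    # Fraction of utterances with at least one error in either IC or SL (Section 4.3).
    # An utterance counts as an error if the predicted intent or any predicted
    # slot label differs from the reference.
    errors = sum(1 for (p_int, p_slots), (r_int, r_slots) in zip(predictions, references)
                 if p_int != r_int or list(p_slots) != list(r_slots))
    return errors / len(references)
```",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect on NLU",
"sec_num": "4.3"
},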
{
"text": "In this paper, we tackled the problem of annotation conflicts for the task of NLU for voice-controlled personal assistants. We presented a novel approach that combines information-theoretic measures and an LSTM-based neural model. We evaluated our method on a real-world large-scale dataset, both intrinsically and extrinsically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Although we focused on the task of NLU, our conflict resolution pipeline could be applied to any manual annotation task. In the future, we plan on investigating how the choice of the task-specific classification model affects performance. Moreover, we plan to study annotation conflict resolution for other NLP tasks e.g., PoS tagging and dependency parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We thank Melanie Bradford and our anonymous reviewers for their thoughtful comments and useful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural Named Entity Recognition from Subword Units",
"authors": [
{
"first": "Abdalghani",
"middle": [],
"last": "Abujabal",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Gaspers",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "2663--2667",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2019-1305"
]
},
"num": null,
"urls": [],
"raw_text": "Abdalghani Abujabal and Judith Gaspers. 2019. Neu- ral Named Entity Recognition from Subword Units. In Proc. Interspeech 2019, pages 2663-2667.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Comqa: A community-sourced dataset for complex factoid question answering with paraphrase clusters",
"authors": [
{
"first": "Abdalghani",
"middle": [],
"last": "Abujabal",
"suffix": ""
},
{
"first": "Rishiraj",
"middle": [],
"last": "Saha Roy",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Yahya",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "307--317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdalghani Abujabal, Rishiraj Saha Roy, Mohamed Yahya, and Gerhard Weikum. 2019. Comqa: A community-sourced dataset for complex factoid question answering with paraphrase clusters. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2- 7, 2019, Volume 1 (Long and Short Papers), pages 307-317. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "NoSta-d named entity annotation for German: Guidelines and dataset",
"authors": [
{
"first": "Darina",
"middle": [],
"last": "Benikova",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Reznicek",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "2524--2531",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darina Benikova, Chris Biemann, and Marc Reznicek. 2014. NoSta-d named entity annotation for Ger- man: Guidelines and dataset. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2524- 2531, Reykjavik, Iceland. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Interannotator agreement in sentiment analysis: Machine learning perspective",
"authors": [
{
"first": "Victoria",
"middle": [],
"last": "Bobicev",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Sokolova",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_015"
]
},
"num": null,
"urls": [],
"raw_text": "Victoria Bobicev and Marina Sokolova. 2017. Inter- annotator agreement in sentiment analysis: Machine learning perspective. In Proceedings of the Inter- national Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 97-102, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Inter-annotator agreement for a German newspaper corpus",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants. 2000. Inter-annotator agreement for a German newspaper corpus. In Proceed- ings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece. European Language Resources As- sociation (ELRA).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 740-750. ACL.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Named entity recognition with bidirectional lstm-cnns",
"authors": [
{
"first": "P",
"middle": [
"C"
],
"last": "Jason",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "TACL",
"volume": "4",
"issue": "",
"pages": "357--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason P. C. Chiu and Eric Nichols. 2016. Named en- tity recognition with bidirectional lstm-cnns. TACL, 4:357-370.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Detecting errors in part-of-speech annotation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dickinson",
"suffix": ""
},
{
"first": "W. Detmar",
"middle": [],
"last": "Meurers",
"suffix": ""
}
],
"year": 2003,
"venue": "10th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dickinson and W. Detmar Meurers. 2003. De- tecting errors in part-of-speech annotation. In 10th Conference of the European Chapter of the Associa- tion for Computational Linguistics, Budapest, Hun- gary. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Incorporating non-local information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. 2005. Incorporating non-local informa- tion into information extraction systems by gibbs sampling. In ACL 2005, 43rd Annual Meeting of the Association for Computational Linguistics, Proceed- ings of the Conference, 25-30 June 2005, University of Michigan, USA, pages 363-370. The Association for Computer Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Slot-gated modeling for joint slot filling and intent prediction",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Chih-Wen Goo",
"suffix": ""
},
{
"first": "Yun-Kai",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chih-Li",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Tsung-Chieh",
"middle": [],
"last": "Huo",
"suffix": ""
},
{
"first": "Keng-Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT",
"volume": "2",
"issue": "",
"pages": "753--757",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun- Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 753-757. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving neural networks by preventing co-adaptation of feature detectors",
"authors": [
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhut- dinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Comput",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Comput., 9(8):1735- 1780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning whom to trust with MACE",
"authors": [
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Eduard",
"middle": [
"H"
],
"last": "Hovy",
"suffix": ""
}
],
"year": 2013,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings",
"volume": "",
"issue": "",
"pages": "1120--1130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard H. Hovy. 2013. Learning whom to trust with MACE. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Proceed- ings, June 9-14, 2013, Westin Peachtree Plaza Hotel, Atlanta, Georgia, USA, pages 1120-1130. The Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Data-efficient paraphrase generation to bootstrap intent classification and slot labeling for new features in task-oriented dialog systems",
"authors": [
{
"first": "Shailza",
"middle": [],
"last": "Jolly",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Falke",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Tirkaz",
"suffix": ""
},
{
"first": "Daniil",
"middle": [],
"last": "Sorokin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics: Industry Track",
"volume": "",
"issue": "",
"pages": "10--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shailza Jolly, Tobias Falke, Caglar Tirkaz, and Daniil Sorokin. 2020. Data-efficient paraphrase generation to bootstrap intent classification and slot labeling for new features in task-oriented dialog systems. In Pro- ceedings of the 28th International Conference on Computational Linguistics: Industry Track, pages 10-20, Online. International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On normalized mutual information: Measure derivations and properties",
"authors": [
{
"first": "O",
"middle": [],
"last": "Tarald",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kv\u00e5lseth",
"suffix": ""
}
],
"year": 2017,
"venue": "Entropy",
"volume": "19",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tarald O. Kv\u00e5lseth. 2017. On normalized mutual in- formation: Measure derivations and properties. En- tropy, 19(11).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Information and Communication Technology",
"authors": [
{
"first": "M",
"middle": [
"S"
],
"last": "Mahendra",
"suffix": ""
},
{
"first": "E",
"middle": [
"J"
],
"last": "Neuhold",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Tjoa",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2014,
"venue": "Second IFIP TC 5/8 International Conference, ICT-EurAsia",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.S. Mahendra, E.J. Neuhold, A.M. Tjoa, and I. You. 2014. Information and Communication Technology: Second IFIP TC 5/8 International Conference, ICT- EurAsia 2014, Bali, Indonesia, April 14-17, 2014, Proceedings. Lecture Notes in Computer Science. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Part-of-speech tagging from 97% to 100%: Is it time for some linguistics?",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Computational Linguistics and Intelligent Text Processing -12th International Conference, CI-CLing",
"volume": "6608",
"issue": "",
"pages": "171--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning. 2011. Part-of-speech tagging from 97% to 100%: Is it time for some linguis- tics? In Computational Linguistics and Intelligent Text Processing -12th International Conference, CI- CLing 2011, Tokyo, Japan, February 20-26, 2011. Proceedings, Part I, volume 6608 of Lecture Notes in Computer Science, pages 171-189. Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Investigation of recurrent-neuralnetwork architectures and learning methods for spoken language understanding",
"authors": [
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2013,
"venue": "INTERSPEECH 2013, 14th Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "3771--3775",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gr\u00e9goire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural- network architectures and learning methods for spo- ken language understanding. In INTERSPEECH 2013, 14th Annual Conference of the International Speech Communication Association, Lyon, France, August 25-29, 2013, pages 3771-3775. ISCA.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Inter- est Group of the ACL, pages 1532-1543. ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning part-of-speech taggers with inter-annotator agreement loss",
"authors": [
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "742--751",
"other_ids": {
"DOI": [
"10.3115/v1/e14-1078"
]
},
"num": null,
"urls": [],
"raw_text": "Barbara Plank, Dirk Hovy, and Anders S\u00f8gaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Confer- ence of the European Chapter of the Association for Computational Linguistics, EACL 2014, April 26-30, 2014, Gothenburg, Sweden, pages 742-751. The As- sociation for Computer Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The reliability of anaphoric annotation, reconsidered: Taking ambiguity into account",
"authors": [
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Workshop on Frontiers in Corpus Annotations II: Pie in the Sky@ACL 2005",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimo Poesio and Ron Artstein. 2005. The relia- bility of anaphoric annotation, reconsidered: Taking ambiguity into account. In Proceedings of the Work- shop on Frontiers in Corpus Annotations II: Pie in the Sky@ACL 2005, Ann Arbor, MI, USA, June 29, 2005, pages 76-83. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Evaluation of spoken language systems: the ATIS domain",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Price ; Morgan Kaufmann",
"suffix": ""
}
],
"year": 1990,
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. J. Price. 1990. Evaluation of spoken language sys- tems: the ATIS domain. In Speech and Natural Lan- guage: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, USA, June 24-27, 1990. Mor- gan Kaufmann.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Squad: 100, 000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100, 000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2383-2392. The Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A typology of near-identity relations for coreference (NIDENT)",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ant\u00f2nia Mart\u00ed",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens, Eduard Hovy, and M. Ant\u00f2nia Mart\u00ed. 2010. A typology of near-identity relations for coref- erence (NIDENT). In Proceedings of the Seventh In- ternational Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Identity, non-identity, and near-identity: Addressing the complexity of coreference",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "M Ant\u00f2nia",
"middle": [],
"last": "Mart\u00ed",
"suffix": ""
}
],
"year": 2011,
"venue": "Lingua",
"volume": "121",
"issue": "6",
"pages": "1138--1152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens, Eduard Hovy, and M Ant\u00f2nia Mart\u00ed. 2011. Identity, non-identity, and near-identity: Ad- dressing the complexity of coreference. Lingua, 121(6):1138-1152.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Cheap and fast -but is it good? evaluating non-expert annotations for natural language tasks",
"authors": [
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "254--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast -but is it good? evaluating non-expert annotations for natural language tasks. In 2008 Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, Proceedings of the Conference, 25-27 October 2008, Honolulu, Hawaii, USA, A meeting of SIG- DAT, a Special Interest Group of the ACL, pages 254-263. ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "On coreferring: Coreference in MUC and related annotation schemes",
"authors": [
{
"first": "Kees",
"middle": [],
"last": "van Deemter",
"suffix": ""
},
{
"first": "Rodger",
"middle": [],
"last": "Kibble",
"suffix": ""
}
],
"year": 2000,
"venue": "Comput. Linguistics",
"volume": "26",
"issue": "4",
"pages": "629--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kees van Deemter and Rodger Kibble. 2000. On core- ferring: Coreference in MUC and related annotation schemes. Comput. Linguistics, 26(4):629-637.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Disagreement dissected: Vagueness as a source of ambiguity in nominal (co-) reference",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2006,
"venue": "Ambiguity in Anaphora Workshop Proceedings",
"volume": "",
"issue": "",
"pages": "83--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Versley. 2006. Disagreement dissected: Vagueness as a source of ambiguity in nominal (co-) reference. In Ambiguity in Anaphora Workshop Pro- ceedings, pages 83-89.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Detecting annotation scheme variation in out-of-domain treebanks",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
},
{
"first": "Julius",
"middle": [],
"last": "Steen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Versley and Julius Steen. 2016. Detecting annotation scheme variation in out-of-domain tree- banks. In Proceedings of the Tenth International Conference on Language Resources and Evalua- tion LREC 2016, Portoro\u017e, Slovenia, May 23-28, 2016. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Automatic detection and correction of errors in dependency treebanks",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Volokh",
"suffix": ""
},
{
"first": "G\u00fcnter",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2011,
"venue": "The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "346--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Volokh and G\u00fcnter Neumann. 2011. Au- tomatic detection and correction of errors in depen- dency treebanks. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Confer- ence, 19-24 June, 2011, Portland, Oregon, USA - Short Papers, pages 346-350. The Association for Computer Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Normalized expected utility-entropy measure of risk",
"authors": [
{
"first": "Jiping",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wanhua",
"middle": [],
"last": "Qiu",
"suffix": ""
}
],
"year": 2014,
"venue": "Entropy",
"volume": "16",
"issue": "",
"pages": "3590--3604",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiping Yang and Wanhua Qiu. 2014. Normalized ex- pected utility-entropy measure of risk. Entropy, 16:3590-3604.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multi-task cross-lingual sequence tagging from scratch",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2016. Multi-task cross-lingual sequence tag- ging from scratch. CoRR, abs/1603.06270.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A joint model of intent determination and slot filling for spoken language understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence",
"volume": "2016",
"issue": "",
"pages": "2993--2999",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spo- ken language understanding. In Proceedings of the Twenty-Fifth International Joint Conference on Arti- ficial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 2993-2999. IJCAI/AAAI Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "IG values at each date. The split at d =08-2019 has the highest IG value. We cannot split at the first and last dates.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Histogram of conflicts in the training data. Most conflicts have high entropy.",
"type_str": "figure"
},
"TABREF2": {
"content": "<table><tr><td>Guideline change detected</td><td>120</td></tr><tr><td>Resolved with LSTM model</td><td>229</td></tr><tr><td colspan=\"2\">Resolved with majority voting 261</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "Statistics on the 490 conflicts used for our evaluation."
},
"TABREF3": {
"content": "<table><tr><td>: Out of the 490 conflicts, 229 were resolved</td></tr><tr><td>with the LSTM model, while 261 conflicts were re-</td></tr><tr><td>solved with majority voting.</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": ""
}
}
}
}