{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:40:22.520305Z"
},
"title": "Spurious Correlations in Cross-Topic Argument Mining",
"authors": [
{
"first": "Terne",
"middle": [
"Sasha",
"Thorn"
],
"last": "Jakobsen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {}
},
"email": ""
},
{
"first": "Maria",
"middle": [],
"last": "Barrett",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {
"region": "IT"
}
},
"email": "[email protected]"
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Copenhagen",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on withintopic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-theart cross-topic model on distant target topics.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on withintopic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-theart cross-topic model on distant target topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When a sentiment analysis model associates the word Shrek with positive sentiment (Sindhwani and Melville, 2008) , it relies on a spurious correlation. While the movie Shrek was popular at the time the training data was sampled, this is unlikely to transfer across demographics, platforms and years. While there exists a continuum from sentiment words such as fantastic to spurious correlations such as Shrek, with words such as Hollywood or anticipation being perhaps in a grey zone, demoting spurious correlations is key to learning robust NLP models (Sutton et al., 2006; S\u00f8gaard, 2013; Tu et al., 2020) .",
"cite_spans": [
{
"start": 82,
"end": 112,
"text": "(Sindhwani and Melville, 2008)",
"ref_id": "BIBREF29"
},
{
"start": 553,
"end": 574,
"text": "(Sutton et al., 2006;",
"ref_id": "BIBREF35"
},
{
"start": 575,
"end": 589,
"text": "S\u00f8gaard, 2013;",
"ref_id": "BIBREF30"
},
{
"start": 590,
"end": 606,
"text": "Tu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper studies a similar problem in state-ofthe-art cross-topic argument mining systems. The task of argument mining is to recognise the existence of claims and premises in a text span. The All code will be publicly available at https:// github.com/terne/spurious_correlations_ in_argmin Figure 1 : In human interaction, it is evident that relying on topic words for recognizing an argument is nonsensical. It is, nevertheless, what a BERT-based crosstopic argument mining model does. standard evaluation protocol is to evaluate argument mining systems across topics, i.e., on heldout topics, precisely to avoid over-fitting to a single topic (Daxenberger et al., 2017; Stab et al., 2018; Reimers et al., 2019) . This study shows that despite this sensible cross-topic evaluation protocol, stateof-the-art systems nevertheless rely primarily on spurious correlations, e.g., guns (Figure 1 ). These spurious correlations transfer across some topics in popular benchmarks, but only because the topics are closely related.",
"cite_spans": [
{
"start": 647,
"end": 673,
"text": "(Daxenberger et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 674,
"end": 692,
"text": "Stab et al., 2018;",
"ref_id": "BIBREF33"
},
{
"start": 693,
"end": 714,
"text": "Reimers et al., 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 292,
"end": 300,
"text": "Figure 1",
"ref_id": null
},
{
"start": 883,
"end": 892,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present experiments with an out-of-the-box learning architecture for argument mining, yet with state-of-the-art performance, based on Microsoft's MT-DNN library (Liu et al., 2019) . We train models on the UKP Sentential Argument Mining Corpus (Stab et al., 2018) , the IBM Debater Argument Search Engine Dataset (Levy et al., 2018) , the Argument Extraction corpus (Swanson et al., 2015) , and the Vaccination Corpus (Morante et al., 2020) . We analyse the models with respect to spurious correlations using the post-hoc interpretability tool LIME (Ribeiro et al., 2016) and we find that the models rely heavily on these. This analysis is the paper's main contribution: In \u00a75, we: a) evaluate our best-performing model on a small set of challenge examples, which we make available, and which motivate our subsequent analyses; b) manually analyse how many of the words our models rely the most on are spurious correlations; c) evaluate how much weight our models attribute to open class words and whether multi-task training effectively moves emphasis to closed-class items that likely transfer better across topics; d) evaluate how much weight our models attribute to words in a manually constructed claim indicator list (Morante et al., 2020; , and whether multi-task training effectively moves emphasis to such claim indicators that likely transfer better across topics; and lastly e) evaluate the performance of models trained only on closedclass words or closed class and open class words that are shared across topics. Surprisingly, we find that models with access to only closed-class words, and a few common (topic-independent) open-class words, perform better across distant topics than our baseline, state-of-the-art models (Table 5) .",
"cite_spans": [
{
"start": 164,
"end": 182,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 246,
"end": 265,
"text": "(Stab et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 315,
"end": 334,
"text": "(Levy et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 368,
"end": 390,
"text": "(Swanson et al., 2015)",
"ref_id": "BIBREF36"
},
{
"start": 420,
"end": 442,
"text": "(Morante et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 546,
"end": 573,
"text": "LIME (Ribeiro et al., 2016)",
"ref_id": null
},
{
"start": 1224,
"end": 1246,
"text": "(Morante et al., 2020;",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 1736,
"end": 1745,
"text": "(Table 5)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contributions",
"sec_num": null
},
{
"text": "We first describe the task of argument mining, focusing, in particular, on the subtle difference between argument mining ('this is an argument for or against x') and stance detection ('this is an expression of opinion for or against x'). Both tasks are very relevant for social scientists, monitoring the dynamics of public opinion. Still, whereas stance detection can be used to see what fractions of demographic subgroups are in favor of or against some topic, argument mining can be used to identify the arguments made for and against policies in political discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument mining",
"sec_num": "2"
},
{
"text": "What is an argument? An argument is made up of propositions (claims), which are statements that are either true or false. Traditionally, an argument must consist of at least two claims, with one being the conclusion (major claim) and at least one reason (premise) backing up that claim. Some argument annotation schemes ask annotators to label premises and major claims separately (Lindahl et al., 2019) . Others simplify the task to identifying claim or claim-like sentences (Morante et al., 2020) or to whether sentences are claims supporting or opposing a particular idea or topic (Levy et al., 2018; Stab et al., 2018) . The resources used in our experiments below are of the latter type: Sentences are labeled as arguments if they present evidence or reasoning in relation to a claim or topic and are refutable. The resources used in our experiments are annotated with arguments in the context of a particular topic, as well as the argument's polarity, i.e., what is annotated relates to stance. The key difference between the current task and stance detection is that arguments require the author to present evidence or reasoning for or against the topic.",
"cite_spans": [
{
"start": 381,
"end": 403,
"text": "(Lindahl et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 476,
"end": 498,
"text": "(Morante et al., 2020)",
"ref_id": "BIBREF23"
},
{
"start": 584,
"end": 603,
"text": "(Levy et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 604,
"end": 622,
"text": "Stab et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument mining",
"sec_num": "2"
},
{
"text": "Spurious correlations of arguments Arguments for or against a policy typically refer to different concepts. Take, for example, discussions of minimum wage and the terms living wages and jobs. Since these terms are frequent in arguments for and against minimum wage, they will be predictive of arguments (in discussions of minimum wage). Still, mentions of the terms are not themselves markers of arguments, but simply spurious correlations of arguments. We use the same definition of spurious correlations as Wang and Culotta (2020) , mainly that a relationship between a term and a label is spurious if one cannot expect the term to be a determining factor for assigning the label. 1 Examples of the contrary are terms such as if and because (and to some degree stance terms), which one can reasonably expect to be determining factors for an argument to exist (and therefore to be stable across topics and time).",
"cite_spans": [
{
"start": 509,
"end": 532,
"text": "Wang and Culotta (2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument mining",
"sec_num": "2"
},
{
"text": "The UKP Sentential Argument Mining Corpus (UKP) (Stab et al., 2018) contains 25,492 sentences spanning eight controversial topics (abortion, cloning, death penalty, gun control, marijuana legalization, school uniforms, minimum wage and nuclear energy), each annotated at the sentence level as one of three classes; NO ARGUMENT, AR-GUMENT AGAINST, and ARGUMENT FOR. For example, a sentence about death penalty may not be arguing for or against death penalty (NO ARGU-MENT), may present an argument against having death penalty as a punishment for a severe crime (ARGUMENT AGAINST), or may present an argument in favor of the same (ARGUMENT FOR). The data is annotated such that the evaluation of a sentence (being an argument or not) is not strictly dependent on the topic. However, it should still be unambiguously supportive of or against a topic. Claims will not be annotated as an argument unless they include some evidence or reasoning behind the claim; however, Lin et al. (2019) do find a few wrongly annotated sentences in this regard. The corpus comes with a fixed 70-10-20 split.",
"cite_spans": [
{
"start": 48,
"end": 67,
"text": "(Stab et al., 2018)",
"ref_id": "BIBREF33"
},
{
"start": 967,
"end": 984,
"text": "Lin et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The IBM Debater Argument Search Engine Dataset (IBM) is from a larger dataset of argumentative sentences defined through query patterns by Levy et al. (2017 Levy et al. ( , 2018 . We use only the 2,500 sentences that are gold labelled -with binary labels, where positive labels were given to statements that directly support or contest a topic. The sentences are from Wikipedia articles and span 50 topics. Since the authors used queries to mine the examples, the data is imbalanced (70% positive). We introduce a random 70-30 split.",
"cite_spans": [
{
"start": 139,
"end": 156,
"text": "Levy et al. (2017",
"ref_id": "BIBREF18"
},
{
"start": 157,
"end": 177,
"text": "Levy et al. ( , 2018",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The Argument Extraction Corpus (AQ) (Swanson et al., 2015) contains 5,374 sentences annotated with argument quality on a continuous scale between 0 (hard to interpret the argument) and 1 (easy to interpret the argument). Of the corpora included in our study, this differs most from the others; however, the topics included are controversial topics (gun control, gay marriage, evolution, and death penalty), similar to the UKP Corpus. The sentences are partly from the Internet Argument Corpus (Walker et al., 2012) and partly from createdebate.com. We introduce a random 70-30 split.",
"cite_spans": [
{
"start": 36,
"end": 58,
"text": "(Swanson et al., 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "The Vaccination Corpus (VacC) was presented in Morante et al. (2020) and consists of 294 documents from online debates on vaccination with marked claims. A claim is defined as opinionated statements wrt. vaccination. For our purpose, we split the documents into sentences (23,467). We use binary labels (claim or not) and introduce a random 70-10-20 split.",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "Morante et al. (2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We now describe our learning architecture, an almost out-of-the-box application of the MT-DNN architecture in Liu et al. (2019) . It is a strong model that achieves a better performance than previously reported across the benchmarks.",
"cite_spans": [
{
"start": 110,
"end": 127,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "The MT-DNN model of Liu et al. (2019) combines the pre-trained BERT architecture with multitask learning. The model can be broken up into shared layers and task-specific layers. The shared layers are initialised with the pre-trained BERT base model (Devlin et al., 2019 ). We add a taskspecific output layer for each task and update all model parameters during training with AdaMax. The task-specific layers are logistic regression classifiers with softmax activation, minimising crossentropy loss functions for classification tasks or mean squared error for regression tasks. If we only have a single output layer, we refer to the architecture as single-task DNN (ST-DNN) rather than MT-DNN. We train all models over 10 epochs with a batch size of 5 for feasibility and otherwise use default hyperparameters.",
"cite_spans": [
{
"start": 20,
"end": 37,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF21"
},
{
"start": 249,
"end": 269,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
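{
"text": "To make the architecture concrete, the following is a minimal PyTorch sketch of the shared-encoder/task-specific-head structure described above. It is an illustration under our assumptions, not the MT-DNN library code: the class name and learning rate are hypothetical, and only the use of BERT base as shared layers, per-task linear heads and the AdaMax optimizer follow the setup above.

import torch
import torch.nn as nn
from transformers import BertModel

class MultiTaskBert(nn.Module):
    # Shared BERT layers with one lightweight output head per task.
    # A single head gives the single-task (ST-DNN) variant.
    def __init__(self, num_labels_per_task):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')  # shared layers
        self.heads = nn.ModuleList(
            [nn.Linear(self.bert.config.hidden_size, n) for n in num_labels_per_task]
        )

    def forward(self, input_ids, attention_mask, task_id):
        pooled = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        # Logits; softmax + cross-entropy (or MSE for a regression head with n=1)
        # are applied in the task-specific loss.
        return self.heads[task_id](pooled)

# Hypothetical usage: UKP (3 classes) with IBM (2 classes) as an auxiliary task.
model = MultiTaskBert(num_labels_per_task=[3, 2])
optimizer = torch.optim.Adamax(model.parameters(), lr=5e-5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},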
{
"text": "Following Stab et al. 2018, we iteratively combine the training and validation data from seven of the eight topics of the UKP Corpus for training and parameter tuning and use the test data of the held-out topic for testing. We firstly treat the task as a single-sentence classification task and train an ST-DNN with the BERT-base model as shared layers. Since Tu et al. (2020) argues multi-task learning effectively reduces sensitivity to spurious correlations, we experiment with MT-DNN models based on different data and task combinations: For each auxiliary dataset (IBM, AQ, and VAcC), we train an MT-DNN model with the UKP Corpus as one task and the auxiliary data as another task. We denote the MT-DNN models as follows: MT-DNN+IBM refers to a model trained with the IBM data as an auxiliary claim classification task; MT-DNN+AQ is trained with AQ as an auxiliary regression task; MT-DNN+VacC is trained with VAcC data as an auxiliary claim classification task; MT-DNN+AQ+IBM+VacC is our largest model trained with all auxiliary tasks. Topic-MT-DNN provides us with an upper bound: In this setting, all topics are used in training and tuning, including the target topic, as eight separate tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
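{
"text": "A minimal sketch of the leave-one-topic-out loop just described, assuming hypothetical helpers train_model and evaluate_macro_f1 that stand in for (MT-)DNN training and testing.

TOPICS = ['abortion', 'cloning', 'death penalty', 'gun control',
          'marijuana legalization', 'school uniforms', 'minimum wage',
          'nuclear energy']

scores = {}
for held_out in TOPICS:
    # Train and tune on the train+dev splits of the seven remaining topics...
    train_topics = [t for t in TOPICS if t != held_out]
    model = train_model(train_topics)                      # hypothetical helper
    # ...and evaluate on the test split of the held-out topic only.
    scores[held_out] = evaluate_macro_f1(model, held_out)  # hypothetical helper",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},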
{
"text": "We evaluate the models on the UKP Corpus using the cross-topic evaluation protocol of (Stab et al., 2018) -training with seven topics and testing on a held-out topic. We report the average macro F 1 across five random seeds. In-topic, cross-topic and constrained models cannot be directly compared. Still, in-topic and constrained models provide upper and lower bounds in the sense that they represent scenarios where models are encouraged, respectively prohibited, to rely on spurious features. We report averages across 5 random seeds except \u2020 , which is only one run. The best performances per column within cross-topic models are boldfaced.",
"cite_spans": [
{
"start": 86,
"end": 105,
"text": "(Stab et al., 2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "els, we achieve an average macro F 1 of .642, which is a big improvement from the .429 reported by Stab et al. (2018) . Our ST-DNN model also outperforms the best-reported score in the literature, which, as far as we know, is .633 by Reimers et al. (2019) . Reimers et al. (2019) used BERT Large and, unlike us, integrated topic information in the model. Multi-task learning can improve the performance to .644, a 35% error reduction relative to the upper bound of training a model on all eight topics, i.e., including in-topic training data. We see a large variation in the performance across topics for all models, with the abortion topic being hardest to classify and cloning being easiest. With two classes -argument or not -the average macro F 1 is .776, again with large differences across topics; abortion being hardest to classify (.656) and minimum wage being easiest (.828) . To analyze our models, we use the popular post-hoc interpretability tool LIME (Ribeiro et al., 2016) . By training linear (logistic regression) models on perturbations of each instance, LIME learn interpretable models that locally approximate our models' decision boundaries. The weights of the LIME models tell us which features are locally important. 2 2 LIME has several weaknesses: LIME is linear (Bramhall et al., 2020) , unstable (Elshawi et al., 2019) and very sensitive to the width of the kernel used to assign weights to input example perturbations (Vlassopoulos, 2019; Kopper, 2019) , an increasing number of features also increases weight instability (Gruber, 2019) , and Vlassopoulos (2019) argues that with sparse data, sampling is insufficient. Laugel et al. (2018) argues the specific sampling technique is suboptimal. Since we use aggregate LIME statistics across hundreds of data points, these weaknesses should have limited impact on our results; LIME remains a de facto standard, and most alternatives suffer a) Challenge examples For an initial qualitative error analysis, 19 short text pieces are taken from exercises made by Jon M. Young for his Critical Thinking course at Fayetteville State University. 34 Of these, the first six are examples of sentences that comprise an argument or not, and if they do, the conclusions and premises have been annotated by Young. The last 13 examples are from exercises where we annotated the correct answers. We contrast the LIME analyses of the predictions of our best performing model, i.e. MT-DNN+VacC+IBM+AQ, as well as our ST-DNN baseline. 5 An example of the LIME explanations can be seen in Figure 2 . The remaining LIME explanations are in the appendix in Figures 4-7 .",
"cite_spans": [
{
"start": 99,
"end": 117,
"text": "Stab et al. (2018)",
"ref_id": "BIBREF33"
},
{
"start": 234,
"end": 255,
"text": "Reimers et al. (2019)",
"ref_id": null
},
{
"start": 258,
"end": 279,
"text": "Reimers et al. (2019)",
"ref_id": null
},
{
"start": 959,
"end": 986,
"text": "LIME (Ribeiro et al., 2016)",
"ref_id": null
},
{
"start": 1287,
"end": 1310,
"text": "(Bramhall et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 1322,
"end": 1344,
"text": "(Elshawi et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 1445,
"end": 1465,
"text": "(Vlassopoulos, 2019;",
"ref_id": "BIBREF39"
},
{
"start": 1466,
"end": 1479,
"text": "Kopper, 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1549,
"end": 1563,
"text": "(Gruber, 2019)",
"ref_id": "BIBREF9"
},
{
"start": 1570,
"end": 1589,
"text": "Vlassopoulos (2019)",
"ref_id": "BIBREF39"
},
{
"start": 1646,
"end": 1666,
"text": "Laugel et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 2545,
"end": 2553,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 2611,
"end": 2622,
"text": "Figures 4-7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
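{
"text": "The following sketch shows how such a LIME explanation can be produced with the lime package. It is a sketch under our assumptions: the predict_proba wrapper is a hypothetical stand-in for the trained (MT-)DNN, the example sentence is our own, and only the neighbourhood size of 500 follows footnote 5.

import numpy as np
from lime.lime_text import LimeTextExplainer

CLASS_NAMES = ['NO ARGUMENT', 'ARGUMENT AGAINST', 'ARGUMENT FOR']

def predict_proba(texts):
    # Hypothetical stand-in: replace with the real model forward pass
    # returning an (n_sentences, n_classes) array of softmax probabilities.
    return np.full((len(texts), len(CLASS_NAMES)), 1.0 / len(CLASS_NAMES))

explainer = LimeTextExplainer(class_names=CLASS_NAMES)
explanation = explainer.explain_instance(
    'If guns are banned, fewer crimes will be committed.',
    predict_proba,
    num_features=10,   # report the ten locally most important words
    num_samples=500,   # neighbourhood size (footnote 5)
    labels=(0, 1, 2),
)
print(explanation.as_list(label=2))  # (word, weight) pairs towards ARGUMENT FOR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},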
{
"text": "Out of the 19 examples, seven were incorrectly classified by our best model. Common to these misclassified examples is either a rather uncontroversial, everyday topic (4c, 4g, 5e) or a very informative language (4h, 5g, 5h ). Since the model was mainly trained on controversial topics, it is not surprising that these uncontroversial cases make the model misstep. While this is a tiny sample, these incorrect classifications do suggest that our models do not transfer well to any topic, possibly indicating they rely more on topic words than on from similar weaknesses or are prohibitively costly to run. argument markers. This is supported by the observation that open-class words -rather than argumentative language patterns -are given most of the weight towards the argument classes. Open-class words are defined as nouns, verbs and adjectives, and closed-class words are the remains. For example, we see \"guns\" as an argument indicator rather than \"if\" in 2a and 2b; we see \"people\" and \"needs\" emphasized more than \"if\" in 5f; and in 5i, the stance indicator \"disastrous\" and the open-class word \"television\" have large weights, while \"seems\" and \"caused\" are not emphasized at all. Overall, this suggests our models learn what arguments are about but not what constitutes an argument. The single-task model exhibits similar patterns. In fact, there seems to be little difference between what the two models attend to. This initial evaluation raises two questions: To what extent do our models rely on topic-specific spurious correlations with limited ability to transfer across (distant) topics instead of relying on more generic argument markers? And to what extent do simple regularization techniques like multi-task learning, as suggested in Tu et al. (2020) , prevent our models from over-fitting in this way? b) How many of the words we rely on are spurious? We generate and accumulate LIME explanations for our single-task models over the corresponding held-out topics' development sets to evaluate how much our models rely on spurious correlations. We accumulate LIME weights for words towards the predicted class. Words are sorted by accumulated weights, and we manually annotate the top k words for whether they are spurious.",
"cite_spans": [
{
"start": 1751,
"end": 1767,
"text": "Tu et al. (2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 211,
"end": 222,
"text": "(4h, 5g, 5h",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
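{
"text": "A minimal sketch of the accumulation just described, reusing the explainer and the hypothetical predict_proba wrapper from the LIME sketch above; dev_sentences is assumed to hold the held-out topic's development set.

from collections import defaultdict

accumulated = defaultdict(float)
for sentence in dev_sentences:
    # Explain each sentence with respect to its predicted class...
    predicted = int(predict_proba([sentence])[0].argmax())
    exp = explainer.explain_instance(sentence, predict_proba,
                                     num_samples=500, labels=(predicted,))
    # ...and accumulate the per-word weights towards that class.
    for word, weight in exp.as_list(label=predicted):
        accumulated[word] += weight

# The top k words by accumulated weight are then manually annotated
# as argument, topic, stance or other words.
top_k = sorted(accumulated.items(), key=lambda kv: kv[1], reverse=True)[:20]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},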
{
"text": "Specifically, and to better understand the distribution of word types, we divide the top 20 words into four categories: argument words, topic words, stance words, and other. We define argument words as words that likely appear when present-ing claims, independent on the topic, including markers of evidence and reasons such as \"if\", \"that\" and \"because\" and similar lexical indicators based on . Contrary to argument words, we define topic words as words that have no relation to the act of presenting an argument but are clearly related to the specific topic, e.g., nouns or verbs frequently used when debating or merely describing the topic. Lastly, we define stance words as opinionated words that express a stance toward a topic (but is not only used in the context of arguments, i.e., presenting evidence). Examples include describing death penalty as \"murder\" or school uniforms as \"uncomfortable\". Three annotators agreed on the classification. Words that did not fit our scheme were categorised as other. Table 2 shows the top 20 words, categorised, for all development sets. 6 Our first observation is that 62.5% of the top 20 words are topic words, and for the GUN CON-TROL topic, none of the words are argument words. Instead, topic words such as \"criminals\", \"background\" and \"checks\" receive high weights. These words are neither indicative of an argument or stance -hence, they are spurious correlations. Interestingly, the only topic where argument words is the majority category is cloning -the held-out topic where all our models perform best. This suggests reducing our models' reliance on topic words can improve the cross-topic performance of argument mining models, which we will investigate in the following experiments. Of course, our models, nevertheless, show relatively good performance across topics, suggesting that some topic words transfer across topics in the UKP corpus. We will discuss recommendations for experimental protocols and the importance of evaluating across distant topics below.",
"cite_spans": [
{
"start": 1085,
"end": 1086,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1014,
"end": 1021,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Note that we do not normalize the accumulated LIME weights by word frequency, which favors frequent words. When normalising the weights, our models also rely heavily on low-frequency stance words and for all topics, except cloning, there are many topic words among the top 20. Highfrequency words (as well as most argument words) are naturally ranked much lower after normalisation. Stance words are, of course, not spurious for our three-way classification problem, but a near dis- 6 Top 20 words along with their frequency and LIME weights are provided at github.com/terne/ spurious_correlations_in_argmin/top_ words appearance of argument words in the normalized top 20 suggests our models are unlikely to capture low-frequency argument markers. c) How much weight do our models attribute to open class words, and does multi-task learning move emphasis to closed-class items? Multitask learning is a regularization technique (S\u00f8gaard and Goldberg, 2016; Liu et al., 2019) and may, as suggested by Tu et al. (2020) , reduce the extent to which our models rely on spurious correlations, which tend to be open class words. To compare the weight attributed to open-class words, across single-task and multi-task models, we define a score reflecting the weight put on open class words in a sentence: For each word in the sentence, we consider the maximum LIME weight of the two weights towards the argument classes ARGUMENT AGAINST and ARGUMENT FOR. We then take the sum of LIME weights put on open class words, normalised by the total sum of weights, and divide the normalised weight by the sentence fraction of open-class words. Table 3 shows the average sentence scores for each topic and model. We observe that the weights are very similar across single-task and multi-task models (and topics), and a Wilcoxon signed-rank test confirms that there is no significant difference between single-task and multi-task open class sentence scores. We also performed the test with sentence scores defined for each class separately (rather than taking the maximum weight) and again found no significant differences. , because, proves, however, shows, result, opinion, conclusion, given, accordingly, since, clearly, mean, truth, consequently, must, would, points, therefore, whereas, obvious, demonstrates, thus, fact, if, that, hence, i, could, should, for, contrary, potential, may, believe, suggests, probable, conclude, clear, point, sum, entails, think, implies, explanation, follows, reason Shared open political, single, debate, had, asked, made, policy, last, legal, cause, long, few, said, want, person, issue, say, group, possible, use, people, believe, good, have, fact, point, society, time, such, going, put, used, come, based, question, think, example, part, other, are, year, including, argument, only, way, effects, go, many, support, more, several, end, has, day, see, need, make, get, means, public, is, high, help , money, find, found, same words indicative of arguments, we use the claim indicator list provided in the appendix for the Vaccination Corpus' annotation guideline (Morante et al., 2020) , which is in turn based on . We simplify the indicators to unigrams and combine the set with a few additions from Young's Critical Thinking course website; see Table 4 . For each held-out topic, we compute the average LIME weight of each claim indicator. Figure 3 shows a boxplot with these averages across single-task and multi-task models. We test for significance using the Wilcoxon signedrank test. 
Argument words are weighted significantly higher in the two argument classes compared to NO ARGUMENT, at the 0.01 significance level, as would be expected. With ARGUMENT AGAINST, we find significantly higher weights attributed to argument words by the multi-task models. However, with ARGUMENT FOR, the opposite scenario is observed. Hence, multi-task learning does not robustly move emphasis to claim indicators. Moreover, when normalising the weights by frequency before averaging, the significant difference between single-task and multi-task in ARGU-MENT FOR disappears. e) Removing spurious features We have seen how our models rely on spurious features such as gun and marijuana. What happens if we remove this? Obviously, removing only such words would require expensive manual annotation (like we did for the top-20 LIME words), but we can do something more aggressive (with high recall), namely to remove all open class words. If a model that relies only on closed-class words exhibits better performance across distant topics than state-of-theart models, this is strong evidence that this model overfits to spurious features. We find significant differences between the weights resulting from a single-task and multi-task model towards the two argument classes AR-GUMENT AGAINST and ARGUMENT FOR at the 5 and 1 percent significance level, respectively. Furthermore, argument words are weighted significantly higher in the two argument classes than in the NO ARGUMENT class, at the 0.01 significance level.",
"cite_spans": [
{
"start": 928,
"end": 956,
"text": "(S\u00f8gaard and Goldberg, 2016;",
"ref_id": "BIBREF31"
},
{
"start": 957,
"end": 974,
"text": "Liu et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 1000,
"end": 1016,
"text": "Tu et al. (2020)",
"ref_id": null
},
{
"start": 3088,
"end": 3110,
"text": "(Morante et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 1629,
"end": 1636,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 3272,
"end": 3279,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 3367,
"end": 3375,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
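{
"text": "A minimal sketch of the open-class sentence score as we read the definition above. The open/closed split via NLTK part-of-speech tags is our own illustrative choice (nouns, verbs and adjectives count as open class), and the per-word weight lists are assumed to come from the LIME explanations.

import nltk  # requires nltk.download('averaged_perceptron_tagger')

OPEN_CLASS_TAGS = ('NN', 'VB', 'JJ')  # Penn tag prefixes for nouns, verbs, adjectives

def open_class_score(words, weights_against, weights_for):
    # Per word, keep the maximum LIME weight over the two argument classes.
    w = [max(a, f) for a, f in zip(weights_against, weights_for)]
    is_open = [tag.startswith(OPEN_CLASS_TAGS) for _, tag in nltk.pos_tag(words)]
    # Sum of weight on open-class words, normalised by the total weight...
    normalised = sum(wi for wi, o in zip(w, is_open) if o) / sum(w)
    # ...divided by the sentence fraction of open-class words.
    return normalised / (sum(is_open) / len(words))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},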
{
"text": "To this end, we train single-task models (ST-DNN) with all open class words replaced by unknown tokens. We call this model CLOSED. We report macro F 1 on UKP for each held-out topic, as well as an average across topics, in Table 1 . We also train a model with closed-class words and the open class words that are shared across all eight topics. This amounts to 67 open class words, in total; see Table 4 . 7 We include these 67 open class words in CLOSED+SHARED (in Table 1 ) -and find that this small set of words increase the average macro F 1 with 2 percentage points over CLOSED. Another effect of training CLOSED and CLOSED+SHARED models is that the large variance in performance across topics largely disappears.",
"cite_spans": [
{
"start": 406,
"end": 407,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 223,
"end": 230,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 396,
"end": 403,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 466,
"end": 473,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
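{
"text": "A sketch of the preprocessing behind CLOSED and CLOSED+SHARED, under the assumption that masking is done by substituting BERT's [UNK] token; the part-of-speech-based open-class test and the (partial) shared-word set shown here are illustrative.

import nltk  # requires nltk.download('punkt') and the perceptron tagger

OPEN_CLASS_TAGS = ('NN', 'VB', 'JJ')
SHARED_OPEN = {'said', 'find', 'found', 'cost'}  # illustrative subset of the 67 words in Table 4

def mask_open_class(sentence, keep_shared=False):
    # CLOSED: every open-class word becomes [UNK];
    # CLOSED+SHARED (keep_shared=True): shared open-class words survive.
    out = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        if tag.startswith(OPEN_CLASS_TAGS) and not (keep_shared and word.lower() in SHARED_OPEN):
            out.append('[UNK]')
        else:
            out.append(word)
    return ' '.join(out)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},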
{
"text": "To explore whether removing open class words may improve generalization to more distant topics, we test the constrained models on the test sets of VacC and IBM. While the UKP dataset has three classes, the evaluation datasets have two. We, there- 7 It is worth noting that the set of 67 common open class words above reflects that some words common across topics are in fact of an argumentative nature, with verbs such as \"said\", \"find\" and \"found\" that are often used for referencing sources when providing reasons for claims. We inspected common words among the highest-ranking open class words. We found that very few highly weighted words transfer across more than a few topics, e.g. even at the top 200 level, only one word, namely cost, transfer across four, i.e. half, of the topics. Table 5 : ST-DNN and CLOSED+SHARED models are trained solely on the UKP corpus, and we here report these model's performance (macro F1) on the binary, out-of-domain corpora (IBM and VacC). The supervised upper bound is (multi-task) trained on the training data of all four datasets.",
"cite_spans": [
{
"start": 247,
"end": 248,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 791,
"end": 798,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "fore, merge the two argument classes in UKP when evaluating test performance on VacC and IBM. We report the average test score of the eight models (holding out different UKP topics). Results are found in Table 5 along with a single-task model baseline, i.e., the standard ST-DNN model trained on the UKP corpus, as well as the upper bound on performance provided by an MT-DNN model trained on all four datasets, including the two target datasets. The CLOSED+SHARED model -somewhat surprisingly and very encouragingly -performs better than the unconstrained ST-DNN for both test sets (by some margin). This indicates that state-of-the-art argument mining systems overfit to spurious correlations, as well as the need for evaluation on more distant topics.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
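{
"text": "A small sketch of the label merge used for this evaluation: UKP's two argument classes are collapsed into a single positive class before computing macro F1 against the binary IBM and VacC gold labels. The prediction and gold-label variables are assumed inputs.

from sklearn.metrics import f1_score

UKP_TO_BINARY = {'NO ARGUMENT': 0, 'ARGUMENT AGAINST': 1, 'ARGUMENT FOR': 1}

binary_predictions = [UKP_TO_BINARY[p] for p in ukp_predictions]  # assumed model output
macro_f1 = f1_score(binary_gold, binary_predictions, average='macro')  # assumed gold labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},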
{
"text": "Feature analysis in argument mining Daxenberger et al. (2017) underline, like us, the challenge of cross-domain generalization in argument mining, finding that models performing best indomain may not be the ones performing best outof-domain, which they argue may in part be due to different notions of claims in the dataset development. Through experiments with different feature groups, such as embeddings, syntax or lexical features, they find lexical clues to be the \"essence\" of claims and that simple rules are important for cross-domain performance. Simple lexical clues are also found to be effective for argument mining in Levy et al. (2018) , who create a claim lexicon, as well as in Lin et al. (2019) who investigate the effectiveness of integrating lexica (a claim lexicon, a sentiment lexicon, an emotion lexicon and the Princeton WordNet 8 ) in the attention mechanism of a BiLSTM, but evaluate this only in the context of in-domain argument mining.",
"cite_spans": [
{
"start": 631,
"end": 649,
"text": "Levy et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 694,
"end": 711,
"text": "Lin et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Feature analysis in deep neural networks Feature analysis in deep neural networks is not straightforward but, by now, several approaches to attribute importance in deep neural networks to features or input tokens are available. One advantage of LIME is that it can be applied to any model posthoc. Other approaches for interpreting transformers, specifically, focus on inspections of the attention weights (Abnar and Zuidema, 2020; Vig, 2019) and vector norms (Kobayashi et al., 2020) .",
"cite_spans": [
{
"start": 406,
"end": 431,
"text": "(Abnar and Zuidema, 2020;",
"ref_id": "BIBREF0"
},
{
"start": 432,
"end": 442,
"text": "Vig, 2019)",
"ref_id": "BIBREF38"
},
{
"start": 460,
"end": 484,
"text": "(Kobayashi et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Landeiro and Culotta (2018) provide a thorough description of spurious correlations deriving from confounding factors in text classification and outline methods from social science of controlling for confounds. However, these methods require the confounding factors to be known, which is often not the case. This problem is tackled by Wang and Culotta (2020) who, in contrast, develop a computational method for distinguishing spurious from genuine correlations in text classification to adjust for the identified spurious features to improve model robustness. They consider spurious correlations in sentiment classification and toxicity detection. McHardy et al. (2019) identified similar problems in sarcasm detection and suggested adversarial training to reduce sensitivity to spurious correlations. Kumar et al. (2019) present a similar method to avoid \"topical confounds\" in native language identification.",
"cite_spans": [
{
"start": 335,
"end": 358,
"text": "Wang and Culotta (2020)",
"ref_id": "BIBREF41"
},
{
"start": 803,
"end": 822,
"text": "Kumar et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious correlations in text classification",
"sec_num": null
},
{
"text": "MTL to regularize spurious correlations Tu et al. (2020) suggest multi-task learning increase robustness to spurious correlations. Multi-task learning has previously been shown to be an effective regularizer (S\u00f8gaard and Goldberg, 2016; Sener and Koltun, 2018) , leading to better generalization to new domains (Cheng et al., 2015; Peng and Dredze, 2017) . Jabbour et al. (2020) , though, presents experiments in automated diagnosis of disease based on chest X-rays suggesting that multi-task learning is not always robust to spurious correlations. In our study, we expected multi-task learning to move emphasis to closed-class items and claim indicators and away from the spurious correlations that do not hold as general markers of claims and arguments across topics and domains. Still, our analysis of feature weights does not indicate that multi-task learning is effective to this end.",
"cite_spans": [
{
"start": 208,
"end": 236,
"text": "(S\u00f8gaard and Goldberg, 2016;",
"ref_id": "BIBREF31"
},
{
"start": 237,
"end": 260,
"text": "Sener and Koltun, 2018)",
"ref_id": "BIBREF28"
},
{
"start": 311,
"end": 331,
"text": "(Cheng et al., 2015;",
"ref_id": "BIBREF5"
},
{
"start": 332,
"end": 354,
"text": "Peng and Dredze, 2017)",
"ref_id": "BIBREF24"
},
{
"start": 357,
"end": 378,
"text": "Jabbour et al. (2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious correlations in text classification",
"sec_num": null
},
{
"text": "We have shown that cross-topic evaluation of argument mining is insufficient to prevent models from relying on spurious features. Many of the spurious correlations that our models rely on are shared across some pairs of UKP topics but fail to generalise to distant topics (IBM and VacC). This shows cross-topic evaluation can encourage learning from signals, rather than spurious features; the problem with the protocol in Stab et al. (2018) is using multiple source topics. When using multiple source topics for training (and if the annotation relies on arguments being related to these topics), the models may overly rely on features that are frequent in debates of these topics but are not related to the forming of an argument and hence do not generalise well to unseen topics. The variance in cross-topic performance may be explained by some topic words transferring across a few topics, since the large variance disappears when removing open-class words. We propose evaluating on more distant held-out topics or simply considering the worst-case performance across all pairs of topics to estimate real-world out-of-topic performance. ",
"cite_spans": [
{
"start": 423,
"end": 441,
"text": "Stab et al. (2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spurious correlations in text classification",
"sec_num": null
},
{
"text": "Arjovsky et al. (2019) provides the example of a classifier trained to distinguish between images of cows and camels; if prone to spurious correlations, the classifier may be challenged by a picture of a cow on a sandy beach.Bommasani and Cardie (2020) also refer to spurious correlations as reasoning shortcuts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tinyurl.com/y6ldjtvh 4 https://tinyurl.com/yyw5uhtm 5 For LIME, we use a neighbourhood of size 500 both here and in the following experiments. We use models trained with random seed 2018 for the current and following LIME experiments, and for the current analysis, we use models trained with the cloning topic as our held-out topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://wordnet.princeton.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Maria Barrett is supported by a research grant (34437) from VILLUM FONDEN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "(a) This is an argument with the claim as the first sentence. The model has predicted ARGUMENT AGAINST. This makes sense because it is an argument against censorship, with this being the focus of the conclusion.(b) The model has rightly predicted the example as not being an argument. (c) This is an argument with the last sentence as the conclusion. The model incorrectly predicts it as not being an argument.(d) This is not an argument. The model incorrectly predicts it as being an argument against something. This example is not formally an argument because it is formulated as a question. We note that Stab et al. (2018) likewise found questions among false positives in their error analysis.(e) This is an argument with the conclusion as the last sentence. The model correctly predicts it as an argument for something (for stricter controls on the content of entertainment).(f) This is an argument with the conclusion as the last sentence. The model correctly predicts it as an argument for something (for exercise). ",
"cite_spans": [
{
"start": 607,
"end": 625,
"text": "Stab et al. (2018)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Quantifying attention flow in transformers",
"authors": [
{
"first": "Samira",
"middle": [],
"last": "Abnar",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4190--4197",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.385"
]
},
"num": null,
"urls": [],
"raw_text": "Samira Abnar and Willem Zuidema. 2020. Quantify- ing attention flow in transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4190-4197, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Invariant risk minimization",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Arjovsky",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Lopez-Paz",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Arjovsky, L\u00e9on Bottou, Ishaan Gulrajani, and David Lopez-Paz. 2019. Invariant risk minimization. Cite arxiv:1907.02893.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Intrinsic evaluation of summarization datasets",
"authors": [
{
"first": "Rishi",
"middle": [],
"last": "Bommasani",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8075--8096",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.649"
]
},
"num": null,
"urls": [],
"raw_text": "Rishi Bommasani and Claire Cardie. 2020. Intrinsic evaluation of summarization datasets. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8075-8096, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Qlime-a quadratic local interpretable model-agnostic explanation approach",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bramhall",
"suffix": ""
},
{
"first": "Hayley",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Tieu",
"suffix": ""
},
{
"first": "Nibhrat",
"middle": [],
"last": "Lohia",
"suffix": ""
}
],
"year": 2020,
"venue": "SMU Data Science Review",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bramhall, Hayley Horn, Michael Tieu, and Nibhrat Lohia. 2020. Qlime-a quadratic local interpretable model-agnostic explanation approach. SMU Data Science Review, 3.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "for example, for similar arguments in cross-domain NLP",
"authors": [
{
"first": "See",
"middle": [],
"last": "R\u00fcd",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "See R\u00fcd et al. (2011) or Sultan et al. (2016), for example, for similar arguments in cross-domain NLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Open-domain name error detection using a multitask RNN",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "737--746",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1085"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Cheng, Hao Fang, and Mari Ostendorf. 2015. Open-domain name error detection using a multi- task RNN. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 737-746, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What is the essence of a claim? cross-domain claim identification",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2055--2066",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1218"
]
},
"num": null,
"urls": [],
"raw_text": "Johannes Daxenberger, Steffen Eger, Ivan Habernal, Christian Stab, and Iryna Gurevych. 2017. What is the essence of a claim? cross-domain claim identi- fication. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2055-2066, Copenhagen, Denmark. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "On the interpretability of machine learningbased model for predicting hypertension",
"authors": [
{
"first": "Radwa",
"middle": [],
"last": "Elshawi",
"suffix": ""
},
{
"first": "Mouaz",
"middle": [],
"last": "Al-Mallah",
"suffix": ""
},
{
"first": "Sherif",
"middle": [],
"last": "Sakr",
"suffix": ""
}
],
"year": 2019,
"venue": "BMC Med Inform Decis Mak",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radwa Elshawi, Mouaz Al-Mallah, and Sherif Sakr. 2019. On the interpretability of machine learning- based model for predicting hypertension. BMC Med Inform Decis Mak., 19.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "LIME and sampling",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Gruber",
"suffix": ""
}
],
"year": 2019,
"venue": "Limitations of Interpretable Machine Learning Methods, chapter 13",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Gruber. 2019. LIME and sampling. In Christoph Molnar, editor, Limitations of Inter- pretable Machine Learning Methods, chapter 13.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep learning applied to chest x-rays: Exploiting and preventing shortcuts",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Jabbour",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Fouhey",
"suffix": ""
},
{
"first": "Ella",
"middle": [],
"last": "Kazerooni",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"W"
],
"last": "Sjoding",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Wiens",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Jabbour, David Fouhey, Ella Kazerooni, Michael W. Sjoding, and Jenna Wiens. 2020. Deep learning applied to chest x-rays: Exploiting and pre- venting shortcuts.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Attention is not only a weight: Analyzing transformers with vector norms",
"authors": [
{
"first": "Goro",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Tatsuki",
"middle": [],
"last": "Kuribayashi",
"suffix": ""
},
{
"first": "Sho",
"middle": [],
"last": "Yokoi",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7057--7075",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.574"
]
},
"num": null,
"urls": [],
"raw_text": "Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight: Analyzing transformers with vector norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7057-7075, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Lime and neighborhood",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Kopper",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Kopper. 2019. Lime and neighborhood.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Limitations of Interpretable Machine Learning Methods, chapter 13",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Christoph Molnar, editor, Limitations of Inter- pretable Machine Learning Methods, chapter 13.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Topics to avoid: Demoting latent confounds in text classification",
"authors": [
{
"first": "Sachin",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4153--4163",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1425"
]
},
"num": null,
"urls": [],
"raw_text": "Sachin Kumar, Shuly Wintner, Noah A. Smith, and Yulia Tsvetkov. 2019. Topics to avoid: Demoting latent confounds in text classification. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 4153-4163, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Robust text classification under confounding shift",
"authors": [
{
"first": "Virgile",
"middle": [],
"last": "Landeiro",
"suffix": ""
},
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Artificial Intelligence Research",
"volume": "63",
"issue": "",
"pages": "391--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Virgile Landeiro and Aron Culotta. 2018. Robust text classification under confounding shift. Journal of Artificial Intelligence Research, 63:391-419.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Defining locality for surrogates in post-hoc interpretablity",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Laugel",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Renard",
"suffix": ""
},
{
"first": "Marie-Jeanne",
"middle": [],
"last": "Lesot",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Marsala",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Detyniecki",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.07498"
]
},
"num": null,
"urls": [],
"raw_text": "Thibault Laugel, Xavier Renard, Marie-Jeanne Lesot, Christophe Marsala, and Marcin Detyniecki. 2018. Defining locality for surrogates in post-hoc inter- pretablity. arXiv preprint arXiv:1806.07498.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Towards an argumentative content search engine using weak supervision",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Bogin",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Gretz",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2066--2081",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ran Levy, Ben Bogin, Shai Gretz, Ranit Aharonov, and Noam Slonim. 2018. Towards an argumentative con- tent search engine using weak supervision. In Pro- ceedings of the 27th International Conference on Computational Linguistics, pages 2066-2081, Santa Fe, New Mexico, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Unsupervised corpus-wide claim detection",
"authors": [
{
"first": "Ran",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Gretz",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Sznajder",
"suffix": ""
},
{
"first": "Shay",
"middle": [],
"last": "Hummel",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Aharonov",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Slonim",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 4th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "79--84",
"other_ids": {
"DOI": [
"10.18653/v1/W17-5110"
]
},
"num": null,
"urls": [],
"raw_text": "Ran Levy, Shai Gretz, Benjamin Sznajder, Shay Hum- mel, Ranit Aharonov, and Noam Slonim. 2017. Un- supervised corpus-wide claim detection. In Pro- ceedings of the 4th Workshop on Argument Mining, pages 79-84, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Lexicon guided attentive neural network model for argument mining",
"authors": [
{
"first": "Jian-Fu",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Kuo",
"middle": [
"Yu"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "67--73",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4508"
]
},
"num": null,
"urls": [],
"raw_text": "Jian-Fu Lin, Kuo Yu Huang, Hen-Hsen Huang, and Hsin-Hsi Chen. 2019. Lexicon guided attentive neu- ral network model for argument mining. In Pro- ceedings of the 6th Workshop on Argument Mining, pages 67-73, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards assessing argumentation annotation -a first step",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Lindahl",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Borin",
"suffix": ""
},
{
"first": "Jacobo",
"middle": [],
"last": "Rouces",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "177--186",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4520"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Lindahl, Lars Borin, and Jacobo Rouces. 2019. Towards assessing argumentation annotation -a first step. In Proceedings of the 6th Workshop on Argu- ment Mining, pages 177-186, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multi-task deep neural networks for natural language understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4487--4496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4487-4496, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Adversarial training for satire detection: Controlling for confounding variables",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Mchardy",
"suffix": ""
},
{
"first": "Heike",
"middle": [],
"last": "Adel",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "660--665",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1069"
]
},
"num": null,
"urls": [],
"raw_text": "Robert McHardy, Heike Adel, and Roman Klinger. 2019. Adversarial training for satire detection: Con- trolling for confounding variables. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 660-665, Minneapolis, Minnesota. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Annotating perspectives on vaccination",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Chantal",
"middle": [],
"last": "van Son",
"suffix": ""
},
{
"first": "Isa",
"middle": [],
"last": "Maks",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4964--4973",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante, Chantal van Son, Isa Maks, and Piek Vossen. 2020. Annotating perspectives on vacci- nation. In Proceedings of The 12th Language Re- sources and Evaluation Conference, pages 4964- 4973, Marseille, France. European Language Re- sources Association.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multi-task domain adaptation for sequence tagging",
"authors": [
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "91--100",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2612"
]
},
"num": null,
"urls": [],
"raw_text": "Nanyun Peng and Mark Dredze. 2017. Multi-task do- main adaptation for sequence tagging. In Proceed- ings of the 2nd Workshop on Representation Learn- ing for NLP, pages 91-100, Vancouver, Canada. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings. arXiv",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "9821--9822",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and clustering of arguments with contextualized word embeddings. arXiv, page 1906.09821v1.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "why should I trust you?\": Explaining the predictions of any classifier",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Marco Tulio Ribeiro",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should I trust you?\": Explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Piggyback: Using search engines for robust cross-domain named entity recognition",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "R\u00fcd",
"suffix": ""
},
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "965--975",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan R\u00fcd, Massimiliano Ciaramita, Jens M\u00fcller, and Hinrich Sch\u00fctze. 2011. Piggyback: Using search en- gines for robust cross-domain named entity recogni- tion. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 965-975, Port- land, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Multi-task learning as multi-objective optimization",
"authors": [
{
"first": "Ozan",
"middle": [],
"last": "Sener",
"suffix": ""
},
{
"first": "Vladlen",
"middle": [],
"last": "Koltun",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "31",
"issue": "",
"pages": "527--538",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozan Sener and Vladlen Koltun. 2018. Multi-task learning as multi-objective optimization. In Ad- vances in Neural Information Processing Systems, volume 31, pages 527-538. Curran Associates, Inc.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Documentword co-regularization for semi-supervised sentiment analysis",
"authors": [
{
"first": "Vikas",
"middle": [],
"last": "Sindhwani",
"suffix": ""
},
{
"first": "Prem",
"middle": [],
"last": "Melville",
"suffix": ""
}
],
"year": 2008,
"venue": "ICDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vikas Sindhwani and Prem Melville. 2008. Document- word co-regularization for semi-supervised senti- ment analysis. In ICDM.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Part-of-speech tagging with antagonistic adversaries",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "640--644",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. 2013. Part-of-speech tagging with an- tagonistic adversaries. In Proceedings of the 51st Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 640-644, Sofia, Bulgaria. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Deep multitask learning with low level tasks supervised at lower layers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "231--235",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2038"
]
},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi- task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 231-235, Berlin, Germany. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Parsing argumentation structures in persuasive essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "",
"pages": "619--659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Stab and Iryna Gurevych. 2017. Parsing ar- gumentation structures in persuasive essays. Com- putational Linguistics, 43:619-659.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Cross-topic argument mining from heterogeneous sources",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Rai",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3664--3674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Stab, Tristan Miller, Benjamin Schiller, Pranav Rai, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3664-3674.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bayesian supervised domain adaptation for short text similarity",
"authors": [
{
"first": "Md",
"middle": [
"Arafat"
],
"last": "Sultan",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Sumner",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "927--936",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1107"
]
},
"num": null,
"urls": [],
"raw_text": "Md Arafat Sultan, Jordan Boyd-Graber, and Tamara Sumner. 2016. Bayesian supervised domain adap- tation for short text similarity. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 927-936, San Diego, California. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Reducing weight undertraining in structured discriminative learning",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Sindelar",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "89--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton, Michael Sindelar, and Andrew McCal- lum. 2006. Reducing weight undertraining in struc- tured discriminative learning. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 89-95, New York City, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Argument mining: Extracting arguments from online dialogue",
"authors": [
{
"first": "Reid",
"middle": [],
"last": "Swanson",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Ecker",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 16th annual meeting of the special interest group on discourse and dialogue",
"volume": "",
"issue": "",
"pages": "217--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reid Swanson, Brian Ecker, and Marilyn Walker. 2015. Argument mining: Extracting arguments from on- line dialogue. In Proceedings of the 16th annual meeting of the special interest group on discourse and dialogue, pages 217-226.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Spandana Gella, and He He. 2020. An empirical study on robustness to spurious correlations using pre-trained language models",
"authors": [
{
"first": "Lifu",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Garima",
"middle": [],
"last": "Lalwani",
"suffix": ""
}
],
"year": null,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "621--633",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00335"
]
},
"num": null,
"urls": [],
"raw_text": "Lifu Tu, Garima Lalwani, Spandana Gella, and He He. 2020. An empirical study on robustness to spuri- ous correlations using pre-trained language models. Transactions of the Association for Computational Linguistics, 8:621-633.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A multiscale visualization of attention in the transformer model",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Vig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "37--42",
"other_ids": {
"DOI": [
"10.18653/v1/P19-3007"
]
},
"num": null,
"urls": [],
"raw_text": "Jesse Vig. 2019. A multiscale visualization of atten- tion in the transformer model. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics: System Demonstrations, pages 37-42, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Decision boundary approximation: A new method for locally explaining predictions of complex classification models",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Vlassopoulos",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgios Vlassopoulos. 2019. Decision boundary ap- proximation: A new method for locally explaining predictions of complex classification models. Tech- nical report, University of Leiden.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "A corpus for research on deliberation and debate",
"authors": [
{
"first": "Marilyn",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "Jean",
"middle": [
"E"
],
"last": "Fox Tree",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Anand",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Abbott",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2012,
"venue": "LREC",
"volume": "12",
"issue": "",
"pages": "812--817",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn A Walker, Jean E Fox Tree, Pranav Anand, Rob Abbott, and Joseph King. 2012. A corpus for research on deliberation and debate. In LREC, vol- ume 12, pages 812-817. Istanbul.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Identifying spurious correlations for robust text classification",
"authors": [
{
"first": "Zhao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "3431--3440",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.308"
]
},
"num": null,
"urls": [],
"raw_text": "Zhao Wang and Aron Culotta. 2020. Identifying spu- rious correlations for robust text classification. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 3431-3440, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Non-argumentative example sentence (because it is question rather than argument) explained with LIME. The orange highlights indicate words weighted positively towards the ARGUMENT AGAINST class. The darker the colour, the larger the weight. a) using MT-DNN+AQ+IBM+VacC as the predictor. b) using ST-DNN as the predictor. Both models used were trained with the cloning topic held out.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Boxplot of argument word LIME weights with each point representing the topic mean of the argument word weights.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "LIME explanations of the first 12 challenge examples predicted the single-task model. Highlight colours represents weight towards a class; blue: NO ARGUMENT; orange: ARGUMENT AGAINST; green: ARGUMENT FOR. Darker colours means larger weights. LIME explanations of the last seven challenge examples predicted by the single-task model. Highlight colours represents weight towards a class; blue: NO ARGUMENT; orange: ARGUMENT AGAINST; green: ARGU-MENT FOR. Darker colours means larger weights.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"text": "shows the average cross-topic results as well as results for each held-out topic for all models. With single-task mod-DNN .642\u00b1.011 .473\u00b1.012 .715\u00b1.012 .595\u00b1.009 .593\u00b1.011 .703\u00b1.010 .698\u00b1.015 .710\u00b1.013 .650\u00b1.002 MT-DNN+IBM .643\u00b1.009 .466\u00b1.019 .726\u00b1.010 .595\u00b1.006 .582\u00b1.004 .704\u00b1.010 .703\u00b1.010 .718\u00b1.009 .655\u00b1.006 MT-DNN+AQ .643\u00b1.011 .479\u00b1.015 .716\u00b1.006 .600\u00b1.012 .590\u00b1.010 .699\u00b1.011 .710\u00b1.010 .698\u00b1.008 .649\u00b1.015 MT-DNN+VacC .641\u00b1.010 .472\u00b1.016 .716\u00b1.008 .589\u00b1.009 .601\u00b1.009 .701\u00b1.011 .690\u00b1.010 .699\u00b1.013 .660\u00b1.006 MT-DNN+VacC+IBM+AQ .644\u00b1.011 .476\u00b1.009 .720\u00b1.021 .587\u00b1.011 .598\u00b1.005 .716\u00b1.011 .696\u00b1.003 .701\u00b1.018 .655\u00b1.006 CONSTRAINED CROSS-TOPIC MODELS (lower bounds) CLOSED .481\u00b1.014 .472\u00b1.016 .492\u00b1.006 .467\u00b1.013 .452\u00b1.015 .515\u00b1.021 .478\u00b1.012 .520\u00b1.012 .519\u00b1.008 CLOSED+SHARED .501\u00b1.010 .426\u00b1.012 .508\u00b1.016 .475\u00b1.009 .469\u00b1.006 .552\u00b1.004 .490\u00b1.005 .565\u00b1.017 .519\u00b1.008 Table 1: Macro F 1 scores across topics of the three-class UKP data. IN-TOPIC models are (also) trained on the training data of the target topic. CONSTRAINED models only rely on closed-class words and open class words shared across all topics.",
"num": null,
"content": "<table><tr><td>Model</td><td>Average</td><td>abortion</td><td>cloning</td><td>death</td><td>gun</td><td>marijuana</td><td>school</td><td>minimum</td><td>nuclear</td></tr><tr><td/><td/><td/><td/><td>penalty</td><td>control</td><td>legal</td><td>uniforms</td><td>wage</td><td>energy</td></tr><tr><td/><td/><td/><td colspan=\"3\">IN-TOPIC MODELS (upper bounds)</td><td/><td/><td/><td/></tr><tr><td>Topic-MT-DNN \u2020</td><td>.665</td><td>.571</td><td>.733</td><td>.595</td><td>.611</td><td>.724</td><td>.707</td><td>.716</td><td>.662</td></tr><tr><td/><td/><td/><td colspan=\"3\">CROSS-TOPIC MODELS</td><td/><td/><td/><td/></tr><tr><td>ST-</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null
},
"TABREF2": {
"text": "",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
},
"TABREF4": {
"text": "The sentence scores reflecting the weight put on open class words across domains and model types. There is no significant difference between mean sentence scores of ST and MT models.",
"num": null,
"content": "<table><tr><td>d) How much weight do our models attribute to</td></tr><tr><td>claim indicators, and does multi-task learning</td></tr><tr><td>move emphasis to such indicators? As a set of</td></tr></table>",
"type_str": "table",
"html": null
},
"TABREF5": {
"text": "Claim indicators (see text) and shared open class words across the UKP topics.",
"num": null,
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}