{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:44.284568Z"
},
"title": "Using Hierarchical Class Structure to Improve Fine-Grained Claim Classification",
"authors": [
{
"first": "Erenay",
"middle": [],
"last": "Dayanik",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Blessing",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Nico",
"middle": [],
"last": "Blokker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bremen",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Haunss",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bremen",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Gabriella",
"middle": [],
"last": "Lapesa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stuttgart",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The analysis of public debates crucially requires the classification of political demands according to hierarchical claim ontologies (e.g. for immigration, a supercategory \"Controlling Migration\" might have subcategories \"Asylum limit\" or \"Border installations\"). A major challenge for automatic claim classification is the large number and low frequency of such subclasses. We address it by jointly predicting pairs of matching super-and subcategories. We operationalize this idea by (a) encoding soft constraints in the claim classifier and (b) imposing hard constraints via Integer Linear Programming. Our experiments with different claim classifiers on a German immigration newspaper corpus show consistent performance increases for joint prediction, in particular for infrequent categories and discuss the complementarity of the two approaches.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The analysis of public debates crucially requires the classification of political demands according to hierarchical claim ontologies (e.g. for immigration, a supercategory \"Controlling Migration\" might have subcategories \"Asylum limit\" or \"Border installations\"). A major challenge for automatic claim classification is the large number and low frequency of such subclasses. We address it by jointly predicting pairs of matching super-and subcategories. We operationalize this idea by (a) encoding soft constraints in the claim classifier and (b) imposing hard constraints via Integer Linear Programming. Our experiments with different claim classifiers on a German immigration newspaper corpus show consistent performance increases for joint prediction, in particular for infrequent categories and discuss the complementarity of the two approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Newspaper articles are an invaluable source for the analysis of public debates. In political science, it is common to manually annotate the articles by identifying claims (text spans which report a demand on a specific policy aspect), assigning them fine-grained claim categories from domain-specific claim ontologies and attributing them to actors (e.g., politicians or parties). Actors and claim categories together can be used to construct expressive discourse networks (Leifeld, 2016) for in-depth analysis of debate structure and dynamics.",
"cite_spans": [
{
"start": 473,
"end": 488,
"text": "(Leifeld, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In line with the trend of applying NLP methods to questions from political science (e.g., Bamman and Smith, 2015; Glava\u0161 et al., 2019) claim classification has been framed as generic text classification (Pad\u00f3 et al., 2019) . That study however addresses only coarse-grained categories and reports mixed results even at that level, with a macro F1 of 46. This is arguably due to the well-known problems of fine-grained classification: The larger the set of classes, the more data would be desirable, while in actuality, the number of instances per class shrinks (Mai et al., 2018; Chang et al., 2020) .",
"cite_spans": [
{
"start": 90,
"end": 113,
"text": "Bamman and Smith, 2015;",
"ref_id": "BIBREF1"
},
{
"start": 114,
"end": 134,
"text": "Glava\u0161 et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 203,
"end": 222,
"text": "(Pad\u00f3 et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 561,
"end": 579,
"text": "(Mai et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 580,
"end": 599,
"text": "Chang et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our paper aims at developing practically useful models of fine-grained claim classification. Its main proposal is to exploit the hierarchical nature of claim ontologies by jointly predicting (frequent) supercategories and (informative) subcategories. By enforcing consistency between the levels, the predictions can profit off each other. We experiment with two operationalizations of this idea. The first one, Hierarchical Label Encoding (HLE, Shimaoka et al. (2017a)) introduces \"soft\" constraints through parameter sharing between classes in the classifier. The second one, Integer Linear Programming (ILP, e.g., Punyakanok et al. (2004) ) introduces \"hard\" constraints in a post-processing step.",
"cite_spans": [
{
"start": 616,
"end": 640,
"text": "Punyakanok et al. (2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both methods can be applied to a range of claim classifier architectures. We present experiments with four architectures on a German manually annotated corpus from the so-called refugee crisis in Germany in 2015. We answer the following questions: Do HLE and ILP improve the performance in our experimental setup? (Yes.) Is there complementarity between them? (Yes.) Does the effect depend on the underlying architectures. (Broadly, no.) What types of classes is the improvement most pronounced for. (Low-frequency ones.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experiments are conducted on an extended version of the DebateNet-migr15 (Lapesa et al., 2020 Modified corpus Code f low mid high 1xx 994 2 3 7 2xx 726 4 4 4 3xx 470 1 3 2 4xx 229 3 3 0 5xx 686 3 2 3 6xx 192 3 2 0 7xx 744 4 3 5 8xx 667 5 3 3 (b) cal science experts for the migration domain. The corpus contains 3827 annotated textual spans, each of which is assigned one or more categories from the claim ontology described below: spans can be assigned multiple categories when the statements touch on more than one policy issue.",
"cite_spans": [
{
"start": 77,
"end": 97,
"text": "(Lapesa et al., 2020",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 114,
"end": 277,
"text": "Code f low mid high 1xx 994 2 3 7 2xx 726 4 4 4 3xx 470 1 3 2 4xx 229 3 3 0 5xx 686 3 2 3 6xx 192 3 2 0 7xx 744 4 3 5 8xx 667 5 3 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset and Claim Ontology",
"sec_num": "2"
},
{
"text": "Claim ontology Policy debates are inherently complex, as a reflection of the complexity of the problems which the policy addresses: in our case, control of migration, but also integration of refugees, foreign policy, etc. In our case, the claim ontology consists of 100 subcategories which are grouped into 8 supercategories (cf. Table 1a ). For example, 'border controls' and 'quota for refugees' are subcategories of the supercategory 'migration control'. The fine-grained annotation is crucial to build a satisfactory picture of a policy debate: what we are interested in is the position of certain politicians with respect to specific policy aspects over time (i.e., being in favor or against refugee quotas), while the supercategories are not expressive enough for the analysis of the debate itself. At the same time, Table 1a shows the drop in frequency between supercategories (in the hundreds) and subcategories (in the tens), with pronounced differences between categories, resulting in a clear modeling challenge. We return to this point in Section 5.",
"cite_spans": [],
"ref_spans": [
{
"start": 330,
"end": 338,
"text": "Table 1a",
"ref_id": "TABREF1"
},
{
"start": 823,
"end": 831,
"text": "Table 1a",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset and Claim Ontology",
"sec_num": "2"
},
{
"text": "Given the properties described above, we model claim classification as multi-label classification. We follow previous work on coarse-grained claim classification (Pad\u00f3 et al., 2019) in comparing a set of neural models, ranging from baselines to stateof-the-art architectures. All models are trained using cross entropy loss with the sigmoid activation function. All models except BERT use custom Fast-Text (Bojanowski et al., 2017) word embeddings pretrained on a German newswire corpus. 2 LSTM This model passes the input through a single-layer LSTM. The final hidden state is used as input to a fully connected layer.",
"cite_spans": [
{
"start": 162,
"end": 181,
"text": "(Pad\u00f3 et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 406,
"end": 431,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Claim Classification",
"sec_num": "3"
},
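{
"text": "To make the multi-label setup concrete, the following minimal PyTorch sketch shows an LSTM classifier of this kind. It is our illustration, not the authors' code: the class name, the embedding stand-in, and the use of torch.nn.BCEWithLogitsLoss (sigmoid plus per-label cross entropy) are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass LSTMClaimClassifier(nn.Module):\n    # Hypothetical sketch: a single-layer LSTM whose final hidden state\n    # feeds a fully connected layer over all claim classes.\n    def __init__(self, vocab_size, emb_dim=300, hidden_size=500, num_classes=72):\n        super().__init__()\n        self.emb = nn.Embedding(vocab_size, emb_dim)  # stand-in for FastText vectors\n        self.lstm = nn.LSTM(emb_dim, hidden_size, batch_first=True)\n        self.fc = nn.Linear(hidden_size, num_classes)\n\n    def forward(self, token_ids):\n        _, (h_n, _) = self.lstm(self.emb(token_ids))\n        return self.fc(h_n[-1])  # logits; sigmoid is applied inside the loss\n\nmodel = LSTMClaimClassifier(vocab_size=50000)\nloss_fn = nn.BCEWithLogitsLoss()  # sigmoid + cross entropy per label\nx = torch.randint(0, 50000, (16, 40))  # toy batch: 16 spans, 40 tokens each\ny = torch.zeros(16, 72)\ny[:, 3] = 1.0  # toy multi-hot labels\nloss_fn(model(x), y).backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Claim Classification",
"sec_num": "3"
},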
{
"text": "BiLSTM A single-layer Bidirectional LSTM (Graves et al., 2013) traverses the input. The final hidden states in both directions are concatenated and fed to a fully connected layer.",
"cite_spans": [
{
"start": 41,
"end": 62,
"text": "(Graves et al., 2013)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Claim Classification",
"sec_num": "3"
},
{
"text": "BiLSTM+Attention This model combines the BiLSTM architecture with the attention mechanism described in Shimaoka et al. (2017a) . The input is fed to a single-layer BiLSTM. Then, the attention-weighted sum of the hidden states corresponding to the input sequence is fed to a fully connected layer.",
"cite_spans": [
{
"start": 103,
"end": 126,
"text": "Shimaoka et al. (2017a)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Claim Classification",
"sec_num": "3"
},
{
"text": "BERT This is a pretrained BERT (Devlin et al., 2019) model trained solely on German corpora 3 and a fully connected layer which is trained while the BERT encoder is fine-tuned. After each input is encoded, we use the final hidden state of the first token, corresponding to the special token [CLS] , as the contextualized representation of the input which serves as input to a fully connected layer.",
"cite_spans": [
{
"start": 31,
"end": 52,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 291,
"end": 296,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Claim Classification",
"sec_num": "3"
},
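{
"text": "A minimal sketch of this setup with the Hugging Face transformers library; the checkpoint name bert-base-german-cased and all names below are illustrative assumptions, not the authors' exact configuration.\n\nimport torch.nn as nn\nfrom transformers import AutoModel, AutoTokenizer\n\nclass BertClaimClassifier(nn.Module):\n    def __init__(self, model_name='bert-base-german-cased', num_classes=72):\n        super().__init__()\n        self.encoder = AutoModel.from_pretrained(model_name)\n        self.fc = nn.Linear(self.encoder.config.hidden_size, num_classes)\n\n    def forward(self, input_ids, attention_mask):\n        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)\n        cls = out.last_hidden_state[:, 0]  # final hidden state of [CLS]\n        return self.fc(cls)  # one logit per claim class\n\ntok = AutoTokenizer.from_pretrained('bert-base-german-cased')\nbatch = tok(['Die Grenzkontrollen sollen verstaerkt werden.'], return_tensors='pt', truncation=True, max_length=200)\nlogits = BertClaimClassifier()(batch['input_ids'], batch['attention_mask'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Claim Classification",
"sec_num": "3"
},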
{
"text": "The obvious shortcoming of the model architectures sketched above is that they make the standard assumption of class independence -even though we know that the classes in claim classification are related. We therefore build on the idea that we can label all documents with both sub-and supercategories during training time, and then encourage the model to jointly predict categories at both levels so that these predictions are consistent with one another. The expectation is that this creates an incentive to learn better representations for the fine-grained classes. We now sketch two generally applicable methods that implement this idea.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "Hierarchical Label Encoding (HLE). The idea behind this approach is to inject the inference relation between sub-and supercategories into the representation learning process. Following Shimaoka et al. 2017a, we create a binary square matrix, S \u2208 {0, 1} l\u00d7l , where l is the number of claim classes in dataset. Each cell in the matrix is filled with 1 either if the column class is subclass of or same as the row class, and filled with 0 otherwise. The matrix S is not updated during training and integrated into models by multiplying it by the weight matrix W of the final fully connected layer of each model: p(y = 1) = sigm(h(W S) ) where W \u2208 R l\u00d7hs , h \u2208 R 1\u00d7hs , |y| = l, and hs is the size of the hidden state of (Bi)LSTM or BERT. HLE introduces parameter sharing between classes in the same hierarchy (e.g. 100 and 101), but does not guarantee that the prediction output contains both a super-and a subcategory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
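{
"text": "A sketch of the HLE construction under the paper's two-level numbering scheme (supercategory codes ending in '00', e.g. 100, with subcategories 101, 106, ...); the helper name and the string heuristic for the subclass test are our assumptions.\n\nimport torch\n\ndef build_hle_matrix(classes):\n    # S[i][j] = 1 iff class j is a subclass of, or the same as, class i.\n    l = len(classes)\n    S = torch.zeros(l, l)\n    for i, sup in enumerate(classes):\n        for j, sub in enumerate(classes):\n            same = sub == sup\n            is_sub = sup.endswith('00') and sub[0] == sup[0] and not sub.endswith('00')\n            if same or is_sub:\n                S[i, j] = 1.0\n    return S  # kept fixed (not updated) during training\n\nclasses = ['100', '101', '106', '200', '204']\nS = build_hle_matrix(classes)\nl, hs = len(classes), 500\nW = torch.randn(l, hs)  # final-layer weights, shape l x hs as in the paper\nh = torch.randn(1, hs)  # encoder hidden state\np = torch.sigmoid(h @ (W.t() @ S))  # p(y = 1), shape 1 x l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},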
{
"text": "Integer Linear Programming (ILP). ILP has been applied to enforce linguistically motivated constraints on predicted structures such as semantic roles (Punyakanok et al., 2004) , dependency parsing (Riedel and Clarke, 2006) , or entailment graphs (Berant et al., 2011) . Formally, an integer linear program is an optimization problem over a set of integer variables x, given a linear objective function with a set of coefficients c and a set of linear inequality (and equality) constraints (Schrijver, 1984) :",
"cite_spans": [
{
"start": 150,
"end": 175,
"text": "(Punyakanok et al., 2004)",
"ref_id": "BIBREF15"
},
{
"start": 197,
"end": 222,
"text": "(Riedel and Clarke, 2006)",
"ref_id": "BIBREF16"
},
{
"start": 246,
"end": 267,
"text": "(Berant et al., 2011)",
"ref_id": "BIBREF2"
},
{
"start": 489,
"end": 506,
"text": "(Schrijver, 1984)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "max c x so that Ax \u2265 b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "We use ILP to select the most likely legal output from the probabilities estimated by the classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "Legal outputs are those where (a) for each predicted subcategory, the matching supercategory is predicted, and (b) for each predicted supercategory, at least one matching subcategory is predicted. We introduce a binary variable x i for each supercategory and subcategory in the claim ontology, indicating whether this class is being predicted. This makes our task a binary optimization problem, a subclass of ILP. The coefficients c are given by the probability estimates of the neural claim classifiers (NCCs):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "c i = P NCC (x i = 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "The objective function is the log likelihood of the complete model output, including both predicted and non-predicted classes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "i log c i x i + log[1 \u2212 c i ](1 \u2212 x i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "The first constraints we impose on the solution is that each predicted subcategory must be accompanied by the matching supercategory. Let sup(i) denote the supercategory for the subcategory i. Then this constraint can be formalized as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "for each subcategory x i : x i \u2212 x sup(i) \u2264 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "The second constraint is that each predicted supercategory is accompanied by at least one if its subcategories. Let subs(i) denote the set of subcategories for supercategory i. The constraint is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "for each supercategory x i : x i \u2212 j\u2208subs(i) x j \u2264 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
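{
"text": "The following sketch solves this binary program with the PuLP library. The paper does not name its solver, and the category-code convention (supercategories end in '00') plus all variable names are our assumptions; the log-likelihood objective and the two constraint families follow the formulation above.\n\nimport math\nfrom pulp import LpProblem, LpVariable, LpMaximize, lpSum, PULP_CBC_CMD\n\n# Toy classifier probabilities c_i = P_NCC(x_i = 1) per category code.\nprobs = {'100': 0.7, '101': 0.6, '106': 0.2, '200': 0.4, '204': 0.3}\n\nprob = LpProblem('claim_decoding', LpMaximize)\nx = {c: LpVariable('x_' + c, cat='Binary') for c in probs}\n\ndef safe_log(p):\n    return math.log(min(max(p, 1e-9), 1 - 1e-9))\n\n# Objective: log likelihood of predicted and non-predicted classes.\nprob += lpSum(safe_log(p) * x[c] + safe_log(1 - p) * (1 - x[c]) for c, p in probs.items())\n\nfor c in probs:\n    if not c.endswith('00'):  # (a) subcategory implies its supercategory\n        prob += x[c] - x[c[0] + '00'] <= 0\n    else:                     # (b) supercategory needs >= 1 subcategory\n        subs = [d for d in probs if d[0] == c[0] and not d.endswith('00')]\n        prob += x[c] - lpSum(x[d] for d in subs) <= 0\n\nprob.solve(PULP_CBC_CMD(msg=0))\npredicted = sorted(c for c in probs if x[c].value() == 1)\nprint(predicted)  # e.g. ['100', '101'] for the toy probabilities above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},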
{
"text": "ILP has a complementary profile to HLE in enforcing hard constraints on the output, without propagating the errors back to representation learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrating Hierarchical Class Structure",
"sec_num": "4"
},
{
"text": "Setup. We remove very infrequent subcategories in the dataset by applying a threshold of 20 instances. Smaller categories are merged with the preexisting subcategory x99, which exists for each supercategory as a 'catch-all' category for outlier cases. After filtering, there are 8 super-and 72 subcategories left in the dataset (cf. Table 1b) . We experiment with four model variations: Plain (base claim classifiers as in Section 3); ILP and HLE as described in Section 4; and ILP+HLE. We split our dataset to train (90%) test (10%) splits and run the experiments on our own cluster with two Nvidia GeForce 1080GTX Ti GPUs. For each experiment, we perform grid search guided by cross-validation on the training set to find the best hyperparameters. We report Precision, Recall and F1 scores weighted over all subcategories.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 342,
"text": "Table 1b)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
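{
"text": "These weighted scores can be computed as in the short sketch below; whether the authors used scikit-learn is our assumption, but average='weighted' matches the reported aggregation over subcategories.\n\nimport numpy as np\nfrom sklearn.metrics import precision_recall_fscore_support\n\n# y_true, y_pred: multi-hot matrices over the 72 subcategories (toy data here).\ny_true = np.array([[1, 0, 1], [0, 1, 0]])\ny_pred = np.array([[1, 0, 0], [0, 1, 1]])\np, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='weighted', zero_division=0)\nprint('P=%.2f R=%.2f F1=%.2f' % (p, r, f1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},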
{
"text": "Main Results. Table 2 summarizes the results of our experiments. In the 'plain' setting, LSTM and BiLSTM perform significantly worse than BiL-STM+Attention and BERT. This finding is consistent with the generally observed benefit of attention and previous results by Pad\u00f3 et al. (2019) .",
"cite_spans": [
{
"start": 266,
"end": 284,
"text": "Pad\u00f3 et al. (2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "The addition of ILP (2nd column) leads to inconsistent changes in precision but always yields better Recall and F-Scores. LSTM and BiLSTM still perform significantly worse than the other two models. When we switch to HLE, all metrics for all models are boosted significantly, showing that parameter sharing via the super/sub-category co-occurrence matrix is a successful across the board. We observe the largest improvement for BERT, where HLE yields an improvement of 12 points in F1, and leads to the overall highest Precision (0.75).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "The last column (HLE + ILP) shows a substantial complementarity of the two methods: models consistently improve over both the HLE only and ILP only setting. Specifically, HLE+ILP models achieves better Recall scores than HLE models (+7 points on average) and better Precision (+8 points on average) scores than ILP models. The effect is least pronounced for the best architecture (BERT); nevertheless, BERT with HLE and ILP achieves the overall highest Recall (0.59) and F-Score (0.60), corresponding to an improvement of 13 points F1 compared to the 'plain' version. The fact that the F1 boost is fueled mainly by Recall is particularly promising because optimizing for Recall is the best strategy when NLP tools are employed in semiautomatic annotation (Ganchev et al., 2007; Ambati et al., 2011) .",
"cite_spans": [
{
"start": 755,
"end": 777,
"text": "(Ganchev et al., 2007;",
"ref_id": "BIBREF6"
},
{
"start": 778,
"end": 798,
"text": "Ambati et al., 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "Frequency Band Analysis. As discussed in the introductory section, fine-grained classification struggles in particular with infrequent classes. We therefore ask how hierarchical class structure affects performance in relation to frequency. To do so, we analyze the performance of the best architecture (BERT), splitting the fine-grained categories into three equal-sized frequency bands. 4 The results in Table 3 show that the prediction quality of plain BERT differs significantly across frequency bands. It fails badly in the low freq band (F1=0.1) while doing a fair job in the mid and high bands (F1=0.42 and 0.57, respectively). Again, we see consistent improvements for both ILP and HLE, but the improvements are more substantial for HLE, in particular for the low-freq band (+27 point F1). Combining HLE and ILP further increases Recall, but reduces Precision somewhat. 5 In sum, we observe that both ILP and HLE improve fine-grained classification. The parameter sharing introduced by HLE particularly helps the lowest-frequency categories and increases both Precision and Recall. ILP generally boosts Recall by enforcing that both super-and a subcategories need to be predicted. There appears to be a midfrequency \"sweet spot\" where this is particularly effective: Less frequent, and the probability estimates are not reliable enough; more frequent, the Precision-Recall trade-off is not worth it.",
"cite_spans": [
{
"start": 388,
"end": 389,
"text": "4",
"ref_id": null
},
{
"start": 877,
"end": 878,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 405,
"end": 412,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
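{
"text": "A sketch of the band split and per-band scoring (equal-sized thirds by training frequency; the helper name and the toy numbers are ours).\n\nimport numpy as np\nfrom sklearn.metrics import f1_score\n\ndef frequency_bands(freqs):\n    # Sort subcategories by training frequency and cut into three equal thirds.\n    order = np.argsort(freqs)\n    k = len(freqs) // 3\n    return order[:k], order[k:2 * k], order[2 * k:]\n\nfreqs = np.array([22, 260, 45, 70, 31, 120])  # toy per-subcategory frequencies\ny_true = np.random.randint(0, 2, (50, 6))\ny_pred = np.random.randint(0, 2, (50, 6))\nfor name, idx in zip(['low', 'mid', 'high'], frequency_bands(freqs)):\n    f1 = f1_score(y_true[:, idx], y_pred[:, idx], average='weighted', zero_division=0)\n    print('%s-freq band: F1=%.2f' % (name, f1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},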
{
"text": "Qualitative Considerations. Finally, we investigate which subcategories benefit most from HLE and ILP in our best model (BERT). Table 4 again shows complementarity between HLE and ILP, indicating that a better combination of the two methods could lead to further improvements. HLE+ILP overlaps largely with HLE, mirroring the larger impact of HLE. Analysis of these classes shows that they belong to the mid and low frequency bands.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "However, not all low and mid frequency classes profit equally. To explain this, we note that the finegrained classes in the migration ontology differ substantially with regard to concreteness: While the high-level category 'Foreign policy' (5xx) contains relatively concrete sub-categories ('Enforcing Dublin III regulations' or 'Expanding the list of safe countries of origin'), the supercategory 'Society' (7xx) mostly consists of less manifest policy measures ('Uphold Human Rights', 'Oppose Xenophobia'). With regard to that distinction, the highest-gain subcategories are of the concrete kind (cf. Table 1): 106 ('Border defence'), 303 ('Forced integration'), 801 ('Constitutional law'), 807 ('Reducing bureaucracy'), 405 ('Counterterrorism'). Conversely, we do not find any subcategories of the less concrete supercategory 700 ('Society'). Table 3 : Detailed results for BERT architecture: break down by frequency bands of fine-grained classes (highest F1 score for each frequency band bolded).",
"cite_spans": [],
"ref_spans": [
{
"start": 846,
"end": 853,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "Highest Improvement ILP 204, 499, 507, 508, 803 HLE 106, 303, 314, 801, 807 ILP+HLE 106, 303, 405, 801, 807 ",
"cite_spans": [
{
"start": 20,
"end": 28,
"text": "ILP 204,",
"ref_id": null
},
{
"start": 29,
"end": 33,
"text": "499,",
"ref_id": null
},
{
"start": 34,
"end": 38,
"text": "507,",
"ref_id": null
},
{
"start": 39,
"end": 43,
"text": "508,",
"ref_id": null
},
{
"start": 44,
"end": 56,
"text": "803 HLE 106,",
"ref_id": null
},
{
"start": 57,
"end": 61,
"text": "303,",
"ref_id": null
},
{
"start": 62,
"end": 66,
"text": "314,",
"ref_id": null
},
{
"start": 67,
"end": 71,
"text": "801,",
"ref_id": null
},
{
"start": 72,
"end": 88,
"text": "807 ILP+HLE 106,",
"ref_id": null
},
{
"start": 89,
"end": 93,
"text": "303,",
"ref_id": null
},
{
"start": 94,
"end": 98,
"text": "405,",
"ref_id": null
},
{
"start": 99,
"end": 103,
"text": "801,",
"ref_id": null
},
{
"start": 104,
"end": 107,
"text": "807",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": null
},
{
"text": "This paper has identified automatic fine-grained claim classification as a crucial, but underaddressed, component of political discourse analysis. We have demonstrated that hierarchical class structure can be exploited to lift fine-grained claim classification to a usable level, showing robust improvements even for transformer architectures and in particular for low-frequency claim categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Addressing the low-frequency issue is particularly relevant in the broader context of the goals of political science. Political discourse unfolds over time, and every prominent issue starts out as infrequent. The true dynamics of debates can only be captured if the classifiers are able to pick up the less salient categories (Koopmans and Statham, 1999; Kossinets, 2006) . Future work involves investigating these concerns on a wider range of datasets, as well as evaluating fine-grained claim classification for semi-automatic discourse network construction. Table 5 presents Precision, Recall and F1 scores of models broken down for low, mid and high frequency bands. We observe similar patterns with other three models: (1) Prediction quality of models in plain setting differ significantly across frequency bands and all three models perform significantly worse on low frequency band and (2) Extending models with HLE and ILP leads to significantly better F1 scores on all frequency bands.",
"cite_spans": [
{
"start": 326,
"end": 354,
"text": "(Koopmans and Statham, 1999;",
"ref_id": "BIBREF9"
},
{
"start": 355,
"end": 371,
"text": "Kossinets, 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 561,
"end": 568,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We calculate Spearman's correlation coefficient in order to investigate the relationship between amount of available data for each subcategory and performance change of BERT model across settings further. For that, we measure the difference between BERT model's subcategory performances in plan and other settings as well as amount of data available for each subcategory. Table 6 shows which subcategory belongs to which frequency band and Table 7 shows Spearman's correlation coefficients. We observe high negative values almost always indicating that there is a strong negative correlation between the amount of data exist for a subcategory and amount of change in performance which means that infrequent classes gain most from ILP and HLE. Table 7 : Spearman's correlation coefficient results between change in evaluation metrics and subcategory size for BERT model.",
"cite_spans": [],
"ref_spans": [
{
"start": 372,
"end": 379,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 440,
"end": 447,
"text": "Table 7",
"ref_id": null
},
{
"start": 743,
"end": 750,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Results Details: Correlation Analyses",
"sec_num": null
},
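{
"text": "A minimal sketch of this correlation analysis with SciPy; the per-subcategory numbers are invented for illustration.\n\nfrom scipy.stats import spearmanr\n\n# Hypothetical values per subcategory: training-set size and F1 change (plain vs. HLE).\nsizes = [120, 85, 60, 44, 31, 22]\ndelta_f1 = [0.02, 0.05, 0.08, 0.15, 0.21, 0.27]\nrho, pval = spearmanr(sizes, delta_f1)\nprint('Spearman rho=%.2f (p=%.3f)' % (rho, pval))  # strongly negative for this toy data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Results Details: Correlation Analyses",
"sec_num": null
},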
{
"text": "In the LSTM model, we set the number of hidden units to 500. We train 300-dimensional Fast-Text word embeddings on a corpus consisting of German Newspapers and use them as the input to LSTM. We use Adam with learning rate of 0.003 as optimizer. Batch size and number of epochs are set to 16 and 25 respectively. In the BiLSTM model, we the set number of units to 500 in each direction and batch size to 16. The same 300-dimensional word embeddings as in the LSTM are used. The model is trained with Adam optimizer and a learning rate of 0.003 for 25 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Training Details",
"sec_num": null
},
{
"text": "In the BiLSTM+Attn model, we used the attention mechanism variant described in Shimaoka et al. (2017b) . We set number of units to 500 in each direction and batch size to 16. We use the same 300-dimensional word embeddings used in LSTM and BiLSTM models, and train model for 20 epochs using Adam optimizer with learning rate of 0.003. For the BERT model, we use a cased BERT variant 6 that was trained specifically for German with default parameters for the number of attention heads, hidden layers, and the number of hidden units are 12, 12, and 768, respectively. During finetuning, we use the Adam optimizer with learning rates of 5e-5, \u03b2 1 = 0.9, \u03b2 2 = 0.999, and set the maximum sequence length to 200, batch size to 16 and norm of maximum gradient to 1.0 and trained for 20 epochs. Table 8 and Table 9 show the number of parameters in each model and average time required to Parameter Numbers Plain HLE LSTM 4,731,080 4,731,500 BiLSTM 6,375,080 6,376,000 BiLSTM Att 6, 475, 180 6, 476, 100 BERT 109, 142, 864 109, 143, 552 Hyperparameter search details We perform grid search for hyperparameter optimization and use the hyperparameters leading highest average F1 score during 5-Fold cross validation. Following lower and upper bounds have been applied during search for each hyperparameter: learning Rate [1e-4, 5e-2], epoch: [5, 25] , batch size: [16, 32] . Figure 1 depicts the number of instances for each category. ",
"cite_spans": [
{
"start": 79,
"end": 102,
"text": "Shimaoka et al. (2017b)",
"ref_id": "BIBREF19"
},
{
"start": 982,
"end": 984,
"text": "6,",
"ref_id": null
},
{
"start": 985,
"end": 989,
"text": "475,",
"ref_id": null
},
{
"start": 990,
"end": 996,
"text": "180 6,",
"ref_id": null
},
{
"start": 997,
"end": 1001,
"text": "476,",
"ref_id": null
},
{
"start": 1002,
"end": 1015,
"text": "100 BERT 109,",
"ref_id": null
},
{
"start": 1016,
"end": 1020,
"text": "142,",
"ref_id": null
},
{
"start": 1021,
"end": 1029,
"text": "864 109,",
"ref_id": null
},
{
"start": 1030,
"end": 1034,
"text": "143,",
"ref_id": null
},
{
"start": 1035,
"end": 1038,
"text": "552",
"ref_id": null
},
{
"start": 1342,
"end": 1345,
"text": "[5,",
"ref_id": null
},
{
"start": 1346,
"end": 1349,
"text": "25]",
"ref_id": null
},
{
"start": 1364,
"end": 1368,
"text": "[16,",
"ref_id": null
},
{
"start": 1369,
"end": 1372,
"text": "32]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 788,
"end": 795,
"text": "Table 8",
"ref_id": "TABREF9"
},
{
"start": 800,
"end": 807,
"text": "Table 9",
"ref_id": "TABREF10"
},
{
"start": 1375,
"end": 1383,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "C Training Details",
"sec_num": null
},
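{
"text": "A sketch of this grid search; train_and_eval is a hypothetical stand-in for training one model on a fold and returning its F1, and the intermediate grid values are our illustrative choice within the stated bounds.\n\nfrom itertools import product\nimport numpy as np\nfrom sklearn.model_selection import KFold\n\ngrid = {'lr': [1e-4, 3e-3, 5e-2], 'epochs': [5, 15, 25], 'batch_size': [16, 32]}\ndata = np.arange(3445)  # stand-in for the training-set indices\n\ndef train_and_eval(params, train_idx, dev_idx):\n    # Hypothetical helper: train a classifier with these hyperparameters\n    # on train_idx and return its weighted F1 on dev_idx.\n    return 0.0\n\ndef cv_f1(params):\n    kf = KFold(n_splits=5, shuffle=True, random_state=0)\n    return np.mean([train_and_eval(params, tr, dev) for tr, dev in kf.split(data)])\n\nbest = max((dict(zip(grid, vals)) for vals in product(*grid.values())), key=cv_f1)\nprint(best)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Training Details",
"sec_num": null
},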
{
"text": "For details on the availability of the dataset and code used in our experiments, see mardy-spp.github.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Further details regarding the architecture and training parameters can be found in the appendix.3 https://deepset.ai/german-bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Thresholds: high-frequency (265\u2265f\u2265 67), mid-frequency (65\u2265f\u2265 40) and low-frequency (20\u2265f\u2265 39). Complete lists of the categories in the frequency bands and detailed results of other models are available inTable 5 and Table 6in the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We confirmed the relationship between frequency and performance with a correlation analysis to rule out a binning artifact. SeeTable 7in the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://deepset.ai/german-bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We acknowledge funding by Deutsche Forschungsgemeinschaft (DFG) through MARDY (Modeling Argumentation Dynamics) within SPP RA-TIO and by Bundesministerium f\u00fcr Bildung und Forschung (BMBF) through E-DELIB (Powering up e-deliberation: towards AI-supported moderation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Bands for all Architectures",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Results Details: Results by Frequency",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Error detection for treebank validation",
"authors": [
{
"first": "Bharat",
"middle": [
"Ram"
],
"last": "Ambati",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Mridul",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Samar",
"middle": [],
"last": "Husain",
"suffix": ""
},
{
"first": "Dipti Misra",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 9th Workshop on Asian Language Resources",
"volume": "",
"issue": "",
"pages": "23--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bharat Ram Ambati, Rahul Agarwal, Mridul Gupta, Samar Husain, and Dipti Misra Sharma. 2011. Er- ror detection for treebank validation. In Proceedings of the 9th Workshop on Asian Language Resources, pages 23-30, Chiang Mai, Thailand. Asian Federa- tion of Natural Language Processing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Open extraction of fine-grained political statements",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "76--85",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1008"
]
},
"num": null,
"urls": [],
"raw_text": "David Bamman and Noah A. Smith. 2015. Open extraction of fine-grained political statements. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 76- 85, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Global learning of typed entailment rules",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "610--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 610-619, Portland, Ore- gon, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "2020. Taming pretrained transformers for extreme multi-label text classification",
"authors": [
{
"first": "Wei-Cheng",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Hsiang-Fu",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Inderjit",
"middle": [
"S"
],
"last": "Dhillon",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery; Data Mining, KDD '20",
"volume": "",
"issue": "",
"pages": "3163--3171",
"other_ids": {
"DOI": [
"10.1145/3394486.3403368"
]
},
"num": null,
"urls": [],
"raw_text": "Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yim- ing Yang, and Inderjit S. Dhillon. 2020. Tam- ing pretrained transformers for extreme multi-label text classification. In Proceedings of the 26th ACM SIGKDD International Conference on Knowl- edge Discovery; Data Mining, KDD '20, page 3163-3171, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semiautomated named entity annotation",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Mandel",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "53--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuzman Ganchev, Fernando Pereira, Mark Mandel, Steven Carroll, and Peter White. 2007. Semi- automated named entity annotation. In Proceedings of the Linguistic Annotation Workshop, pages 53-56, Prague, Czech Republic. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Computational analysis of political texts: Bridging research efforts across communities",
"authors": [
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Nanni",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "18--23",
"other_ids": {
"DOI": [
"10.18653/v1/P19-4004"
]
},
"num": null,
"urls": [],
"raw_text": "Goran Glava\u0161, Federico Nanni, and Simone Paolo Ponzetto. 2019. Computational analysis of political texts: Bridging research efforts across communities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, pages 18-23, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hybrid speech recognition with deep bidirectional LSTM",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Abdel-Rahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
}
],
"year": 2013,
"venue": "2013 IEEE workshop on automatic speech recognition and understanding",
"volume": "",
"issue": "",
"pages": "273--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Navdeep Jaitly, and Abdel-rahman Mo- hamed. 2013. Hybrid speech recognition with deep bidirectional LSTM. In 2013 IEEE workshop on automatic speech recognition and understanding, pages 273-278. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Political Claims Analysis: Integrating Protest Event and Political Discourse Approaches. Mobilization: An International Quarterly",
"authors": [
{
"first": "Ruud",
"middle": [],
"last": "Koopmans",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Statham",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "4",
"issue": "",
"pages": "203--221",
"other_ids": {
"DOI": [
"10.17813/maiq.4.2.d7593370607l6756"
]
},
"num": null,
"urls": [],
"raw_text": "Ruud Koopmans and Paul Statham. 1999. Political Claims Analysis: Integrating Protest Event and Po- litical Discourse Approaches. Mobilization: An In- ternational Quarterly, 4(2):203-221.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Effects of missing data in social networks",
"authors": [
{
"first": "Gueorgi",
"middle": [],
"last": "Kossinets",
"suffix": ""
}
],
"year": 2006,
"venue": "Social Networks",
"volume": "28",
"issue": "3",
"pages": "247--268",
"other_ids": {
"DOI": [
"10.1016/j.socnet.2005.07.002"
]
},
"num": null,
"urls": [],
"raw_text": "Gueorgi Kossinets. 2006. Effects of missing data in social networks. Social Networks, 28(3):247-268.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "DEbateNet-mig15: Tracing the 2015 immigration debate in Germany over time",
"authors": [
{
"first": "Gabriella",
"middle": [],
"last": "Lapesa",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Blessing",
"suffix": ""
},
{
"first": "Nico",
"middle": [],
"last": "Blokker",
"suffix": ""
},
{
"first": "Erenay",
"middle": [],
"last": "Dayanik",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Haunss",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "919--927",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriella Lapesa, Andre Blessing, Nico Blokker, Er- enay Dayanik, Sebastian Haunss, Jonas Kuhn, and Sebastian Pad\u00f3. 2020. DEbateNet-mig15: Tracing the 2015 immigration debate in Germany over time. In Proceedings of LREC, pages 919-927, Online.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Policy Debates as Dynamic Networks: German Pension Politics and Privatization Discourse. Campus Verlag",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Leifeld",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Leifeld. 2016. Policy Debates as Dynamic Net- works: German Pension Politics and Privatization Discourse. Campus Verlag, Frankfurt/New York.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An empirical study on fine-grained named entity recognition",
"authors": [
{
"first": "Khai",
"middle": [],
"last": "Mai",
"suffix": ""
},
{
"first": "Thai-Hoang",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Minh",
"middle": [
"Trung"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Tuan",
"middle": [
"Duc"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Ryohei",
"middle": [],
"last": "Sasano",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "711--722",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khai Mai, Thai-Hoang Pham, Minh Trung Nguyen, Tuan Duc Nguyen, Danushka Bollegala, Ryohei Sasano, and Satoshi Sekine. 2018. An empirical study on fine-grained named entity recognition. In Proceedings of the 27th International Conference on Computational Linguistics, pages 711-722, Santa Fe, New Mexico, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Who sides with whom? towards computational construction of discourse networks for political debates",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Andre",
"middle": [],
"last": "Blessing",
"suffix": ""
},
{
"first": "Nico",
"middle": [],
"last": "Blokker",
"suffix": ""
},
{
"first": "Erenay",
"middle": [],
"last": "Dayanik",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Haunss",
"suffix": ""
},
{
"first": "Jonas",
"middle": [],
"last": "Kuhn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2841--2847",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1273"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Pad\u00f3, Andre Blessing, Nico Blokker, Ere- nay Dayanik, Sebastian Haunss, and Jonas Kuhn. 2019. Who sides with whom? towards computa- tional construction of discourse networks for politi- cal debates. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 2841-2847, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semantic role labeling via integer linear programming inference",
"authors": [
{
"first": "Vasin",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Dav",
"middle": [],
"last": "Zimak",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1346--1352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasin Punyakanok, Dan Roth, Wen-tau Yih, and Dav Zimak. 2004. Semantic role labeling via integer linear programming inference. In COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics, pages 1346-1352, Geneva, Switzerland. COLING.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Incremental integer linear programming for non-projective dependency parsing",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "129--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel and James Clarke. 2006. Incremen- tal integer linear programming for non-projective de- pendency parsing. In Proceedings of the 2006 Con- ference on Empirical Methods in Natural Language Processing, pages 129-137, Sydney, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Linear and Integer Programming",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Schrijver",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Schrijver. 1984. Linear and Integer Pro- gramming. John Wiley & Sons, New York.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural architectures for fine-grained entity type classification",
"authors": [
{
"first": "Sonse",
"middle": [],
"last": "Shimaoka",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1271--1280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017a. Neural architectures for fine-grained entity type classification. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1271-1280, Valencia, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural architectures for fine-grained entity type classification",
"authors": [
{
"first": "Sonse",
"middle": [],
"last": "Shimaoka",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1271--1280",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2017b. Neural architectures for fine-grained entity type classification. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1271-1280, Valencia, Spain. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Claim distribution of subcategories. Green dotted line: boundary between high and mid frequency bands. Dark blue line: boundary between low and mid bands.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>: (a): Claim distribution by supercategories: Code; Label; frequency (f ); number of subcategories (n.sub);</td></tr><tr><td>mean subcategory frequency with SD (mean f.sub). (b): Claim distribution for each supercategory after very</td></tr><tr><td>infrequent classes are merged. low/mid/high represents the distribution of subcategory frequencies.</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>Freq band</td><td/><td>plain</td><td/><td/><td>ILP</td><td/><td/><td>HLE</td><td/><td colspan=\"2\">HLE + ILP</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td colspan=\"13\">Low freq 0.10 0.10 0.10 0.18 0.14 0.15 0.58 0.31 0.37 0.48 0.31 0.35</td></tr><tr><td>Mid freq</td><td colspan=\"12\">0.58 0.36 0.42 0.65 0.47 0.50 0.77 0.55 0.62 0.71 0.63 0.65</td></tr><tr><td colspan=\"13\">High freq 0.73 0.51 0.57 0.60 0.58 0.58 0.78 0.56 0.62 0.67 0.63 0.64</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Test results (weighted averages for fine-grained claim classification) for four architectures and two methods to integrate class structure (integer linear programming, hierarchical label encoding). Best results bolded."
},
"TABREF4": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Subcategories that gain most in F1 score"
},
"TABREF6": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Detail results for all architectures by frequency band"
},
"TABREF8": {
"num": null,
"content": "<table><tr><td/><td>PAIR</td><td>P</td><td>R</td><td>F</td></tr><tr><td/><td>Plain -ILP</td><td colspan=\"3\">-0.20 -0.10 -0.20</td></tr><tr><td>BERT</td><td>Plain -HLE</td><td colspan=\"3\">-0.24 -0.31 -0.29</td></tr><tr><td/><td colspan=\"4\">Plain -(ILP+HLE) -0.28 -0.29 -0.32</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Lists of the categories in the frequency bands"
},
"TABREF9": {
"num": null,
"content": "<table><tr><td/><td>Runtime (in Minutes)</td></tr><tr><td>LSTM</td><td>1.5</td></tr><tr><td>BiLSTM</td><td>2.2</td></tr><tr><td>BiLSTM Att</td><td>4.5</td></tr><tr><td>BERT</td><td>32.0</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Number of parameters in each model."
},
"TABREF10": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Average runtime required to train each model train each model used in our experiments respectively."
}
}
}
}