ACL-OCL / Base_JSON /prefixN /json /nlppower /2022.nlppower-1.4.json
{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:30:57.520957Z"
},
"title": "Why only Micro-F 1 ? Class Weighting of Measures for Relation Classification",
"authors": [
{
"first": "David",
"middle": [],
"last": "Harbecke",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Leonhard",
"middle": [],
"last": "Hennig",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Alt",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Humboldt Universit\u00e4t zu Berlin",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F 1 , macro-F 1 or AUC. In this work, we analyze weighting schemes, such as micro and macro, for imbalanced datasets. We introduce a framework for weighting schemes, where existing schemes are extremes, and two new intermediate schemes. We show that reporting results of different weighting schemes better highlights strengths and weaknesses of a model.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F 1 , macro-F 1 or AUC. In this work, we analyze weighting schemes, such as micro and macro, for imbalanced datasets. We introduce a framework for weighting schemes, where existing schemes are extremes, and two new intermediate schemes. We show that reporting results of different weighting schemes better highlights strengths and weaknesses of a model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Relation classification (RC) models are typically compared with either micro-F 1 or macro-F 1 , often without discussing the measure's properties (see e.g. Zhang et al., 2017; Yao et al., 2019) . Each measure highlights different aspects of model performance (Sun et al., 2009) . However, using an inappropriate measure can lead to the preference of an unsuitable model (Branco et al., 2016) , e.g., tasks with an imbalanced or long-tailed class distribution. We argue that model evaluation should better reflect this, particularly as rare phenomena become more important in NLP (Rogers, 2021) .",
"cite_spans": [
{
"start": 156,
"end": 175,
"text": "Zhang et al., 2017;",
"ref_id": "BIBREF43"
},
{
"start": 176,
"end": 193,
"text": "Yao et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 259,
"end": 277,
"text": "(Sun et al., 2009)",
"ref_id": "BIBREF38"
},
{
"start": 370,
"end": 391,
"text": "(Branco et al., 2016)",
"ref_id": "BIBREF5"
},
{
"start": 579,
"end": 593,
"text": "(Rogers, 2021)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For instance, popular datasets for RC, such as TACRED (Zhang et al., 2017) , NYT (Riedel et al., 2010) , ChemProt (Kringelum et al., 2016) , Do-cRED (Yao et al., 2019) , and SemEval-2010 Task 8 (Hendrickx et al., 2010) , often exhibit a highly imbalanced label distribution (see Table 1 and, e.g., the TACRED class distribution 1 ). The main reasons are the natural data imbalance, i.e. the occurrence frequency of relation mentions in text, as well as the incompleteness of knowledge graphs like Freebase (Bollacker et al., 2008) used in distantly supervised RC. For example, 58% of the relations in the NYT dataset (Riedel et al., 2010) have fewer than 100 training instances (Han et al., 2018) , and the most frequent relation location/contains is assigned to 48.3% of the positive test instances. However, for applying RC to real-world problems, it is especially important to discover instances of relations that are not yet covered well in a given knowledge base. Table 1 lists statistics of the aforementioned RC datasets, including their perplexity and common evaluation measures. TACRED and the original version of NYT contain predominantly negative samples 2 . All datasets, except for undirectional SemEval, exhibit a large ratio between most frequent and least frequent positive class in the test set. The perplexity of test set distributions is also much lower than the relation count for all datasets except SemEval. Reporting only a single measure therefore cannot exhaustively capture model performance on these datasets, especially for the long tail of relation types. For example, Alt et al. (2019) show that on the NYT dataset, AUC scores and P-R-Curves of several state-of-the-art models are heavily skewed towards the two most frequent relation types location/contains and person/nationality. TACRED, ChemProt, DocRED and SemEval results are usually only reported in micro-F 1 , which does not consider class membership.",
"cite_spans": [
{
"start": 54,
"end": 74,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF43"
},
{
"start": 81,
"end": 102,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF31"
},
{
"start": 114,
"end": 138,
"text": "(Kringelum et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 149,
"end": 167,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF41"
},
{
"start": 194,
"end": 218,
"text": "(Hendrickx et al., 2010)",
"ref_id": "BIBREF12"
},
{
"start": 506,
"end": 530,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 617,
"end": 638,
"text": "(Riedel et al., 2010)",
"ref_id": "BIBREF31"
},
{
"start": 678,
"end": 696,
"text": "(Han et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 1598,
"end": 1615,
"text": "Alt et al. (2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 279,
"end": 286,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 969,
"end": 976,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
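The class-distribution perplexity used in Table 1 is two to the power of the Shannon entropy of the empirical label distribution. A minimal sketch (the function name is our illustrative choice):

```python
import math
from collections import Counter

def class_perplexity(labels):
    """Perplexity 2**H(p) of the empirical label distribution.

    Equals the number of classes for a fully balanced dataset and
    drops towards 1 as the distribution concentrates on few classes.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return 2 ** entropy
```

A balanced four-class test set has perplexity 4, while a heavily skewed one falls well below the relation count, matching the pattern reported in Table 1.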
{
"text": "In this paper, we introduce a framework for weighting schemes of measures to address these evaluation deficits. We present and motivate two new weighting schemes that are in between the extremes of micro-and macro-weighting. We demonstrate these, micro-, class-weighted-and macro-F 1 on TACRED and SemEval with two popular models each. We show that more information about models can be inferred from our results and point out what further steps should be taken to improve evaluation in relation classification. Perplexity of the classes is given for the test set, with and without negative samples. This value would be equal to #Rel for a fully balanced dataset. Ratio is between the counts of the most and least frequent positive class of the test set. We also list the popular evaluation methods. The upper line for NYT indicates the original dataset by Riedel et al. (2010) , the lower line is the frequently used version by Hoffmann et al. (2011) . The upper SemEval entry considers the direction between the nominals, the lower one does not.",
"cite_spans": [
{
"start": 856,
"end": 876,
"text": "Riedel et al. (2010)",
"ref_id": "BIBREF31"
},
{
"start": 928,
"end": 950,
"text": "Hoffmann et al. (2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first give background on the F 1 -score and existing F 1 weighting schemes. We present our framework of weighting schemes. We introduce two new weighting schemes. Finally, we outline statistical tests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "The F \u03b2 -score (Rijsbergen, 1979; Lewis and Gale, 1994) calculates a score in the interval [0, 1] through the formula",
"cite_spans": [
{
"start": 15,
"end": 33,
"text": "(Rijsbergen, 1979;",
"ref_id": null
},
{
"start": 34,
"end": 55,
"text": "Lewis and Gale, 1994)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F \u03b2 = (1 + \u03b2 2 ) \u2022 T P (1 + \u03b2 2 ) \u2022 T P + \u03b2 2 \u2022 F N + F P",
"eq_num": "(1)"
}
],
"section": "Background",
"sec_num": "2.1"
},
{
"text": "with the true positives (TP), false negatives (FN) and false positives (FP) of a confusion matrix. This definition is identical to the weighted harmonic mean of precision and recall. The positive coefficient \u03b2 is used as a trade-off between the error types FN and FP. If there is no preference known or pre-determined, this coefficient is usually set to 1. In multi-class classification the confusion matrix can either be calculated once for the whole dataset, or separately for each class. The former method yields micro-F 1 . Micro weighting does not consider class membership for any test sample. If the predictions and labels of all classes are considered, micro-F 1 is equal to accuracy, as the denominator in Eq. 1 is twice the dataset. In RC, the TP of the negative class are usually not considered, in which case micro-F 1 is not equal to accuracy. For the F -score, micro is the only weighting where the impact of a sample on the score is not conditioned on the model performance on the rest of the class (Forman and Scholz, 2010). If the test set is considered to have a representative data distribution, the microweighted score is a frequentist evaluation of model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2.1"
},
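Eq. 1 and the RC-style micro-F1 described above can be sketched as follows; the function names and the NA handling are our illustrative choices, not the paper's code:

```python
def f_beta(tp, fp, fn, beta=1.0):
    """F_beta from confusion counts (Eq. 1); beta trades off FN against FP."""
    num = (1 + beta ** 2) * tp
    denom = num + beta ** 2 * fn + fp
    return num / denom if denom else 0.0

def micro_f1(gold, pred, negative_label="NA"):
    """RC-style micro-F1: pool TP/FP/FN over the whole dataset, but do not
    credit true positives of the negative class, so the score is not accuracy."""
    tp = sum(g == p and g != negative_label for g, p in zip(gold, pred))
    fp = sum(g != p and p != negative_label for g, p in zip(gold, pred))
    fn = sum(g != p and g != negative_label for g, p in zip(gold, pred))
    return f_beta(tp, fp, fn)
```

If the `g != negative_label` filters were dropped, the pooled score would reduce to accuracy, as the text notes.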
{
"text": "There exist two other ways to calculate and combine F 1 -scores for a multi-class problem. First, multi-class F 1 -scores can be calculated for each class and then a weighted average class score is taken. Second, precision and recall scores for each class can be calculated and weighted, then the harmonic mean of weighted precision and weighted recall is taken. Opitz and Burst (2019) show that the first method is more robust and less favorable to biased classifiers. We use this method in our proposed framework.",
"cite_spans": [
{
"start": 363,
"end": 385,
"text": "Opitz and Burst (2019)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2.1"
},
{
"text": "(Class-)weighted-F 1 is similar to micro-F 1 . F 1scores are calculated for each class individually and then weighted by the class count. Thus, both schemes approximately weigh all samples equally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2.1"
},
{
"text": "Macro weighting gives an equal weight for each class with positive sample count regardless of the specific sample count. This gives information about model performance if class imbalance is not considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2.1"
},
{
"text": "In general, there is a correspondence between training loss and evaluation measure (Li et al., 2020) . One disadvantage of multiple weighting schemes is that each weighting scheme can be optimized for. To achieve a better score for a specific weighting, class weights could be set proportional to the weighting of the class during training. How-",
"cite_spans": [
{
"start": 83,
"end": 100,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2.1"
},
{
"text": "Micro -calculation over dataset, class membership is not considered Weighted n i weighting all classes by instance count, similar to micro Dodrans",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formula Focus",
"sec_num": null
},
{
"text": "n i 3/4 evaluating closer to generalization performance Entropy \u2212n i \u2022 log 2 (n i / j n j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formula Focus",
"sec_num": null
},
{
"text": "reducing impact of data distribution on evaluation Macro 1 equal weighting of all classes Table 2 : Weighting schemes for evaluation of multi-class classification. n i indicates the count of elements for class i and the Formula column shows the weight the class is assigned before normalization. The metrics are loosely ordered from top to bottom with the higher entries focusing more on instances and the lower entries focusing more on class membership. This usually corresponds to the model score, it is rare that models are better on classes with fewer samples. Methods in bold are proposed by us.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Formula Focus",
"sec_num": null
},
{
"text": "ever, we argue that model results should always be presented with multiple weightings for one dataset. Especially, when comparing different models all weightings should be reported for each model. This can clarify whether a model is good for all weightings or just micro or macro. Furthermore, with datasets that are currently evaluated with different weightings, it is easier to identify whether a model is specifically good for a dataset or for a weighting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formula Focus",
"sec_num": null
},
{
"text": "We discuss a framework that summarizes the rules we give to class-weighting schemes. Then we introduce two new class weighting schemes. All discussed weighting schemes can be found in Table 2 . They are independent of the measure that is used to calculate a score for each class.",
"cite_spans": [],
"ref_spans": [
{
"start": 184,
"end": 191,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Framework for Weighting Schemes",
"sec_num": "2.2"
},
{
"text": "(Class-)weighted and macro weighting are the extremes of \"degressive proportionality\" 3 or \"allocation functions\" (S\u0142omczy\u0144ski and\u017byczkowski, 2012) . These are, e.g., used by the European Parliament to allocate seats to member nations depending on the population of the nation. They state that allocation should be monotonic increasing (see D1) and proportionally decreasing (see D2). To adopt this to a weighting scheme for multi-class evaluation, we add a normalizing desideratum that determines the sum of weights over all classes to be 1 (see D0).",
"cite_spans": [
{
"start": 114,
"end": 147,
"text": "(S\u0142omczy\u0144ski and\u017byczkowski, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Framework for Weighting Schemes",
"sec_num": "2.2"
},
{
"text": "Let n i > 0 be the count of samples of class i and w i \u2265 0 the weight assigned to the score of class i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework for Weighting Schemes",
"sec_num": "2.2"
},
{
"text": "3 https://eur-lex.europa.eu/ legal-content/EN/TXT/HTML/?uri=CELEX: 32013D0312&from=EN#d1e114-57-1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework for Weighting Schemes",
"sec_num": "2.2"
},
{
"text": "We have the following desiderata:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework for Weighting Schemes",
"sec_num": "2.2"
},
{
"text": "i w i = 1 (D0) n i \u2265 n j \u21d2 w i \u2265 w j (D1) n i \u2265 n j \u21d2 w i n i \u2264 w j n j (D2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework for Weighting Schemes",
"sec_num": "2.2"
},
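A candidate weighting can be checked against D0–D2 mechanically; a minimal sketch under the definitions above (the helper name is ours):

```python
def satisfies_desiderata(counts, weights, tol=1e-9):
    """Check D0 (weights sum to 1), D1 (weight monotone in class count) and
    D2 (per-sample weight w_i/n_i anti-monotone in class count)."""
    pairs = list(zip(counts, weights))
    d0 = abs(sum(weights) - 1.0) < tol
    d1 = all(wi >= wj - tol
             for ni, wi in pairs for nj, wj in pairs if ni >= nj)
    d2 = all(wi / ni <= wj / nj + tol
             for ni, wi in pairs for nj, wj in pairs if ni >= nj)
    return d0 and d1 and d2
```

Macro (all weights equal) and class-weighted (weights proportional to counts) both pass, sitting at the equality boundaries of D1 and D2 respectively; a scheme that up-weights a rare class above a frequent one fails D1.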
{
"text": "Note that these desiderata do not restrict the scoring function that assigns scores s i to class i. The weighted evaluation score is then given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework for Weighting Schemes",
"sec_num": "2.2"
},
{
"text": "i w i s i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework for Weighting Schemes",
"sec_num": "2.2"
},
{
"text": "Macro: Macro weighting is one extreme by setting equality on the weights of desideratum D1. It implies that we do not consider the instance counts per class, but treat all classes equally. (Class-)weighted: Class-weighted is the other extreme by setting equality on the fraction of weights and counts in desideratum D2. It implies that we do not consider class constituency but weight all samples equally. Dodrans: Cao et al. (2019) demonstrate that their balanced generalization error bound for binary classifiers in the separable case can be optimized by setting margins proportional to n i \u22121/4 . They use this derivation from a limited theoretical scenario to improve the performance of several classifiers on imbalanced multi-class datasets. A term proportional to n i \u22121/4 is added in the loss function. While this added term is not directly transferable, we propose adapting this as a multiplicative factor in weighting classes for multi-class evaluation:",
"cite_spans": [
{
"start": 415,
"end": 432,
"text": "Cao et al. (2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighting Schemes",
"sec_num": "2.3"
},
{
"text": "w i \u221d n i \u22121/4 n i = n 3/4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighting Schemes",
"sec_num": "2.3"
},
{
"text": "i . We coin this weighting dodrans (\"three-quarter\"). Entropy: We also want to provide a weighting scheme that takes into consideration how hard a class is to predict. To this end, we propose weighting classes proportional to their term in the Shannon entropy formula",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighting Schemes",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "H(X) = \u2212 i P (x i ) log(P (x i )) (2) w i \u221d P (x i ) log(P (x i )).",
"eq_num": "(3)"
}
],
"section": "Weighting Schemes",
"sec_num": "2.3"
},
{
"text": "We interpret P (x i ) for class i to be the probability of it appearing in the dataset, s.t. P (x i ) = n i / j n j . Thus, without normalization the model score is now the sum over all classes of the model performance on a class times the difficulty and frequency of the class. Note, that this weighting scheme does not fulfil desideratum D1, since it is decreasing for classes i with P (x i ) > e \u22121 . This is related to the fact that classes that are too large become easier to predict for a model, the model can just default to predicting this class. It can also be desirable that a class does not gain relative importance once it contains more than half of the dataset. For RC, this often has little consequence. If we include NA in the normalization, it is usually the largest class and other classes are below an e-th of the dataset. Table 2 shows an overview of the mentioned schemes. Figure 1 displays the weights that these schemes assign to the classes of the TACRED test set. The weighted scheme is proportional to class counts and produces the most imbalanced weights. Dodrans and entropy produce slightly more balanced weights and differ from weighted for the most frequent classes. Macro considers all classes equally, regardless of class count.",
"cite_spans": [],
"ref_spans": [
{
"start": 841,
"end": 848,
"text": "Table 2",
"ref_id": null
},
{
"start": 893,
"end": 901,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Weighting Schemes",
"sec_num": "2.3"
},
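The four class-based schemes of Table 2 can be sketched as follows; micro needs no per-class weights, the function names are ours, and the entropy scheme assumes no class covers the whole dataset:

```python
import math

def scheme_weights(counts, scheme):
    """Normalized class weights for the schemes in Table 2, given
    positive-class instance counts n_i."""
    total = sum(counts)
    if scheme == "weighted":
        raw = [float(n) for n in counts]
    elif scheme == "dodrans":
        raw = [n ** 0.75 for n in counts]  # n_i * n_i**(-1/4) = n_i**(3/4)
    elif scheme == "entropy":
        raw = [-n * math.log2(n / total) for n in counts]
    elif scheme == "macro":
        raw = [1.0] * len(counts)
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    z = sum(raw)
    return [r / z for r in raw]

def combined_score(per_class_scores, counts, scheme):
    """Weighted sum of per-class scores (e.g. per-class F1) under a scheme."""
    weights = scheme_weights(counts, scheme)
    return sum(w * s for w, s in zip(weights, per_class_scores))
```

The weighting is independent of the per-class measure: any scoring function that yields one score per class can be plugged into `combined_score`.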
{
"text": "Currently, most RC works report a single score for each dataset. This can be the result from a single run or the median score from multiple runs. However, this does not allow to measure how large the difference between models is. Recently, analysis papers in NLP have recorded mean and standard deviation over multiple runs (Madhyastha and Jain, 2019; Zhou et al., 2020) , as this allows for statistical tests.",
"cite_spans": [
{
"start": 324,
"end": 351,
"text": "(Madhyastha and Jain, 2019;",
"ref_id": "BIBREF23"
},
{
"start": 352,
"end": 370,
"text": "Zhou et al., 2020)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Testing",
"sec_num": "2.4"
},
{
"text": "We first test for significance and report p-values. We employ Welch's t-test to test the hypothesis that the models have equal mean. Following Zhu et al. (2020), we also report Cohen's d effect size to determine how large the difference between models is for a specific measure. For two models with the Figure 1: TACRED relations and their respective weights under different weighting schemes. The lower x-axis denotes the normalized weight given to a relation for a scheme. The upper x-axis corresponds to the counts of the relations in the test set for the classweighted scheme. The y-axis denotes all positive relations. The negative NA class is not listed and has 12184 samples. The entropy and dodrans weighting scheme produce similar weights and are between weighted and macro weighting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Testing",
"sec_num": "2.4"
},
{
"text": "same number n > 1 of runs, Cohen's d is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Testing",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d = \u221a 2 \u00b5 1 \u2212 \u00b5 2 \u03c3 2 1 + \u03c3 2 2",
"eq_num": "(4)"
}
],
"section": "Statistical Testing",
"sec_num": "2.4"
},
{
"text": "with \u00b5 i and \u03c3 2 i being mean and variance of model i's scores. We do this, as two different models never perform exactly the same, i.e. significance just depends on the number of runs and we also want to score the difference between the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Testing",
"sec_num": "2.4"
},
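Eq. 4 can be computed directly from the per-run scores; a minimal standard-library sketch (the Welch's t-test itself is available, e.g., via scipy.stats.ttest_ind with equal_var=False):

```python
import math
from statistics import mean, variance

def cohens_d(scores_a, scores_b):
    """Cohen's d (Eq. 4) for two equal-sized groups of run scores:
    mean difference over the pooled standard deviation."""
    assert len(scores_a) == len(scores_b) and len(scores_a) > 1
    diff = mean(scores_a) - mean(scores_b)
    # pooled s = sqrt((s1^2 + s2^2) / 2), so d = sqrt(2)*diff/sqrt(s1^2+s2^2)
    pooled = math.sqrt((variance(scores_a) + variance(scores_b)) / 2)
    return diff / pooled
```

Note that `statistics.variance` is the sample variance; with only five runs per model, as in the experiments, this distinction matters.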
{
"text": "We evaluate and compare three RC methods with our proposed measures on two datasets. We choose these methods, as RECENT (Lyu and Chen, 2021) and BERT EM (Baldini Soares et al., 2019) are based on vanilla fine-tuning of a pre-trained language model, with a classification head on top. PTR (Han et al., 2021) this way we can compare performance of the two paradigms for other weightings. RECENT proposes a model-agnostic paradigm that exploits entity types to narrow down the candidate relations. Given an entity-type combination, a separate classifier is trained on the restricted classes. Baldini Soares et al. (2019) compare various strategies that extract relation representation from Transformers and claim ENTITY START (i.e. insert entity markers at the start of two entity mentions) yields the best performance. PTR also takes entity types into consideration and constructs prompts composed of three subprompts, two corresponding to the fill-in of the entity types and one predicting the relation.",
"cite_spans": [
{
"start": 120,
"end": 140,
"text": "(Lyu and Chen, 2021)",
"ref_id": "BIBREF22"
},
{
"start": 153,
"end": 182,
"text": "(Baldini Soares et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 288,
"end": 306,
"text": "(Han et al., 2021)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "In our experiments we use RECENT GCN for RE-CENT, BERT EM with ENTITY START, and unreversed prompts for PTR. We use the official repositories for RECENT and PTR, we reimplement BERT EM 4 . We use the hyperparameters proposed in the original papers and conduct five runs for each model. Additional implementation and training details can be found in Appendices A and B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "The main focus is unearthing performance information about these methods that was previously obscured by single score measures. The number of weighting schemes does not influence the computational cost, as each score is determined through the predictions in a run and does not require specific tuning. 5 We acknowledge that each weighting scheme could be optimized for during training which gives additional importance to reporting multiple measures for each model. Table 3 shows results for TACRED. PTR significantly outperforms RECENT across all weighting schemes. The difference between the models is smallest for micro-F 1 and increases for all schemes that weigh classes more equally. For macro-F 1 the difference is starkest with effect size 24.2. Table 4 displays results for SemEval. BERT EM significantly outperforms PTR in the micro-F 1 measure and all other weightings except for macro-F 1 . All effect sizes are either large or huge, by far the largest effect size is between PTR and BERT EM regarding macro-F 1 though. The Sem-Eval test set contains a single sample of the Entity-Destination(e2,e1) class which is quite impactful for the macro-F 1 of the models but has negligible impact on all other weighting schemes. The scores from dodrans and entropy indicate that only if all classes are considered equally important the PTR model should be preferred. This indicates that either the PTR model learns almost regardless of class frequency or BERT EM has a class preference that is only discoverable with macro-F 1 .",
"cite_spans": [
{
"start": 302,
"end": 303,
"text": "5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 466,
"end": 473,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 754,
"end": 761,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We demonstrate that evaluation on micro-F 1 does not give adequate information about model performance on long-tail classes. In Tables 3 and 4 we see that the model which performs better under micro-F 1 can either be significantly better or worse for classes with few samples. The weighted-F 1 produces similar results to micro-F 1 except for RECENT. Macro-F 1 on the other hand is very sensitive to model performance on single samples, e.g. the Entity-Destination(e2,e1) class in SemEval.",
"cite_spans": [],
"ref_spans": [
{
"start": 128,
"end": 143,
"text": "Tables 3 and 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1"
},
{
"text": "The scores of our proposed schemes are in between the existing measures and might be the best indicators for robust generalization performance. For all experiments, they produce similar results to each other. This could just be a coincidence of the datasets, and is also indicated by Figure 1 . Overall, it might be fair to say that one of the former and latter measures is enough. It would mean one measure that does weigh proportional to sample count (micro-or weighted-F 1 ), an intermediary measure (dodrans-F 1 or entropy-F 1 ) and macro-F 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 284,
"end": 292,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1"
},
{
"text": "PTR performs better for macro-F 1 on both datasets. Its scores decrease less when classes are weighted more equally. This suggests that it is a better model for classes with low sample counts. Le Scao and Rush (2021) show that prompts can be worth hundreds of data points which would explain why the macro-and micro-F 1 scores are much closer together than for RECENT and BERT EM .",
"cite_spans": [
{
"start": 196,
"end": 216,
"text": "Scao and Rush (2021)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.1"
},
{
"text": "Chauhan et al. (2019) do a thorough evaluation of their model and notice the significantly different performance measured by micro and macro statistics due to the class imbalance, suggesting that the choice of evaluation measure is crucial. Huang and Wong (2020) further use the closeness between micro-and macro-F 1 scores to claim the stable performance of their model. Mille et al. (2021) point out that evaluating with a single score favors overfitting. They show different evaluation suites that can be created for a dataset. Bragg et al. (2021) address the disjoint evaluation settings across recent research threads in (few-shot) NLP and propose a unified evaluation benchmark which regulates dataset, sample size etc., but fail to take the evaluation measure into consideration, reporting only mean accuracy instead. Post (2018) criticises the inconsistency and under-specification in reporting scores. This problem is also prevalent in RC where the F 1 weighting scheme is often not specified. Zhang et al. (2020) show that bias from corpora persists for fine-tuned pre-trained language models. These models struggle with rare phenomena. For better performance debiasing with weighting is performed. S\u00f8gaard et al. (2021) argue against using random splits. They show that evaluating models with random splits is not a realistic setting but makes tasks easier by fixing the test data distribution to the train data distribution.",
"cite_spans": [
{
"start": 241,
"end": 262,
"text": "Huang and Wong (2020)",
"ref_id": "BIBREF15"
},
{
"start": 372,
"end": 391,
"text": "Mille et al. (2021)",
"ref_id": "BIBREF24"
},
{
"start": 531,
"end": 550,
"text": "Bragg et al. (2021)",
"ref_id": "BIBREF4"
},
{
"start": 825,
"end": 836,
"text": "Post (2018)",
"ref_id": "BIBREF28"
},
{
"start": 1003,
"end": 1022,
"text": "Zhang et al. (2020)",
"ref_id": "BIBREF42"
},
{
"start": 1209,
"end": 1230,
"text": "S\u00f8gaard et al. (2021)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "Long-tail evaluation is becoming more prominent in NLP research. Models in deep learning tend to show a gap in performance between frequent and infrequent phenomena (Rogers, 2021) . Models in NLP have been shown to perform badly on specific subsets of data (Zhang et al., 2020) . Sokolova and Lapalme (2009) analyze measures for multi-class classification and present invariances regarding the confusion matrix. G\u00f6sgens et al. (2021) also determine which class measures (including F 1 ) fulfil specific assumptions. Further evaluation can be based on this. Our weighting schemes for F 1 can be transferred to other measures that calculate a score for each class.",
"cite_spans": [
{
"start": 165,
"end": 179,
"text": "(Rogers, 2021)",
"ref_id": "BIBREF33"
},
{
"start": 257,
"end": 277,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF42"
},
{
"start": 280,
"end": 307,
"text": "Sokolova and Lapalme (2009)",
"ref_id": "BIBREF37"
},
{
"start": 412,
"end": 433,
"text": "G\u00f6sgens et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We suggest creating and using a bidimensional leaderboard like Kasai et al. (2021) where measures and models can be contributed. To this end, benchmarking of RC models could be done on a centralized site where a model or test set predictions are submitted and measures are calculated automatically through a script. For measures that modify weighting of classes and intra-class scoring, this does not require additional training computation.",
"cite_spans": [
{
"start": 63,
"end": 82,
"text": "Kasai et al. (2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Outlook",
"sec_num": "5"
},
{
"text": "Due to the reproducibility crisis (Baker, 2016) , not all state-of-the-art scores can be replicated. Possible future work includes a comprehensive evaluation study of papers on leaderboards of RC tasks. This would enable an in-depth discussion of strength and weaknesses (including reproducibil-ity) of these models.",
"cite_spans": [
{
"start": 34,
"end": 47,
"text": "(Baker, 2016)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Outlook",
"sec_num": "5"
},
{
"text": "The analysis we present can also be extended to other NLP tasks with imbalanced datasets, such as named entity recognition (Tjong Kim Sang and De Meulder, 2003) , part-of-speech tagging (Pradhan et al., 2013) and coreference resolution (Pradhan et al., 2012) .",
"cite_spans": [
{
"start": 123,
"end": 160,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF39"
},
{
"start": 186,
"end": 208,
"text": "(Pradhan et al., 2013)",
"ref_id": "BIBREF29"
},
{
"start": 236,
"end": 258,
"text": "(Pradhan et al., 2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Outlook",
"sec_num": "5"
},
{
"text": "We criticise the current practice of reporting a single score when evaluating imbalanced RC datasets. We propose a new framework to weight scores for multi-class evaluation of imbalanced datasets. We provide two new weighting schemes, dodrans and entropy, which are positioned between classweighted and macro. In our experiments, we show that model performance on both TACRED and SemEval, especially on the long-tail relations, is not adequately captured by a single score. Thus, we advocate the use of multiple weighing schemes when reporting model performance on imbalanced datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://nlp.stanford.edu/projects/ tacred/#stats",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Negative samples in RC means none of the dataset's relations hold. Depending on the dataset, this class is coined no-relation, NA or Other. We use negative class or NA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our reimplementation is available at https:// github.com/dfki-nlp/mtb-bert-em.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We provide a package to add these scores to a Scikit-learn(Pedregosa et al., 2011) classification report at https://github.com/DFKI-NLP/ weighting-schemes-report.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Nils Feldhus, Sebastian M\u00f6ller, Lisa Raithel, Robert Schwarzenberg and the anonymous reviewers for their feedback on the paper. This work was partially supported by the German Federal Ministry of Education and Research as part of the project CORA4NLP (01IW20010) and by the German Federal Ministry for Economic Affairs and Climate Action as part of the project PLASS (01MD19003E). Christoph Alt is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -EXC 2002/1 \"Science of Intelligence\" -project number 390523135.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "To evaluate RECENT and PTR, we use the official code at https://github.com//Saintfe/ RECENT (last updated on 01.10.2021) and https: //github.com/thunlp/PTR (last updated on 20.11.2021). Since the official code of BERT EM is not available, we implement this method using the HuggingFace Transformers library (Wolf et al., 2020) and PyTorch (Paszke et al., 2019) , and make our code base available at https://github. com/dfki-nlp/mtb-bert-em. To make our results reproducible, we randomly generated seeds {9, 148, 378, 459, 687} and employed these for all models in their 5 runs.",
"cite_spans": [
{
"start": 307,
"end": 326,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF40"
},
{
"start": 339,
"end": 360,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Implementation Details",
"sec_num": null
},
{
"text": "We consider GCN as the base model. Following the paper and the official code, we set the batch size to be 50, the optimizer to be SGD with learning rate 0.3, and the number of epochs to be 100. It takes a single RTX-A6000 GPU approximately 10 hours to complete all 5 runs on TACRED.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 RECENT",
"sec_num": null
},
{
"text": "We use the pre-trained language model (PLM) bert-large-uncased from the HuggingFace model hub and directly fine-tune the model for the RC task, without matching-the-blank pre-training. As the paper suggests, we set the batch size to be 64, the optimizer to be Adam with learning rate 3 \u2022 10 \u22125 , and the number of epochs to be 5. Additionally, we use the max sequence length of 512.It takes a single RTX-A6000 GPU 30 minutes to complete all 5 runs on SemEval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 BERT EM",
"sec_num": null
},
{
"text": "According to the paper and the official code base, we apply the same settings to evaluate both TACRED and SemEval: We use the PLM roberta-large and set the max sequence length to be 512, the batch size to be 64, the optimizer to be Adam with learning rate 3 \u2022 10 \u22125 , the weight decay to be 10 \u22122 , and the number of epochs to be 5. It takes 4 Quadro-P5000 GPUs 84 hours to complete 5 runs on TACRED, and it takes 8 Titan-V GPUs 9 hours on SemEval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.3 PTR",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fine-tuning pre-trained transformer language models to distantly supervised relation extraction",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Alt",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "H\u00fcbner",
"suffix": ""
},
{
"first": "Leonhard",
"middle": [],
"last": "Hennig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1388--1398",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1134"
]
},
"num": null,
"urls": [],
"raw_text": "Christoph Alt, Marc H\u00fcbner, and Leonhard Hennig. 2019. Fine-tuning pre-trained transformer language models to distantly supervised relation extraction. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1388- 1398, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Reproducibility crisis",
"authors": [
{
"first": "Monya",
"middle": [],
"last": "Baker",
"suffix": ""
}
],
"year": 2016,
"venue": "Nature",
"volume": "533",
"issue": "26",
"pages": "353--66",
"other_ids": {
"DOI": [
"10.1038/533452a"
]
},
"num": null,
"urls": [],
"raw_text": "Monya Baker. 2016. Reproducibility crisis. Nature, 533(26):353-66.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Matching the blanks: Distributional similarity for relation learning",
"authors": [
{
"first": "Livio",
"middle": [
"Baldini"
],
"last": "Soares",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "FitzGerald",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2895--2905",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1279"
]
},
"num": null,
"urls": [],
"raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learning. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2895- 2905, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Freebase: a collaboratively created graph database for structuring human knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM SIG-MOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {
"DOI": [
"10.1145/1376616.1376746"
]
},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collabo- ratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIG- MOD international conference on Management of data, pages 1247-1250.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Flex: Unifying evaluation for few-shot nlp",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Bragg",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
}
],
"year": 2021,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Bragg, Arman Cohan, Kyle Lo, and Iz Beltagy. 2021. Flex: Unifying evaluation for few-shot nlp. Advances in Neural Information Processing Systems, 34.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A survey of predictive modeling on imbalanced domains",
"authors": [
{
"first": "Paula",
"middle": [],
"last": "Branco",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Torgo",
"suffix": ""
},
{
"first": "Rita",
"middle": [
"P"
],
"last": "Ribeiro",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "49",
"issue": "2",
"pages": "1--50",
"other_ids": {
"DOI": [
"10.1145/2907070"
]
},
"num": null,
"urls": [],
"raw_text": "Paula Branco, Lu\u00eds Torgo, and Rita P. Ribeiro. 2016. A survey of predictive modeling on imbalanced do- mains. ACM Computing Surveys (CSUR), 49(2):1- 50.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning imbalanced datasets with label-distribution-aware margin loss",
"authors": [
{
"first": "Kaidi",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Adrien",
"middle": [],
"last": "Gaidon",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Arechiga",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "1567--1578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. 2019. Learning imbalanced datasets with label-distribution-aware margin loss. Advances in Neural Information Processing Systems, 32:1567- 1578.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "REflex: Flexible framework for relation extraction in multiple domains",
"authors": [
{
"first": "Geeticka",
"middle": [],
"last": "Chauhan",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"B",
"A"
],
"last": "McDermott",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 18th BioNLP Workshop and Shared Task",
"volume": "",
"issue": "",
"pages": "30--47",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5004"
]
},
"num": null,
"urls": [],
"raw_text": "Geeticka Chauhan, Matthew B.A. McDermott, and Pe- ter Szolovits. 2019. REflex: Flexible framework for relation extraction in multiple domains. In Proceed- ings of the 18th BioNLP Workshop and Shared Task, pages 30-47, Florence, Italy. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Apples-toapples in cross-validation studies: pitfalls in classifier performance measurement",
"authors": [
{
"first": "George",
"middle": [],
"last": "Forman",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Scholz",
"suffix": ""
}
],
"year": 2010,
"venue": "Acm Sigkdd Explorations Newsletter",
"volume": "12",
"issue": "1",
"pages": "49--57",
"other_ids": {
"DOI": [
"10.1145/1882471.1882479"
]
},
"num": null,
"urls": [],
"raw_text": "George Forman and Martin Scholz. 2010. Apples-to- apples in cross-validation studies: pitfalls in classi- fier performance measurement. Acm Sigkdd Explo- rations Newsletter, 12(1):49-57.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Aleksey Tikhonov, and Liudmila Prokhorenkova. 2021. Good classification measures and how to find them",
"authors": [
{
"first": "Martijn",
"middle": [],
"last": "G\u00f6sgens",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Zhiyanov",
"suffix": ""
}
],
"year": null,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martijn G\u00f6sgens, Anton Zhiyanov, Aleksey Tikhonov, and Liudmila Prokhorenkova. 2021. Good classifi- cation measures and how to find them. Advances in Neural Information Processing Systems, 34.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Ptr: Prompt tuning with rules for text classification. CoRR",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Weilin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2105.11259"
]
},
"num": null,
"urls": [],
"raw_text": "Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, and Maosong Sun. 2021. Ptr: Prompt tuning with rules for text classification. CoRR, arXiv:2105.11259.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ziyun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4803--4809",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1514"
]
},
"num": null,
"urls": [],
"raw_text": "Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4803-4809, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SemEval-2010 task 8: Multiway classification of semantic relations between pairs of nominals",
"authors": [
{
"first": "Iris",
"middle": [],
"last": "Hendrickx",
"suffix": ""
},
{
"first": "Su",
"middle": [
"Nam"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Zornitsa",
"middle": [],
"last": "Kozareva",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Pennacchiotti",
"suffix": ""
},
{
"first": "Lorenza",
"middle": [],
"last": "Romano",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Szpakowicz",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "33--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid \u00d3 S\u00e9aghdha, Sebastian Pad\u00f3, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. SemEval-2010 task 8: Multi- way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33-38. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541-550, Portland, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep embedding for relation extraction on insufficient labelled data",
"authors": [
{
"first": "Haojie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2020,
"venue": "2020 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {
"DOI": [
"10.1109/IJCNN48605.2020.9207554"
]
},
"num": null,
"urls": [],
"raw_text": "Haojie Huang and Raymond Wong. 2020. Deep embed- ding for relation extraction on insufficient labelled data. In 2020 International Joint Conference on Neu- ral Networks (IJCNN), pages 1-8.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bidimensional leaderboards: Generate and evaluate language hand in hand",
"authors": [
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Keisuke",
"middle": [],
"last": "Sakaguchi",
"suffix": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Le Bras",
"suffix": ""
},
{
"first": "Lavinia",
"middle": [],
"last": "Dunagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Morrison",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"R"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/arXiv.2112.04139"
],
"arXiv": [
"arXiv:2112.04139"
]
},
"num": null,
"urls": [],
"raw_text": "Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander R Fab- bri, Yejin Choi, and Noah A Smith. 2021. Bidimen- sional leaderboards: Generate and evaluate language hand in hand. CoRR, arXiv:2112.04139.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Chemprot-3.0: A global chemical biology diseases mapping",
"authors": [
{
"first": "Jens",
"middle": [],
"last": "Kringelum",
"suffix": ""
},
{
"first": "Sonny",
"middle": [],
"last": "Kjaerulff",
"suffix": ""
},
{
"first": "S\u00f8ren",
"middle": [],
"last": "Brunak",
"suffix": ""
},
{
"first": "Ole",
"middle": [],
"last": "Lund",
"suffix": ""
},
{
"first": "Tudor",
"middle": [],
"last": "Oprea",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Taboureau",
"suffix": ""
}
],
"year": 2016,
"venue": "Database",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/database/bav123"
]
},
"num": null,
"urls": [],
"raw_text": "Jens Kringelum, Sonny Kjaerulff, S\u00f8ren Brunak, Ole Lund, Tudor Oprea, and Olivier Taboureau. 2016. Chemprot-3.0: A global chemical biology diseases mapping. Database, 2016:bav123.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "How many data points is a prompt worth?",
"authors": [
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "2627--2636",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.208"
]
},
"num": null,
"urls": [],
"raw_text": "Teven Le Scao and Alexander Rush. 2021. How many data points is a prompt worth? In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 2627-2636, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A sequential algorithm for training text classifiers",
"authors": [
{
"first": "David",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "William",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "3--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David D. Lewis and William A. Gale. 1994. A se- quential algorithm for training text classifiers. In Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pages 3-12.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dice loss for dataimbalanced NLP tasks",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yuxian",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Junjun",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.45"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Xiaofei Sun, Yuxian Meng, Junjun Liang, Fei Wu, and Jiwei Li. 2020. Dice loss for data- imbalanced NLP tasks. In Proceedings of the 58th",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "465--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 465-476, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Relation classification with entity type restriction",
"authors": [
{
"first": "Shengfei",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Huanhuan",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "390--395",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.34"
]
},
"num": null,
"urls": [],
"raw_text": "Shengfei Lyu and Huanhuan Chen. 2021. Relation clas- sification with entity type restriction. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 390-395, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "On model stability as a function of random seed",
"authors": [
{
"first": "Pranava",
"middle": [],
"last": "Madhyastha",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "929--939",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1087"
]
},
"num": null,
"urls": [],
"raw_text": "Pranava Madhyastha and Rishabh Jain. 2019. On model stability as a function of random seed. In Proceed- ings of the 23rd Conference on Computational Nat- ural Language Learning (CoNLL), pages 929-939, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatic construction of evaluation suites for natural language generation datasets",
"authors": [
{
"first": "Simon",
"middle": [],
"last": "Mille",
"suffix": ""
},
{
"first": "Kaustubh",
"middle": [],
"last": "Dhole",
"suffix": ""
},
{
"first": "Saad",
"middle": [],
"last": "Mahamood",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Perez-Beltrachini",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Gangal",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Emiel Van Miltenburg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gehrmann",
"suffix": ""
}
],
"year": 2021,
"venue": "Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simon Mille, Kaustubh Dhole, Saad Mahamood, Laura Perez-Beltrachini, Varun Gangal, Mihir Kale, Emiel van Miltenburg, and Sebastian Gehrmann. 2021. Au- tomatic construction of evaluation suites for natural language generation datasets. In Thirty-fifth Con- ference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Macro f1 and macro f1. CoRR",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Opitz",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Burst",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/arXiv.1911.03347"
],
"arXiv": [
"arXiv:1911.03347"
]
},
"num": null,
"urls": [],
"raw_text": "Juri Opitz and Sebastian Burst. 2019. Macro f1 and macro f1. CoRR, arXiv:1911.03347.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelz- imer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "85",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vin- cent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Per- rot, and \u00c9douard Duchesnay. 2011. Scikit-learn: Ma- chine learning in python. Journal of Machine Learn- ing Research, 12(85):2825-2830.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Towards robust linguistic analysis using OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using OntoNotes. In Proceed- ings of the Seventeenth Conference on Computational Natural Language Learning, pages 143-152, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL -Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases",
"volume": "",
"issue": "",
"pages": "148--163",
"other_ids": {
"DOI": [
"10.1007/978-3-642-15939-8_10"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148-163. Springer.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Changing the world by changing the data",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "2182--2194",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.170"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers. 2021. Changing the world by changing the data. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 2182-2194, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "New effect size rules of thumb",
"authors": [
{
"first": "Shlomo",
"middle": [
"S"
],
"last": "Sawilowsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal of modern applied statistical methods",
"volume": "8",
"issue": "2",
"pages": "",
"other_ids": {
"DOI": [
"10.22237/jmasm/1257035100"
]
},
"num": null,
"urls": [],
"raw_text": "Shlomo S. Sawilowsky. 2009. New effect size rules of thumb. Journal of modern applied statistical meth- ods, 8(2):26.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Mathematical aspects of degressive proportionality",
"authors": [
{
"first": "Wojciech",
"middle": [],
"last": "S\u0142omczy\u0144ski",
"suffix": ""
},
{
"first": "Karol",
"middle": [],
"last": "\u017byczkowski",
"suffix": ""
}
],
"year": 2012,
"venue": "Mathematical Social Sciences",
"volume": "63",
"issue": "2",
"pages": "94--101",
"other_ids": {
"DOI": [
"10.1016/j.mathsocsci.2011.12.002"
]
},
"num": null,
"urls": [],
"raw_text": "Wojciech S\u0142omczy\u0144ski and Karol\u017byczkowski. 2012. Mathematical aspects of degressive proportionality. Mathematical Social Sciences, 63(2):94-101.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "We need to talk about random splits",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ebert",
"suffix": ""
},
{
"first": "Jasmijn",
"middle": [],
"last": "Bastings",
"suffix": ""
},
{
"first": "Katja",
"middle": [],
"last": "Filippova",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "1823--1832",
"other_ids": {
"DOI": [
"10.18653/v1/2021.eacl-main.156"
]
},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Sebastian Ebert, Jasmijn Bastings, and Katja Filippova. 2021. We need to talk about random splits. In Proceedings of the 16th Conference of the European Chapter of the Association for Computa- tional Linguistics: Main Volume, pages 1823-1832.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A systematic analysis of performance measures for classification tasks. Information processing & management",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Sokolova",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "45",
"issue": "",
"pages": "427--437",
"other_ids": {
"DOI": [
"10.1016/j.ipm.2009.03.002"
]
},
"num": null,
"urls": [],
"raw_text": "Marina Sokolova and Guy Lapalme. 2009. A system- atic analysis of performance measures for classifica- tion tasks. Information processing & management, 45(4):427-437.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Classification of imbalanced data: a review",
"authors": [
{
"first": "Yanmin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"K",
"C"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [
"S"
],
"last": "Kamel",
"suffix": ""
}
],
"year": 2009,
"venue": "International Journal of Pattern Recognition and Artificial Intelligence",
"volume": "23",
"issue": "04",
"pages": "687--719",
"other_ids": {
"DOI": [
"10.1142/S0218001409007326"
]
},
"num": null,
"urls": [],
"raw_text": "Yanmin Sun, Andrew K. C. Wong, and Mohamed S. Kamel. 2009. Classification of imbalanced data: a review. International Journal of Pattern Recognition and Artificial Intelligence, 23(04):687-719.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142- 147.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "DocRED: A large-scale document-level relation extraction dataset",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Deming",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lixin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "764--777",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1074"
]
},
"num": null,
"urls": [],
"raw_text": "Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics, pages 764-777, Florence, Italy. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Demographics should not be the reason of toxicity: Mitigating discrimination in text classifications with instance weighting",
"authors": [
{
"first": "Guanhua",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Junqi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Conghui",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4134--4145",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.380"
]
},
"num": null,
"urls": [],
"raw_text": "Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Con- ghui Zhu, and Tiejun Zhao. 2020. Demographics should not be the reason of toxicity: Mitigating discrimination in text classifications with instance weighting. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 4134-4145, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Position-aware attention and supervised data improve slot filling",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "35--45",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1004"
]
},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35- 45. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "The curse of performance instability in analysis datasets: Consequences, source, and suggestions",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8215--8228",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.659"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Zhou, Yixin Nie, Hao Tan, and Mohit Bansal. 2020. The curse of performance instability in analy- sis datasets: Consequences, source, and suggestions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8215-8228, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "NLPStatTest: A toolkit for comparing NLP system performance",
"authors": [
{
"first": "Haotian",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Denise",
"middle": [],
"last": "Mak",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Gioannini",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "40--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haotian Zhu, Denise Mak, Jesse Gioannini, and Fei Xia. 2020. NLPStatTest: A toolkit for comparing NLP system performance. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Asso- ciation for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations, pages 40-46, Suzhou, China. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Statistics for popular RC datasets. The number of relations, samples and percent of negative samples are for the whole dataset.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF3": {
"text": "is based on prompt-tuning. RECENT and PTR report similar micro-F 1 performance on TACRED, as do BERT EM and PTR on SemEval. In",
"type_str": "table",
"content": "<table><tr><td>Method</td><td>Micro</td><td>Weighted</td><td>Dodrans</td><td>Entropy</td><td>Macro</td></tr><tr><td>RECENT</td><td>71.5\u00b10.4</td><td>67.8\u00b10.4</td><td>62.5\u00b10.4</td><td>63.6\u00b10.4</td><td>43.1\u00b10.6</td></tr><tr><td>PTR</td><td>72.5\u00b10.3</td><td>72.1\u00b10.5</td><td>69.8\u00b10.5</td><td>70.3\u00b10.5</td><td>60.3\u00b10.8</td></tr><tr><td>p-value Cohen's d</td><td>3 \u2022 10 \u22123 2.8</td><td>3 \u2022 10 \u22126 8.7</td><td>10 \u22128 14.8</td><td>2 \u2022 10 \u22128 13.5</td><td>2 \u2022 10 \u221210 24.2</td></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"text": "TACRED F 1 -scores with different weighting schemes. Positive scores indicate PTR performs better than RECENT for all weighting schemes. The difference is smallest for the micro and largest for the macro weighting scheme. All p-values are smaller than \u03b1 = 0.05. All effect sizes are huge (> 2.0) underSawilowsky (2009)'s rules of thumb.",
"type_str": "table",
"content": "<table><tr><td>Method</td><td>Micro</td><td>Weighted</td><td>Dodrans</td><td>Entropy</td><td>Macro</td></tr><tr><td>BERT EM</td><td>89.1\u00b10.3</td><td>89.1\u00b10.3</td><td>88.7\u00b10.3</td><td>88.6\u00b10.3</td><td>82.7\u00b10.4</td></tr><tr><td>PTR</td><td>88.4\u00b10.3</td><td>88.3\u00b10.3</td><td>88.1\u00b10.3</td><td>88.0\u00b10.3</td><td>87.8\u00b10.5</td></tr><tr><td>p-value Cohen's d</td><td>0.005 -2.5</td><td>0.006 -2.4</td><td>0.023 -1.8</td><td>0.023 -1.8</td><td>7 \u2022 10 \u22128 11.5</td></tr></table>",
"num": null,
"html": null
},
"TABREF5": {
"text": "SemEval F 1 -scores with different weighting schemes. The directionality is of the relations is considered, s.t. there are 19 classes, the negative class is not included in evaluation. Negative scores indicate BERT EM performs better, positive scores indicate PTR performs better. All p-values are smaller than \u03b1 = 0.05. All absolute effect sizes are very large (> 1.2) or huge (> 2.0).",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}