{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:58:31.024727Z"
},
"title": "A Metric Learning Approach to Misogyny Categorization",
"authors": [
{
"first": "Juan",
"middle": [
"M"
],
"last": "Coria",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "LIMSI"
}
},
"email": "[email protected]"
},
{
"first": "Sahar",
"middle": [],
"last": "Ghannay",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "LIMSI"
}
},
"email": "[email protected]"
},
{
"first": "Sophie",
"middle": [],
"last": "Rosset",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "LIMSI"
}
},
"email": "[email protected]"
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "Bredin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "LIMSI"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The task of automatic misogyny identification and categorization has not received as much attention as other natural language tasks have, even though it is crucial for identifying hate speech in social Internet interactions. In this work, we address this sentence classification task from a representation learning perspective, using both a bidirectional LSTM and BERT optimized with the following metric learning loss functions: contrastive loss, triplet loss, center loss, congenerous cosine loss and additive angular margin loss. We set new state-of-the-art for the task with our finetuned BERT, whose sentence embeddings can be compared with a simple cosine distance, and we release all our code as open source for easy reproducibility. Moreover, we find that almost every loss function performs equally well in this setting, matching the regular cross entropy loss.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The task of automatic misogyny identification and categorization has not received as much attention as other natural language tasks have, even though it is crucial for identifying hate speech in social Internet interactions. In this work, we address this sentence classification task from a representation learning perspective, using both a bidirectional LSTM and BERT optimized with the following metric learning loss functions: contrastive loss, triplet loss, center loss, congenerous cosine loss and additive angular margin loss. We set new state-of-the-art for the task with our finetuned BERT, whose sentence embeddings can be compared with a simple cosine distance, and we release all our code as open source for easy reproducibility. Moreover, we find that almost every loss function performs equally well in this setting, matching the regular cross entropy loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Whether it is at the word or at the sentence level, learning robust representations allows neural networks to consolidate knowledge that can later be transferred to other tasks and domains. Many approaches have dealt with this problem in different ways, for instance with CBOW or skip-gram from word2vec (Mikolov et al., 2013) for contextindependent word embeddings, or more recently with BERT's (Devlin et al., 2019) sentence embeddings and contextual word embeddings.",
"cite_spans": [
{
"start": 304,
"end": 326,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 396,
"end": 417,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to learn sentence representations, a neural encoder enc needs to learn a mapping from an initial representation x i to a target vector space. In a metric learning approach, the distances between each pair of sentence embeddings (enc(x i ), enc(x j )) should be low if classes y i = y j (intra-class compactness) and high if y i = y j (interclass separability). To achieve this objective, the angle \u03b8 ij separating a pair of embeddings (as depicted in Figure 1 ) can be used to redefine the model's loss function.",
"cite_spans": [],
"ref_spans": [
{
"start": 460,
"end": 468,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
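A minimal PyTorch sketch of the cosine-based comparison described above; it is illustrative only (not taken from the paper's released code), and the random 768-dimensional vectors merely stand in for encoder outputs enc(x_i) and enc(x_j).

```python
import torch
import torch.nn.functional as F

def cosine_distance(e_i: torch.Tensor, e_j: torch.Tensor) -> torch.Tensor:
    """D = 1 - cos(theta_ij) between two embedding vectors."""
    return 1.0 - F.cosine_similarity(e_i, e_j, dim=-1)

# Stand-ins for enc(x_i) and enc(x_j); after metric learning, same-class pairs
# should end up with a small distance and different-class pairs with a large one.
e_i, e_j = torch.randn(768), torch.randn(768)
print(cosine_distance(e_i, e_j))
```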
{
"text": "In the domain of face recognition, many loss functions (Schroff et al., 2015; Wen et al., 2016; Liu et al., 2017; Wang et al., 2018; Deng et al., 2019) have been proposed to learn better face representations, motivated by high intra-class variability due to lighting, position or background. Other studies have experimented with these methods in different domains with similar characteristics, like speaker verification (Bredin, 2017; Chung et al., 2018; Yadav and Rai, 2018) , and even as an enhancement of BERT's sentence representations (Reimers and Gurevych, 2019) for semantic textual similarity. A recent study (Srivastava et al., 2019) has also focused on comparing these methods on face verification, showing that angular margin losses achieve superior performance.",
"cite_spans": [
{
"start": 55,
"end": 77,
"text": "(Schroff et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 78,
"end": 95,
"text": "Wen et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 96,
"end": 113,
"text": "Liu et al., 2017;",
"ref_id": "BIBREF12"
},
{
"start": 114,
"end": 132,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF16"
},
{
"start": 133,
"end": 151,
"text": "Deng et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 420,
"end": 434,
"text": "(Bredin, 2017;",
"ref_id": "BIBREF2"
},
{
"start": 435,
"end": 454,
"text": "Chung et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 455,
"end": 475,
"text": "Yadav and Rai, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 617,
"end": 642,
"text": "(Srivastava et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, the automatic misogyny identification (AMI) evaluation campaign (Fersini et al., 2018a) was proposed to address misogyny on tweets. Included tasks were identification (i.e. misogynous or not), categorization over five different misogyny types, and target identification (to an individual or a group). However, no participant has proposed a metric learning model. The best system (Ahluwalia et al., 2018 ) uses a bidirectional LSTM with word embeddings of size 100 for the identification task, and ensemble methods with feature engineering for category and target classification. They achieve a macro F1 score of 36.1 on the misogyny categorization part of sub-task B, which is the one we address as well. A different architecture (Caselli et al., 2018 ) uses a multi-layer character bidirectional LSTM for categorization, obtaining a macro F1 score of 14.1.",
"cite_spans": [
{
"start": 83,
"end": 106,
"text": "(Fersini et al., 2018a)",
"ref_id": "BIBREF7"
},
{
"start": 398,
"end": 421,
"text": "(Ahluwalia et al., 2018",
"ref_id": "BIBREF0"
},
{
"start": 749,
"end": 770,
"text": "(Caselli et al., 2018",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we focus on five metric learning losses for the task of misogyny categorization, using the AMI (Fersini et al., 2018a) dataset. Our hypothesis was that metric learning might reduce the natural intra-class variability within misogyny categories, making representations robust to writing styles, irony, insults, etc. The loss functions we experiment with are contrastive loss (Hadsell et al., 2006) , triplet loss (Schroff et al., 2015) , center loss (Wen et al., 2016) , congenerous cosine loss (Liu et al., 2017) and additive angular margin loss (Deng et al., 2019) , as well as cross entropy loss. We optimize these loss functions with two different architectures: a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) and BERT (Devlin et al., 2019) , and we evaluate their performance using a simple K-nearest neighbors (KNN) classifier to better measure representation quality.",
"cite_spans": [
{
"start": 110,
"end": 133,
"text": "(Fersini et al., 2018a)",
"ref_id": "BIBREF7"
},
{
"start": 389,
"end": 411,
"text": "(Hadsell et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 427,
"end": 449,
"text": "(Schroff et al., 2015)",
"ref_id": "BIBREF14"
},
{
"start": 464,
"end": 482,
"text": "(Wen et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 509,
"end": 527,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 561,
"end": 580,
"text": "(Deng et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 702,
"end": 736,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
},
{
"start": 746,
"end": 767,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions consist of new state-ofthe-art performance for the misogyny categorization task, as well as empirical evidence that these methods do not perform better than cross entropy loss on closed-set sentence classification. Moreover, our code is released as open source for easy reproducibility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we present the loss functions chosen for our study, which can be separated into contrastbased and classification-based, according to how they are computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Loss Functions",
"sec_num": "2"
},
{
"text": "The contrastive loss (Hadsell et al., 2006) uses pairs annotated as similar/dissimilar (also called positive/negative). It brings representations from similar examples closer together, while separating dissimilar ones explicitly:",
"cite_spans": [
{
"start": 21,
"end": 43,
"text": "(Hadsell et al., 2006)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contrast-based losses",
"sec_num": "2.1"
},
{
"text": "L = P + i=1 (D i ) 2 + P \u2212 i=1 max(m \u2212 D i , 0) 2 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contrast-based losses",
"sec_num": "2.1"
},
{
"text": "where P + is the number of similar pairs, P \u2212 the number of dissimilar pairs, D i = 1 \u2212 cos \u03b8 i the distance between embeddings of the ith pair, and m a margin. The triplet loss (Schroff et al., 2015) is calculated over triplets composed of a reference example known as the anchor, a positive and a negative, both the latter with respect to the anchor. Following the idea introduced by Gelly and Gauvain (2017), we define this loss using the sigmoid function:",
"cite_spans": [
{
"start": 178,
"end": 200,
"text": "(Schroff et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contrast-based losses",
"sec_num": "2.1"
},
{
"text": "L = T i=0 sigmoid(\u03b1 (cos \u03b8 n i \u2212 cos \u03b8 p i ))) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contrast-based losses",
"sec_num": "2.1"
},
{
"text": "where T is the number of triplets, \u03b1 a scaling hyperparameter, \u03b8 p i the angle separating the anchor and the positive embeddings, and \u03b8 n i the angle separating the anchor and the negative ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contrast-based losses",
"sec_num": "2.1"
},
{
"text": "Taking Figure 1 as an example, contrast-based losses encourage the cosine distance between embeddings i and j to be larger if y i = y j , and smaller if y i = y j . This is achieved a single pair at a time with contrastive loss, while triplet loss does it jointly using both the positive and negative inside the triplet.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 15,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Contrast-based losses",
"sec_num": "2.1"
},
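The two contrast-based objectives (equations (1) and (2)) can be written compactly with the cosine distance D = 1 - cos \u03b8 used throughout the paper. The sketch below is a simplified PyTorch rendering, not the authors' released implementation; pair and triplet mining is omitted, and the default margin and scaling values simply echo the best configurations reported in Table 4.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_class, margin=0.25):
    """Eq. (1): pull similar pairs together, push dissimilar pairs beyond the margin."""
    d = 1.0 - F.cosine_similarity(emb_a, emb_b, dim=-1)           # D_i = 1 - cos(theta_i)
    pos = same_class.float() * d.pow(2)                           # similar pairs: D^2
    neg = (1.0 - same_class.float()) * F.relu(margin - d).pow(2)  # dissimilar: max(m - D, 0)^2
    return (pos + neg).sum()

def triplet_loss(anchor, positive, negative, alpha=1000.0):
    """Eq. (2): sigmoid of the scaled gap between negative and positive similarities."""
    cos_p = F.cosine_similarity(anchor, positive, dim=-1)         # cos(theta_{p_i})
    cos_n = F.cosine_similarity(anchor, negative, dim=-1)         # cos(theta_{n_i})
    return torch.sigmoid(alpha * (cos_n - cos_p)).sum()
```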
{
"text": "These loss functions derive from the cross entropy loss, either by modifying how the classification layer output is calculated or working as a penalization term. The cross entropy loss is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L CE = \u2212 1 N N i=1 log softmax(\u03c3 i , y i )",
"eq_num": "(3)"
}
],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "where N is the number of training examples, \u03c3 i the output of the classification layer, and y i the class of the ith example. The congenerous cosine (CoCo) loss (Liu et al., 2017) interprets the weights w k of the classification layer as class centroids, learning to maximize the cosine similarity between a representation and its centroid. The classification layer output \u03c3 i is redefined as:",
"cite_spans": [
{
"start": 161,
"end": 179,
"text": "(Liu et al., 2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2200k \u03c3 ik = \u03b1 \u2022 cos \u03b8 iw k",
"eq_num": "(4)"
}
],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "where \u03b8 iw k is the angle separating the ith representation and w k , and \u03b1 a scaling hyper-parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "The additive angular margin (AAM) loss (Deng et al., 2019) goes one step further adding a margin in angular space to penalize the distance between a representation and its centroid:",
"cite_spans": [
{
"start": 39,
"end": 58,
"text": "(Deng et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "\u2200k \u03c3 ik = \u03b1 \u2022 cos(\u03b8 iw k + \u03b4 ik m) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "where m is a margin, and \u03b4 ik = 1 if k = y i and 0 otherwise. Finally, the center loss (Wen et al., 2016) penalizes the cross entropy loss with the distance to jointly learned centroids c k external to the classification layer:",
"cite_spans": [
{
"start": 87,
"end": 105,
"text": "(Wen et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "L = L CE + \u03bb 2 N i=1 (1 \u2212 cos \u03b8 icy i ) 2 (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "where \u03bb is a hyper-parameter controlling the effect of penalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
{
"text": "To see the effect of classification-based losses more intuitively, consider embeddings and centers in Figure 1 . If y i = k, then both congenerous cosine loss and center loss will penalize the loss value with the distance from embedding i to w k (or c k in the case of center loss), hence bringing all vectors from class k close to the centroid k. The additive angular margin loss follows the same principle, but penalizing further by artificially augmenting the distance of embedding i to w k with the angular margin.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Classification-based losses",
"sec_num": "2.2"
},
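A PyTorch-style sketch of the classification-based variants described in this subsection; it is a schematic reading of equations (3) to (6) rather than the released code, and details such as initialization and the joint update of the external centers are simplified.

```python
import torch
import torch.nn.functional as F
from torch import nn

class CosineClassifier(nn.Module):
    """sigma_ik = alpha * cos(theta_{i,w_k} + delta_ik * m): m = 0 gives CoCo, m > 0 gives AAM."""
    def __init__(self, dim, n_classes, alpha=100.0, margin=0.0):
        super().__init__()
        self.w = nn.Parameter(torch.randn(n_classes, dim))      # class centroids w_k
        self.alpha, self.margin = alpha, margin

    def forward(self, emb, labels):
        cos = F.normalize(emb) @ F.normalize(self.w).t()        # cos(theta_{i,w_k})
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        theta = theta + self.margin * F.one_hot(labels, self.w.size(0)).float()
        logits = self.alpha * torch.cos(theta)
        return F.cross_entropy(logits, labels)                  # eq. (3) over the new logits

def center_penalty(emb, centers, labels, lam=0.1):
    """Eq. (6) penalty, added to L_CE; `centers` are jointly learned c_k (e.g. an nn.Parameter)."""
    cos = F.cosine_similarity(emb, centers[labels], dim=-1)
    return 0.5 * lam * (1.0 - cos).pow(2).sum()
```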
{
"text": "The term misogyny is defined as hatred towards women. Hate speech of this nature is unfortunately common in social Internet interactions, and current language models are generally unable to accurately detect and classify it. The AMI task and corpus were proposed in the context of the IberEval 2018 (Fersini et al., 2018b) and Evalita 2018 (Fersini et al., 2018a) evaluation campaigns, allowing researchers to train models focused specifically on misogyny. The corpus consists of an ensemble of tweets with three different types of annotations: misogyny (binary), misogyny category and target (active or passive).",
"cite_spans": [
{
"start": 299,
"end": 322,
"text": "(Fersini et al., 2018b)",
"ref_id": "BIBREF8"
},
{
"start": 340,
"end": 363,
"text": "(Fersini et al., 2018a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task",
"sec_num": "3"
},
{
"text": "We use the same dataset as in Fersini et al. (2018a) and we focus exclusively on misogyny categorization, using an additional class for non misogynous tweets. Our results are thus compared to the categorization part of sub-task B. An explanation of misogyny categories according to the definitions given in Fersini et al. (2018a) can be found in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 346,
"end": 353,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task",
"sec_num": "3"
},
{
"text": "Train Dev Test derailing 74 18 11 discredit 811 203 141 dominance 118 30 124 sexual harassment 282 70 44 stereotype 143 36 140 non misogynous 1,772 443 540 total 3,200 800 1,000 As the corpus does not provide a development set, one was constructed from the training set following the same class distribution. The final Train set is composed of 3200 tweets, and the Dev and Test sets of 800 and 1000 tweets respectively. Class distribution is described in detail in Table 1 . The task is evaluated using the macro F1 score.",
"cite_spans": [],
"ref_spans": [
{
"start": 6,
"end": 185,
"text": "Dev Test derailing 74 18 11 discredit 811 203 141 dominance 118 30 124 sexual harassment 282 70 44 stereotype 143 36 140 non misogynous 1,772 443 540 total",
"ref_id": "TABREF0"
},
{
"start": 489,
"end": 496,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Class",
"sec_num": null
},
{
"text": "As different losses rely on different hyperparameters, we perform a hyper-parameter search including learning rates, margins m, scalings \u03b1, and \u03bb. The values we have experimented with are shown in Table 3 . Each configuration is trained on Train for 60 epochs and validated using a KNN classifier on Dev. As we deal with a rather small dataset, the best configuration for each loss and each architecture is then trained and validated from scratch 10 times to reduce the effect of randomness. Reported results are the mean macro F1 score and standard deviation on Test over these 10 runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 3",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental protocol",
"sec_num": "4.1"
},
{
"text": "In all experiments we use the cosine distance to compare embeddings, as congenerous cosine loss and additive angular margin loss can only be optimized in this way. Additionally, a linear classification layer is jointly trained with the sentence encoder when optimizing classification-based loss functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental protocol",
"sec_num": "4.1"
},
{
"text": "We experiment with two different encoder architectures. The first one is a one-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997) with output size 768 (to match BERT) and word embeddings of size 300 obtained from a word2vec CBOW model (Mikolov et al., 2013) trained on 2billion-word Wikipedia dumps. The second one is",
"cite_spans": [
{
"start": 104,
"end": 138,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF11"
},
{
"start": 244,
"end": 266,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4.2"
},
{
"text": "Description Example derailing \"to justify women abuse, \"if rape is real why aren't more people rejecting male responsibility\" reporting it? just another feminist lie\" discredit \"slurring over women with \"this b*** is a s***\" no other larger intention\" dominance \"to assert the superiority of men \"#didyouknow the male brain is 3.4 times larger over women to highlight gender inequality\" than the female brain? #maledominance\" sexual \"sexual advances, harassment of \"come on box I show you my c*** darling\" harassment a sexual nature, etc.\" stereotype \"a widely held but fixed and \"these people are hysterical. it's like a commercial oversimplified image or idea of a woman\" for why men should never marry [. . . ]\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "Table 2: Misogyny categories as described by the corpus authors (Fersini et al., 2018a) along with examples found in the training set.",
"cite_spans": [
{
"start": 64,
"end": 87,
"text": "(Fersini et al., 2018a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "Parameter Values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "LR {10 \u22122 , 10 \u22123 , \u22124 , 10 \u22125 , 10 \u22126 } \u2022 {10 \u22124 , 10 \u22125 , 10 \u22126 , 10 \u22127 } \u2022 m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "{0.02, 0.05, 0.25, 0.5, 0.75} \u03b1 and \u03bb {0.01, 0.1, 1, 10, 100, 1000} the standard monolingual uncased BERT (Devlin et al., 2019) from the huggingface library (Wolf et al., 2019) pretrained on Wikipedia.",
"cite_spans": [
{
"start": 106,
"end": 127,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 157,
"end": 176,
"text": "(Wolf et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
{
"text": "To obtain a sentence embedding from an encoder, we perform a max pooling over the hidden states of the last layer, leaving us with sentence embeddings of size 768 on both models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Category",
"sec_num": null
},
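The sketch below shows one way to realize this pooling in PyTorch for the BiLSTM encoder; the per-direction hidden size of 384 is an assumption chosen so that the concatenated output matches BERT's 768 dimensions, and the same `max_pool` can be applied to the `last_hidden_state` returned by a huggingface BERT model.

```python
import torch
from torch import nn

def max_pool(hidden, mask):
    """Max pooling over last-layer hidden states; padding positions are masked out."""
    hidden = hidden.masked_fill(~mask.bool().unsqueeze(-1), float("-inf"))
    return hidden.max(dim=1).values                            # (batch, 768)

class BiLSTMEncoder(nn.Module):
    """One-layer bidirectional LSTM over 300-d word embeddings (word2vec CBOW in the paper)."""
    def __init__(self, vocab_size, emb_dim=300, out_dim=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)         # load pretrained vectors in practice
        self.lstm = nn.LSTM(emb_dim, out_dim // 2, batch_first=True, bidirectional=True)

    def forward(self, token_ids, mask):
        hidden, _ = self.lstm(self.embed(token_ids))           # (batch, seq_len, 768)
        return max_pool(hidden, mask)                          # sentence embedding
```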
{
"text": "All sentences are pre-tokenized using the TweetTokenizer from the NLTK toolkit (Bird et al., 2009) in order to correctly deal with Twitterspecific tokens like hashtags, mentions, and even emojis. During this process we remove handles and URLs. When training BERT, we do a second pass of tokenization with BERT's pretrained tokenizer. We use a batch size of 32 sentences and RMSprop as optimizer, reducing the learning rate by half every 5 epochs of no improvement. The best configurations found during hyper-parameter search for each architecture and loss function are shown in Table 4 .",
"cite_spans": [
{
"start": 79,
"end": 98,
"text": "(Bird et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 578,
"end": 585,
"text": "Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.3"
},
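A sketch of the preprocessing and optimization setup described above, under stated assumptions: `TweetTokenizer(strip_handles=True)` and a URL regex implement the handle/URL removal, and `ReduceLROnPlateau` is one possible way to halve the learning rate after 5 epochs without improvement (the paper does not name a specific scheduler). The `nn.Linear` stand-in only makes the snippet runnable; in practice the optimizer wraps the LSTM or BERT encoder.

```python
import re
import torch
from nltk.tokenize import TweetTokenizer

# Pre-tokenization that keeps hashtags and emojis but drops @handles and URLs.
tweet_tok = TweetTokenizer(strip_handles=True)
URL_RE = re.compile(r"https?://\S+")

def preprocess(tweet):
    return tweet_tok.tokenize(URL_RE.sub("", tweet))

print(preprocess("@user check https://t.co/xyz this is #awful"))

# RMSprop with the learning rate halved after 5 epochs of no improvement.
model = torch.nn.Linear(768, 6)                     # stand-in for the actual encoder
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=5)  # call scheduler.step(dev_macro_f1) each epoch
```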
{
"text": "Our code is released as open source, available at github.com/juanmc2005/MetricAMI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation details",
"sec_num": "4.3"
},
{
"text": "We evaluate each model with the macro F1 score of a KNN classifier with K = 10 fit with all sentence embeddings from Train. However, given the high class imbalance, the a priori probability of a random embedding being closer to a non-misogynous embedding is higher than for a discredit one (see Table 1 ). To circumvent this issue, we penalize the vote for class k by the number of examples from k in Train. We believe this simple classifier to be a better measure for representation quality, as it relates to the separability and compactness properties that we expect from a metric learning model.",
"cite_spans": [],
"ref_spans": [
{
"start": 295,
"end": 302,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4.4"
},
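A numpy sketch of this evaluation; the exact penalization is our reading of the sentence above, implemented here by dividing each class's votes by its Train frequency. Array names are placeholders, and `train_labels` is assumed to be a numpy array of class ids.

```python
import numpy as np
from collections import Counter

def knn_predict(train_emb, train_labels, test_emb, k=10):
    """Cosine-distance KNN whose votes are penalized by training-class frequency."""
    counts = Counter(train_labels.tolist())
    a = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    b = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    dist = 1.0 - b @ a.T                                        # (n_test, n_train)
    preds = []
    for row in dist:
        neighbors = train_labels[np.argsort(row)[:k]]
        votes = {c: n / counts[c] for c, n in Counter(neighbors.tolist()).items()}
        preds.append(max(votes, key=votes.get))
    return np.array(preds)

# The macro F1 is then computed with sklearn.metrics.f1_score(test_labels, preds, average="macro").
```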
{
"text": "The results are summarized in Figure 2 . With a fixed architecture, it is clear that all loss functions perform equally, with the exception of LSTM with contrastive and triplet loss. As the LSTM encoder is rather shallow (4.4M parameters) in comparison to BERT (110M parameters), it is possible that contrast-based losses need bigger models to perform competitively. The fact that almost all losses perform equally well shows that, contrary to what we thought, metric learning models perform no better than cross entropy, in contrast to other findings (Srivastava et al., 2019 ) on face verification. One possible explanation is that the AMI dataset may not contain enough examples or classes for these models to exploit. However, another factor might be responsible for this behavior. One of the key differences of AMI with respect to face verification is the closedset nature of the problem. An open-set task is evaluated with unseen classes, while a closed-set task is evaluated with unseen instances of the train- ing classes. It is possible that open-set verification tasks are more suitable for metric learning than closed-set tasks, meaning that the power of metric learning might in fact lie in generalizing to unseen classes rather than unseen class instances. The fact that verification tasks more closely resemble the training objective than exact class prediction could provide an explanation for this. On the other hand, our fine-tuned BERT outperforms the Evalita winner baseline (Ahluwalia et al., 2018) , setting new state-of-the-art for misogyny categorization, with the added benefit of having comparable embeddings with a simple cosine distance.",
"cite_spans": [
{
"start": 552,
"end": 576,
"text": "(Srivastava et al., 2019",
"ref_id": "BIBREF15"
},
{
"start": 1494,
"end": 1518,
"text": "(Ahluwalia et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 30,
"end": 38,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "As a final note, results in Table 4 suggest that congenerous cosine loss and center loss hyperparameters could be more sensitive to architecture changes than other losses, as they are the only ones whose best configurations differ from one architecture to the other. Perhaps not surprisingly, we also observe that additive angular margin loss works better with lower margins. This is consistent with the margin's role, serving as an upper bound for the distance between an embedding and its centroid, while the margin in contrastive loss serves as a lower bound for the distance between two negatives.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this work we have addressed the problem of misogyny categorization from a metric learning perspective, comparing the performance of sev- eral loss functions. We hypothesized that reducing intra-class variability in this way would be beneficial. However, we have shown that none of the considered losses can outperform the regular cross entropy on the task. Our results suggest that metric learning approaches might not be suited to closedset sentence classification tasks. Finally, our fine-tuned BERT sets new state-ofthe-art performance, with a macro F1 score of 40.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This work has been partially funded by the LIHLITH project (ANR-17-CHR2-0001-03), and supported by ERA-Net CHIST-ERA, and the \"Agence Nationale pour la Recherche\" (ANR, France). It has also been made possible thanks to the Saclay-IA computing platform.Finally, we would like to thank the reviewers for their useful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Detecting Hate Speech Against Women in English Tweets",
"authors": [
{
"first": "Resham",
"middle": [],
"last": "Ahluwalia",
"suffix": ""
},
{
"first": "Himani",
"middle": [],
"last": "Soni",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Callow",
"suffix": ""
}
],
"year": 2018,
"venue": "EVALITA Evaluation of NLP and Speech Tools for Italian",
"volume": "",
"issue": "",
"pages": "194--199",
"other_ids": {
"DOI": [
"10.4000/books.aaccademia.4698"
]
},
"num": null,
"urls": [],
"raw_text": "Resham Ahluwalia, Himani Soni, Edward Callow, An- derson Nascimento, and Martine De Cock. 2018. Detecting Hate Speech Against Women in English Tweets. In Tommaso Caselli, Nicole Novielli, Vi- viana Patti, and Paolo Rosso, editors, EVALITA Eval- uation of NLP and Speech Tools for Italian, pages 194-199. Accademia University Press.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Natural Language Processing with Python",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Ewan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "TristouNet: Triplet loss for speaker turn embedding",
"authors": [
{
"first": "Herve",
"middle": [],
"last": "Bredin",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5430--5434",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2017.7953194"
]
},
"num": null,
"urls": [],
"raw_text": "Herve Bredin. 2017. TristouNet: Triplet loss for speaker turn embedding. In 2017 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5430-5434, New Or- leans, LA. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Tweetaneuse@ AMI EVALITA2018: Character-based Models for the Automatic Misogyny Identification Task",
"authors": [
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Novielli",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Final Workshop",
"volume": "12",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommaso Caselli, Nicole Novielli, Viviana Patti, and Paolo Rosso. 2018. Tweetaneuse@ AMI EVALITA2018: Character-based Models for the Au- tomatic Misogyny Identification Task. In Proceed- ings of the Final Workshop, volume 12, page 13.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "VoxCeleb2: Deep Speaker Recognition",
"authors": [
{
"first": "Joon Son",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Arsha",
"middle": [],
"last": "Nagrani",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2018,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "1086--1090",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2018-1929"
]
},
"num": null,
"urls": [],
"raw_text": "Joon Son Chung, Arsha Nagrani, and Andrew Zisser- man. 2018. VoxCeleb2: Deep Speaker Recognition. In Interspeech, pages 1086-1090. ISCA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "ArcFace: Additive Angular Margin Loss for Deep Face Recognition",
"authors": [
{
"first": "Jiankang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Niannan",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Stefanos",
"middle": [],
"last": "Zafeiriou",
"suffix": ""
}
],
"year": 2019,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. 2019. ArcFace: Additive Angular Mar- gin Loss for Deep Face Recognition. In The IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Overview of the Evalita 2018 Task on Automatic Misogyny Identification (AMI)",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Nozza",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2018,
"venue": "EVALITA Evaluation of NLP and Speech Tools for Italian",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {
"DOI": [
"10.4000/books.aaccademia.4497"
]
},
"num": null,
"urls": [],
"raw_text": "Elisabetta Fersini, Debora Nozza, and Paolo Rosso. 2018a. Overview of the Evalita 2018 Task on Auto- matic Misogyny Identification (AMI). In Tommaso Caselli, Nicole Novielli, Viviana Patti, and Paolo Rosso, editors, EVALITA Evaluation of NLP and Speech Tools for Italian, pages 59-66. Accademia University Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of the Task on Automatic Misogyny Identification at IberEval",
"authors": [
{
"first": "Elisabetta",
"middle": [],
"last": "Fersini",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Anzovino",
"suffix": ""
}
],
"year": 2018,
"venue": "IberEval@ SEPLN",
"volume": "",
"issue": "",
"pages": "214--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabetta Fersini, Paolo Rosso, and Maria Anzovino. 2018b. Overview of the Task on Automatic Misog- yny Identification at IberEval 2018. In IberEval@ SEPLN, pages 214-228.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Spoken Language Identification Using LSTM-Based Angular Proximity",
"authors": [
{
"first": "G",
"middle": [],
"last": "Gelly",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Gauvain",
"suffix": ""
}
],
"year": 2017,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "2566--2570",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2017-1334"
]
},
"num": null,
"urls": [],
"raw_text": "G. Gelly and J.L. Gauvain. 2017. Spoken Language Identification Using LSTM-Based Angular Proxim- ity. In Interspeech, pages 2566-2570. ISCA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dimensionality Reduction by Learning an Invariant Mapping",
"authors": [
{
"first": "R",
"middle": [],
"last": "Hadsell",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2006,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "2",
"issue": "",
"pages": "1735--1742",
"other_ids": {
"DOI": [
"10.1109/CVPR.2006.100"
]
},
"num": null,
"urls": [],
"raw_text": "R. Hadsell, S. Chopra, and Y. LeCun. 2006. Dimen- sionality Reduction by Learning an Invariant Map- ping. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 1735-1742, New York, NY, USA. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Long Short-Term Memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {
"DOI": [
"10.1162/neco.1997.9.8.1735"
]
},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Rethinking Feature Discrimination and Polymerization for Large-scale Recognition",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hongyang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaogang",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Liu, Hongyang Li, and Xiaogang Wang. 2017. Rethinking Feature Discrimination and Polymer- ization for Large-scale Recognition. ArXiv, abs/1710.00870.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Efficient Estimation of Word Representations in Vector Space. ArXiv, abs/1301.3781. Nils Reimers and Iryna Gurevych",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jef- frey Dean. 2013. Efficient Estimation of Word Rep- resentations in Vector Space. ArXiv, abs/1301.3781. Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence Embeddings using Siamese BERT- Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "FaceNet: A Unified Embedding for Face Recognition and Clustering",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Schroff",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Kalenichenko",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Philbin",
"suffix": ""
}
],
"year": 2015,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "815--823",
"other_ids": {
"DOI": [
"10.1109/CVPR.2015.7298682"
]
},
"num": null,
"urls": [],
"raw_text": "Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. FaceNet: A Unified Embedding for Face Recognition and Clustering. The IEEE Confer- ence on Computer Vision and Pattern Recognition (CVPR), pages 815-823.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Performance Comparison of Loss Functions for Deep Face Recognition",
"authors": [
{
"first": "Yash",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Vaishnav",
"middle": [],
"last": "Murali",
"suffix": ""
},
{
"first": "Shiv Ram",
"middle": [],
"last": "Dubey",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yash Srivastava, Vaishnav Murali, and Shiv Ram Dubey. 2019. A Performance Comparison of Loss Functions for Deep Face Recognition. ArXiv, abs/1901.05903.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "CosFace: Large Margin Cosine Loss for Deep Face Recognition",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yitong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Dihong",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Jingchao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Di- hong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. 2018. CosFace: Large Margin Cosine Loss for Deep Face Recognition. In The IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Discriminative Feature Learning Approach for Deep Face Recognition",
"authors": [
{
"first": "Yandong",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Kaipeng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Qiao",
"suffix": ""
}
],
"year": 2016,
"venue": "Computer Vision -ECCV 2016",
"volume": "9911",
"issue": "",
"pages": "499--515",
"other_ids": {
"DOI": [
"10.1007/978-3-319-46478-7_31"
]
},
"num": null,
"urls": [],
"raw_text": "Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. 2016. A Discriminative Feature Learning Approach for Deep Face Recognition. In Computer Vision -ECCV 2016, volume 9911, pages 499-515, Cham. Springer International Publishing.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFace's Trans- formers: State-of-the-art Natural Language Process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning Discriminative Features for Speaker Identification and Verification",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Atul",
"middle": [],
"last": "Rai",
"suffix": ""
}
],
"year": 2018,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "2237--2241",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2018-1015"
]
},
"num": null,
"urls": [],
"raw_text": "Sarthak Yadav and Atul Rai. 2018. Learning Discrimi- native Features for Speaker Identification and Verifi- cation. In Interspeech, pages 2237-2241.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Depiction of embeddings in two dimensions. The dotted vector w k represents a centroid for some class k, while the other vectors are sentence embeddings. \u03b8 values are angles separating two vectors."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "\u22123 , m = 0.05, \u03b1 = 100 \u2022 LR = 10 \u22125 , m = 0.05, \u03b1 = 100 \u2022 Center LR = 10 \u22124 , \u03bb = 1000 \u2022 LR = 10 \u22125 , \u03bb = 0.1 \u2022 Congenerous LR = 10 \u22123 , \u03b1 = 10 \u2022 cosine LR = 10 \u22125 , \u03b1 = 100 \u2022 Contrastive LR = 10 \u22124 , m = 0.25 \u2022 LR = 10 \u22126 , m = 0.25 \u2022 Triplet LR = 10 \u22124 , \u03b1 = 1000 \u2022 LR = 10 \u22126 , \u03b1 = 1000 \u2022"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "F1 scores on Test for each architecture and loss function. Scores are calculated as the mean of 10 runs and standard deviation is shown as error bars. The baseline of the Evalita 2018 winner(Ahluwalia et al., 2018) is shown for reference."
},
"TABREF0": {
"num": null,
"text": "Number of sentences per class for each partition of the AMI dataset. Note that classes are greatly imbalanced.",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF1": {
"num": null,
"text": "Values tested during initial hyper-parameter search, totaling 486 configurations. LR stands for learning rate, and m, \u03b1 and \u03bb are loss parameters (see Section 2). Values with \u2022 are LSTM only and values with \u2022 are BERT only.",
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF2": {
"num": null,
"text": "Best hyper-parameter configurations found per loss function. LR stands for learning rate, and m, \u03b1 and \u03bb are loss parameters (see Section 2). Rows with \u2022 correspond to LSTM and rows with \u2022 to BERT.",
"type_str": "table",
"content": "<table/>",
"html": null
}
}
}
}