{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:13:59.598571Z"
},
"title": "DAAI at CASE 2021 Task 1: Transformer-based Multilingual Socio-political and Crisis Event Detection",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Birmingham City University",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Mariam",
"middle": [],
"last": "Adedoyin-Olowe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Birmingham City University",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Jagdev",
"middle": [],
"last": "Bhogal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Birmingham City University",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Medhat Gaber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Birmingham City University",
"location": {
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic socio-political and crisis event detection has been a challenge for natural language processing as well as social and political science communities, due to the diversity and nuance in such events and high accuracy requirements. In this paper, we propose an approach which can handle both document and cross-sentence level event detection in a multilingual setting using pretrained transformer models. Our approach became the winning solution in document level predictions and secured the 3 rd place in cross-sentence level predictions for the English language. We could also achieve competitive results for other languages to prove the effectiveness and universality of our approach.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic socio-political and crisis event detection has been a challenge for natural language processing as well as social and political science communities, due to the diversity and nuance in such events and high accuracy requirements. In this paper, we propose an approach which can handle both document and cross-sentence level event detection in a multilingual setting using pretrained transformer models. Our approach became the winning solution in document level predictions and secured the 3 rd place in cross-sentence level predictions for the English language. We could also achieve competitive results for other languages to prove the effectiveness and universality of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With technological advancements, today, we have access to a vast amount of data related to social and political factors. These data may contain information on a wide range of events such as political violence, environmental catastrophes and economic crises which are important to prevent or resolve conflicts, improve the quality of life and protect citizens. However, with the increasing data volume, manual efforts for event detection have become too expensive making the requirement of automated and accurate methods crucial (H\u00fcrriyetoglu et al., 2020) .",
"cite_spans": [
{
"start": 528,
"end": 555,
"text": "(H\u00fcrriyetoglu et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Considering this timely requirement, CASE 2021 Task 1: Multilingual protest news detection is designed (H\u00fcrriyetoglu et al., 2021) . This task is composed of four subtasks targeting different data levels. Subtask 1 is to identify documents which contain event information. Similarly, subtask 2 is to identify event described sentences. Subtask 3 targets the cross-sentence level to group sentences which describe the same event. The final subtask is to identify the event trigger and its arguments at the entity level. Since a news article can contain one or more events and a single event can be described together with some previous or relevant details, it is important to focus on different data levels to obtain more accurate and complete information.",
"cite_spans": [
{
"start": 103,
"end": 130,
"text": "(H\u00fcrriyetoglu et al., 2021)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes our approach for document and cross-sentence level event detection including an experimental study. Our approach is mainly based on pretrained transformer models. We use improved model architectures, different learning strategies and unsupervised algorithms to make effective predictions. To facilitate the effortless generalisation across the languages, we do not use any language-specific processing or additional resources. Our submissions achieved the 1 st place in document level predictions and 3 rd place in crosssentence level predictions for the English language. Demonstrating the universality of our approach, we could obtain competitive results for other languages too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organised as follows. Section 2 describes the related work done in the field of socio-political event detection. Details of the task and datasets are provided in Section 3. Section 4 describes the proposed approaches. The experimental setup is described in Section 5 followed by results and evaluation in Section 6. Finally, Section 7 concludes the paper. Additionally, we provide our code to the community which will be freely available to everyone interested in working in this area using the same methodology 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In early work, the majority of event detection approaches were data-driven and knowledge-driven (Hogenboom et al., 2011) . Since the data-driven approaches are only based on the statistics of the underlying corpus, they missed the important semantical relationships. The knowledge-driven or rule-based approaches were proposed to tackle this limitation, but they highly rely on the targeted domains or languages (Danilova and Popova, 2014) .",
"cite_spans": [
{
"start": 96,
"end": 120,
"text": "(Hogenboom et al., 2011)",
"ref_id": "BIBREF13"
},
{
"start": 412,
"end": 439,
"text": "(Danilova and Popova, 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Later, there was a more focus on traditional machine learning-based models (e.g. support vector machines, decision trees) including different feature extraction techniques (e.g. natural language parsing, word vectorisation) (Schrodt et al., 2014; Sonmez et al., 2016) . Also, there was a tendency to apply deep learning-based approaches (e.g. CNN, FFNN) too following their success in many information retrieval and natural language processing (NLP) tasks (Lee et al., 2017; Ahmad et al., 2020) . However, these approaches are less expandable to low-resource languages, due to the lack of training data to fine-tune the models.",
"cite_spans": [
{
"start": 224,
"end": 246,
"text": "(Schrodt et al., 2014;",
"ref_id": "BIBREF31"
},
{
"start": 247,
"end": 267,
"text": "Sonmez et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 456,
"end": 474,
"text": "(Lee et al., 2017;",
"ref_id": "BIBREF18"
},
{
"start": 475,
"end": 494,
"text": "Ahmad et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Targeting this major limitation, in this paper we propose an approach which is based on pretrained transformer models. Due to the usage of general knowledge available with the pretrained models and their multilingual capabilities, our approach can easily support event detection in multiple languages including low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "CASE 2021 Task 1: Multilingual protest news detection is composed of four subtasks targeting event information at document, sentence, cross-sentence and token levels (H\u00fcrriyetoglu et al., 2021) . Mainly the socio-political and crisis events which are in the scope of contentious politics and characterised by riots and social movements are focused. Among these subtasks, we participated in subtask 1 and subtask 3 which are further described below.",
"cite_spans": [
{
"start": 166,
"end": 193,
"text": "(H\u00fcrriyetoglu et al., 2021)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtasks and Data",
"sec_num": "3"
},
{
"text": "Subtask 1: Document Classification Subtask 1 is designed as a document classification task. Participants need to predict a binary label of '1' if the news article contains information about a past or ongoing event and '0' otherwise. To preserve the multilinguality of the task, four different languages English, Spanish, Portuguese and Hindi have been considered for data preparation. Comparatively, a high number of training instances were provided with English than Spanish and Portuguese. No training data were provided for the Hindi language. For final evaluations, test data were provided without labels. The data split sizes in each language are summarised in Table 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 666,
"end": 673,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Subtasks and Data",
"sec_num": "3"
},
{
"text": "The main motivation behind the proposed approaches for event document identification and event sentence coreference identification is the recent success gained by transformer-based architectures in various NLP and information retrieval tasks such as language detection (Jauhiainen et al., 2021) question answering (Yang et al., 2019) and offensive language detection (Husain and Uzuner, 2021; . Apart from providing strong results compared to RNN based architectures, transformer models like BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) provide pretrained language models that support more than 100 languages which is a huge benefit when it comes to multilingual research. The available models have been trained on general tasks like language modelling and then can be fine-tuned for downstream tasks like text classification (Sun et al., 2019) . Depending on the nature of the targeted subtask, we involved different transformer models along with different learning strategies to extract event information as mentioned below.",
"cite_spans": [
{
"start": 269,
"end": 294,
"text": "(Jauhiainen et al., 2021)",
"ref_id": "BIBREF17"
},
{
"start": 314,
"end": 333,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 367,
"end": 392,
"text": "(Husain and Uzuner, 2021;",
"ref_id": "BIBREF16"
},
{
"start": 497,
"end": 518,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 529,
"end": 551,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 841,
"end": 859,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "Document classification can be considered as a sequence classification problem. According to recent literature, transformer architectures have shown promising results in this area (Ranasinghe et al., 2019b; Hettiarachchi and Ranasinghe, 2020) . Transformer models take an input of a sequence and output the representations of the sequence. The input sequence could contain one or two segments separated by a special token [SEP] . In this approach, we considered a whole document or a news article as a single sequence and no [SEP] token is used. As the first token of the sequence, another special token [CLS] is used and it returns a special embedding corresponding to the whole sequence which is used for text classification tasks (Sun et al., 2019) . A simple softmax classifier is added to the top of the transformer model to predict the probability of a class. The architecture of the transformer-based sequence classifier is shown in Figure 1 . Unfortunately, the majority of transformer models such as BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) fails to process documents with a higher sequence length than 512. This limitation is introduced due to the self-attention operation used by these architectures which scale quadratically with the sequence length (Beltagy et al., 2020) . Therefore, we specifically focused on improved transformer models targetting long documents: Longformer (Beltagy et al., 2020) and BigBird (Zaheer et al., 2020) . Longformer utilises an attention mechanism that scales linearly with sequence length and BigBird utilises a sparse attention mechanism to handle long sequences.",
"cite_spans": [
{
"start": 180,
"end": 206,
"text": "(Ranasinghe et al., 2019b;",
"ref_id": "BIBREF29"
},
{
"start": 207,
"end": 242,
"text": "Hettiarachchi and Ranasinghe, 2020)",
"ref_id": "BIBREF11"
},
{
"start": 422,
"end": 427,
"text": "[SEP]",
"ref_id": null
},
{
"start": 525,
"end": 530,
"text": "[SEP]",
"ref_id": null
},
{
"start": 733,
"end": 751,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 1014,
"end": 1035,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 1046,
"end": 1068,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 1281,
"end": 1303,
"text": "(Beltagy et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 1410,
"end": 1432,
"text": "(Beltagy et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 1445,
"end": 1466,
"text": "(Zaheer et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 940,
"end": 948,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Subtask1: Document Classification",
"sec_num": "4.1"
},
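As an illustration of the sequence classification setup described above, the following is a minimal sketch assuming the HuggingFace Transformers library; the checkpoint name, example documents and labels are placeholders rather than the released implementation (BigBird and Longformer checkpoints were used for long documents).

```python
# Minimal sketch of transformer-based document classification (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-cased"  # placeholder; long-document checkpoints (BigBird/Longformer) can be swapped in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

docs = ["Protesters clashed with police in the capital ...",
        "The city council approved the new budget ..."]
labels = torch.tensor([1, 0])  # 1: contains event information, 0: otherwise

# Each document is encoded as a single sequence (no [SEP] segments); the [CLS]
# representation feeds the softmax classification head on top of the transformer.
batch = tokenizer(docs, truncation=True, max_length=512, padding=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # an optimiser step would follow in a full training loop
```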
{
"text": "We applied a few preprocessing techniques to data before inserting them into the models. All the selected techniques are language-independent to support multilingual experiments. Analysing the datasets, there were documents with very low sequence length (< 5) and they were removed. Further, URLs were removed and repeating symbols more than three times (e.g. =====) were replaced by three occurrences (e.g. ===) because they are uninformative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing:",
"sec_num": null
},
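A small sketch of the language-independent preprocessing steps above is given below; the regular expressions and the length check are our illustration of the described rules, not necessarily the exact released code.

```python
# Illustrative preprocessing following the steps described in the text.
import re

def preprocess(documents):
    cleaned = []
    for doc in documents:
        doc = re.sub(r"https?://\S+", "", doc)       # remove URLs
        doc = re.sub(r"(\S)\1{3,}", r"\1\1\1", doc)  # '=====' -> '===' (more than three repeats)
        if len(doc.split()) >= 5:                    # drop documents with very low sequence length (< 5)
            cleaned.append(doc.strip())
    return cleaned
```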
{
"text": "Event Sentence Coreference Identification (ESCI) can be considered as a clustering problem. If a set of sentences are assigned to clusters based on their semantic similarity, each cluster will represent separate events. To perform clustering, each sentence needs to be mapped to an embedding which preserves its semantic details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask3: ESCI",
"sec_num": "4.2"
},
{
"text": "Different approaches were proposed to obtain sentence embeddings by previous research. Based on the word embedding models such as GloVe (Pennington et al., 2014) , the average of word embeddings over a sentence was used. Later, more improved architectures like InferSent (Conneau et al., 2017) which is based on a siamese BiLSTM network with max pooling, and Universal Sentence Encoder (Cer et al., 2018) which is based on a transformer network and augmented unsupervised learning were developed. However, with the improved performance on NLP tasks by transformers, there was a tendency to input sentences into models like BERT and get the output of the first token ([CLS]) or the average of output layer as a sentence embedding (May et al., 2019; Qiao et al., 2019) . These approaches were found as worse than average GloVe embeddings due to the architecture of BERT which was designed targeting classification or regression tasks (Reimers et al., 2019) .",
"cite_spans": [
{
"start": 136,
"end": 161,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF23"
},
{
"start": 271,
"end": 293,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 386,
"end": 404,
"text": "(Cer et al., 2018)",
"ref_id": "BIBREF4"
},
{
"start": 729,
"end": 747,
"text": "(May et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 748,
"end": 766,
"text": "Qiao et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 932,
"end": 954,
"text": "(Reimers et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings",
"sec_num": "4.2.1"
},
{
"text": "Considering these limitations and characteristics of transformer-based models, Reimers et al. (2019) proposed a new architecture named Sentence Transformer (STransformer), a modification to the transformers to derive semantically meaningful sentence embeddings. According to the experimental studies, STransformers outperformed average GloVe embeddings, specialised models like InferSent and Universal Sentence Encoder, and BERT embeddings (Reimers et al., 2019) . Considering these facts, we adopt STransformers to generate sentence embeddings in our approach.",
"cite_spans": [
{
"start": 79,
"end": 100,
"text": "Reimers et al. (2019)",
"ref_id": "BIBREF30"
},
{
"start": 440,
"end": 462,
"text": "(Reimers et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings",
"sec_num": "4.2.1"
},
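For reference, obtaining sentence embeddings with a pretrained STransformer can be sketched as follows, assuming the sentence-transformers library; the checkpoint name is an assumed example rather than the exact model used.

```python
# Sketch of generating fixed-size sentence embeddings with a Sentence Transformer.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")  # assumed checkpoint
sentences = ["Police dispersed the protesters on Friday.",
             "Officers broke up Friday's demonstration."]
embeddings = model.encode(sentences)  # one vector per sentence
print(embeddings.shape)
```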
{
"text": "STransformer creates a siamese network using transformer models like BERT to fine-tune the model to produce effective sentence embeddings. A pooling layer is added to the top of the transformer model to generate fixed-sized embeddings for sentences. The siamese network takes a sentence pair as the input and passes them through the network to generate embeddings (Ranasinghe et al., 2019a) . Then compute the similarity between embeddings using cosine similarity and compare the value with the gold score to fine-tune the network. The architecture of STransformer is shown in Figure 2 . Data Formatting: To facilitate the STransformer fine-tuning or training, we formatted given sentences into pairs and assigned the similarity of '1' if both sentences belong to the same cluster and '0' if not. During the pairing, the order of sentences is not considered. Thus, for n sentences, (n \u00d7 (n \u2212 1))/2 pairs were generated. For example, sentence pairs and labels generated for the data sample given in Listing 1 are shown in Table 3 . . Considering the availability of training data and recent successful applications, the pairwise prediction-based clustering approach is focused.",
"cite_spans": [
{
"start": 364,
"end": 390,
"text": "(Ranasinghe et al., 2019a)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 577,
"end": 585,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1021,
"end": 1028,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Sentence Embeddings",
"sec_num": "4.2.1"
},
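The pair formatting described above can be sketched as follows (illustrative only); for n sentences it produces (n × (n − 1))/2 unordered pairs labelled by cluster membership.

```python
# Sketch of the sentence pair formatting used for STransformer training.
from itertools import combinations

def make_pairs(sentences, cluster_ids):
    pairs = []
    for (i, s1), (j, s2) in combinations(enumerate(sentences), 2):
        label = 1 if cluster_ids[i] == cluster_ids[j] else 0  # same event cluster -> 1
        pairs.append((s1, s2, label))
    return pairs  # n * (n - 1) / 2 pairs

example = make_pairs(["sentence a", "sentence b", "sentence c"], cluster_ids=[0, 0, 1])
```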
{
"text": "Hierarchical Clustering: For the hierarchical clustering algorithm, we used Hierarchical Agglomerative Clustering (HAC). Each sentence is converted into embeddings to input to the clustering algorithm. HAC considers all data points as separate clusters at the beginning and then merge them based on cluster distance using a linkage method. The tree-like diagram generated by this process is known as a dendrogram and a particular distance threshold is used to cut it into clusters (Manning et al., 2008) . For the distance metric, cosine distance is used, because it proved to be effective for measurements in textual data (Mikolov et al., 2013; Antoniak and Mimno, 2018) and a variant of it is used with STransformer models. For the linkage method, single, complete and average schemes were considered for initial experiments and the average scheme was selected among them because it outperformed others. We picked the optimal distance threshold automatically using the training data. If training data is further split into training and validation sets to use with STransformers, only the validation set is used to pick the cluster threshold, because the rest of the data is known to the embedding generated model.",
"cite_spans": [
{
"start": 481,
"end": 503,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF19"
},
{
"start": 623,
"end": 645,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF21"
},
{
"start": 646,
"end": 671,
"text": "Antoniak and Mimno, 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings",
"sec_num": "4.2.1"
},
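A minimal sketch of this HAC step, assuming SciPy, is shown below; the distance threshold argument is illustrative, since the actual value was picked automatically from the training or validation data as described.

```python
# Sketch of HAC over sentence embeddings with cosine distance and average linkage.
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def cluster_sentences(embeddings, distance_threshold):
    distances = pdist(embeddings, metric="cosine")     # pairwise cosine distances
    dendrogram = linkage(distances, method="average")  # average-linkage agglomeration
    return fcluster(dendrogram, t=distance_threshold, criterion="distance")
```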
{
"text": "We used the pairwise prediction-based clustering algorithm proposed by \u00d6rs et al. 2020which became the winning solution of the ESCI task in the AESPEN-2020 workshop (H\u00fcrriyetoglu et al., 2020) . Originally this algorithm used the BERT model to predict whether a certain sentence pair belongs to the same event or not. In this research, we used STransformers to make those predictions except general transformers. Since a STransformer model is designed to obtain embeddings, to derive labels (i.e. '1' if the sentence pair belong to the same event and '0' if not) from them we used cosine similarity with a threshold. The optimal value computed during the model evaluation process is used as the threshold.",
"cite_spans": [
{
"start": 165,
"end": 192,
"text": "(H\u00fcrriyetoglu et al., 2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pairwise Prediction-based Clustering:",
"sec_num": null
},
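The pairwise prediction step can be sketched as below: cosine similarity between STransformer embeddings is thresholded to decide whether two sentences report the same event. The simple merge of positive pairs into groups is our illustration only; the actual grouping follows the algorithm of Örs et al. (2020).

```python
# Sketch of pairwise same-event prediction via a cosine-similarity threshold,
# followed by a naive connected-components style merge (illustration only).
from sklearn.metrics.pairwise import cosine_similarity

def pairwise_clusters(embeddings, threshold):
    sims = cosine_similarity(embeddings)
    n = len(embeddings)
    cluster_of = list(range(n))                 # start with singleton clusters
    for i in range(n):
        for j in range(i + 1, n):
            if sims[i, j] >= threshold:         # predicted as the same event
                old, new = cluster_of[j], cluster_of[i]
                cluster_of = [new if c == old else c for c in cluster_of]
    return cluster_of
```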
{
"text": "This section describes the learning configurations, transformer models and hyper-parameters used for the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5"
},
{
"text": "We focused on different learning configurations depending on data and model availability, and multilingual setting. Considering the availability of data and models, we used the following configurations for the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Configurations",
"sec_num": "5.1"
},
{
"text": "Pretrained (No Learning): Pretrained models are used without making any modifications to them to make the predictions. In this case, models pretrained using a similar objective to the target objective need to be selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Configurations",
"sec_num": "5.1"
},
{
"text": "Fine-tuning: Under fine-tuning, we retrain an available model to a downstream task or the same task model already trained. This learning allows the model to be familiar with the targeted data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Configurations",
"sec_num": "5.1"
},
{
"text": "From-scratch Learning: Models are built from scratch using the targeted data. This procedure helps to mitigate the unnecessary biases made by the data used to train available models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Configurations",
"sec_num": "5.1"
},
{
"text": "Language Modelling (LM): In LM, we retrain the transformer model on the targeted dataset using the model's initial training objective before fine-tuning it for the downstream task. This step helps increase the model understanding of data (Hettiarachchi and Ranasinghe, 2020) .",
"cite_spans": [
{
"start": 238,
"end": 274,
"text": "(Hettiarachchi and Ranasinghe, 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Configurations",
"sec_num": "5.1"
},
{
"text": "For multilingual data, the following configurations are considered to support both high-and lowresource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Configurations",
"sec_num": "5.1"
},
{
"text": "Monolingual Learning: In monolingual learning, we build the model from the training data only from that particular language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Configurations",
"sec_num": "5.1"
},
{
"text": "In multilingual learning, we concatenate available training data from all languages and build a single model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Learning:",
"sec_num": null
},
{
"text": "In zero-shot learning, we use the models fine-tuned for the same task using training data from other language(s) to make the predictions. The multilingual and cross-lingual nature of the transformer models has provided the ability to do this ; Hettiarachchi and Ranasinghe, 2021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-shot Learning:",
"sec_num": null
},
{
"text": "We used monolingual and multilingual general transformers as well as pretrained STransformers for our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers",
"sec_num": "5.2"
},
{
"text": "General Transformers: As monolingual models, we used transformer models built for each of the targeted languages. For English, BigBird (Zaheer et al., 2020), Longformer (Beltagy et al., 2020) and BERT English (Devlin et al., 2019) et al., 2020) models which are variants of the BERT model were considered. As multilingual models, BERT multilingual version and XLM-R (Conneau et al., 2020 ) models were used. Among these models, a higher sequence length than 512 is only supported by BigBird and Longformer models available for English. We used HuggingFace's Transformers library (Wolf et al., 2020) to obtain the models.",
"cite_spans": [
{
"start": 169,
"end": 191,
"text": "(Beltagy et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 209,
"end": 230,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 366,
"end": 387,
"text": "(Conneau et al., 2020",
"ref_id": "BIBREF5"
},
{
"start": 579,
"end": 598,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers",
"sec_num": "5.2"
},
{
"text": "Sentence Transformers: STransformers provide pretrained models for different tasks 2 . Among them, we selected the best-performed models trained for semantic textual similarity (STS) and duplicate question identification, because these areas are related to the same event prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers",
"sec_num": "5.2"
},
{
"text": "We used a Nvidia Tesla K80 GPU to train the models. Each input dataset is divided into a training set and a validation set using a 0.9:0.1 split. We predominantly fine-tuned the learning rate and the number of epochs of the model manually to obtain the best results for the validation set. For document classification, we obtained 1e \u2212 5 as the best value for the learning rate and 3 as the best value for the number of epochs. The same learning rate was found as the best value for STransformers with epochs of 5. For the sequence length, different values have experimented with document classification and they are further discussed in Section 6.1. A fixed sequence length of 136 was used for ESCI considering its data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameter Configurations",
"sec_num": "5.3"
},
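For clarity, the reported hyper-parameter choices can be summarised as a small configuration sketch; only the listed values come from the text, and everything else about the training loop is assumed.

```python
# Hyper-parameter values reported above, collected in one place (illustrative).
config = {
    "train_val_split": (0.9, 0.1),
    "learning_rate": 1e-5,
    "document_classification": {"epochs": 3, "max_seq_lengths_tried": [256, 512, 700]},
    "esci_stransformer": {"epochs": 5, "max_seq_length": 136},
}
```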
{
"text": "To improve the performance of document classification, we used the majority-class self-ensemble approach mentioned in (Hettiarachchi and Ranasinghe, 2020) . During the training, we trained three models with different random seeds and considered the majority-class returned by the models as the final prediction.",
"cite_spans": [
{
"start": 118,
"end": 154,
"text": "(Hettiarachchi and Ranasinghe, 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameter Configurations",
"sec_num": "5.3"
},
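The majority-class self-ensemble can be sketched as follows (illustrative only): three models trained with different random seeds vote on each test document.

```python
# Sketch of majority voting over predictions from three differently seeded models.
from collections import Counter

def majority_vote(predictions_per_seed):
    # predictions_per_seed: one list of labels per random seed, all of equal length
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions_per_seed)]

final = majority_vote([[1, 0, 1], [1, 1, 1], [0, 0, 1]])  # -> [1, 0, 1]
```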
{
"text": "To train STransformers, we selected the online contrastive loss, an improved version of the con- trastive loss function. The contrastive loss function learns the parameters by reducing the distance between neighbours or semantically similar embeddings and increasing the distance between nonneighbours or semantically dissimilar embeddings (Hadsell et al., 2006) . The online version automatically detects the hard cases (i.e. negative pairs with a low distance than the largest distance of positive pairs and positive pairs with a high distance than the lowest distance of negative pairs) in a batch and calculates the loss only for them.",
"cite_spans": [
{
"start": 340,
"end": 362,
"text": "(Hadsell et al., 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyper-parameter Configurations",
"sec_num": "5.3"
},
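A minimal sketch of building an STransformer from a general transformer model and training it with the online contrastive loss is given below, assuming the sentence-transformers library; the base model, batch size and example pairs are placeholders rather than the exact released configuration.

```python
# Sketch of from-scratch STransformer training with the online contrastive loss.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses, models

word_embedding = models.Transformer("bert-base-multilingual-cased", max_seq_length=136)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension())  # mean pooling on top
model = SentenceTransformer(modules=[word_embedding, pooling])

train_examples = [
    InputExample(texts=["sentence a", "sentence b"], label=1),  # same event
    InputExample(texts=["sentence a", "sentence c"], label=0),  # different events
]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.OnlineContrastiveLoss(model)  # loss computed only on hard pairs in each batch

model.fit(train_objectives=[(train_loader, train_loss)], epochs=5)
```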
{
"text": "In this section, we report the conducted experiments and their results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Evaluation",
"sec_num": "6"
},
{
"text": "Task organisers used Macro F1 as the evaluation metric for subtask 1. Since only the training data were released, we separated a dev set from each training dataset to evaluate our approach. Depending on the data size, 20% from English and 10% from other-language training data were separated as dev data. Initially, we analysed the performance of finetuned document classifiers for English using BERT and improved transformer models for long documents, along with varying sequence length. Considering the sequence length distribution in data, we picked the lengths of 256, 512 and 700 for these experiments. The obtained results are summarised in Table 4 . Even though we targeted large versions of the models (e.g. BigBird-roberta-large), due to the resource limitations, we had to use base versions (e.g. BigBird-roberta-base) for some experiments. According to the results, BERT models improve the F1 when we increase the sequence length. In contrast to it, both BigBird and Longformer models have higher F1 with low sequence lengths.",
"cite_spans": [],
"ref_spans": [
{
"start": 647,
"end": 654,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Subtask1: Document Classification",
"sec_num": "6.1"
},
{
"text": "For predictions in Spanish and Portuguese documents, we fine-tuned the models using both monolingual and multilingual learning approaches. Since transformers with the maximum sequence length of 512 are used, we fixed the sequence length to 512 based on the findings in English experiments. The obtained results and training configurations are summarised in Table 5 . For the high-resource language (i.e. English), multilingual learning returns a low F1 than monolingual learning. However, low-resource languages show a clear improvement in F1 with multilingual learning. Since there were no training data for the Hindi language, the best multilingual models were picked to apply the zero-shot learning approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 364,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Subtask1: Document Classification",
"sec_num": "6.1"
},
{
"text": "We report the results we obtained for test data in Table 6 . According to the results, our approach which used the BigBird model became the best system for the English language. For other languages, multilingual learning performed best. Among models, XLM-R outperformed the BERT-multilingual model. Compared to the best systems submitted, our approach has very competitive results for these languages too.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Subtask1: Document Classification",
"sec_num": "6.1"
},
{
"text": "To evaluate subtask 3 responses, organisers used CoNLL-2012 average score 3 (Pradhan et al., 2014). Similar to subtask 1, for evaluation purpose, we separated 20% from the English training dataset as dev data. There were no sufficient data in other languages for further splits. For the English language, we experimented with the clustering approaches using the embeddings generated by different STransformer models. Initially, we focused on pretrained models and their fine-tuned versions on task data. Later we built STransformers from scratch using general transformer models and further integrated LM too. The obtained results and corresponding model details are summarised in Table 7 . According to the results, STransformers build from scratch outperformed the pretrained and fine-tuned models. LM did not improve the results and it is possible when data is not enough for modelling. Among the clustering algorithms, HAC showed the best results.",
"cite_spans": [],
"ref_spans": [
{
"start": 681,
"end": 688,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Subtask3: ESCI",
"sec_num": "6.2"
},
{
"text": "We could not train any STransformer for other languages because the organisers provided a limited number of labelled instances for those languages. We used pretrained multilingual models and adhering to zero-shot learning, fine-tuned them using English data. Further English data were used to build STransformers from scratch too. All the evaluations were also done on English data and best-performing systems were chosen to make predictions for other languages. The obtained results are summarised in Table 8 . Similar to the English monolingual scenario, from-scratch multilingual models performed best.",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 509,
"text": "Table 8",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Subtask3: ESCI",
"sec_num": "6.2"
},
{
"text": "We report the results for test data in Table 9 . According to the results, for all languages, we could obtain competitive results compared to the results of the best-submitted system. Since our approach can be easily extended to different languages with very few training instances, we believe the results are at a satisfactory level.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 9",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Subtask3: ESCI",
"sec_num": "6.2"
},
{
"text": "In this paper, we presented our approach for document and cross-sentence level subtasks of CASE 2021 Task 1: Multilingual protest news detection. We mainly used pretrained transformer models including their improved architectures for long document processing and sentence embedding generation. Further, different learning strategies: monolingual, multilingual and zero-shot and, classification and clustering approaches were involved. For document level predictions, our approach achieved the 1 st place for the English language while being within the top 4 solutions for other languages. For cross-sentence level predictions, we secured the As future work, we hope to further improve semantically meaningful sentence embedding generation using improved architectures, learning strategies and ensemble methods. Also, we would like to analyse the impact of different clustering approaches on cross-sentence level predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The GitHub repository is publicly available on https: //github.com/HHansi/EventMiner",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Sentence Transformer pretrained models are available on https://www.sbert.net/docs/pretrained_ models.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The implementation of the scorer is available on https: //github.com/LoicGrobol/scorch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep learning for adverse event detection from web search",
"authors": [],
"year": null,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faizan Ahmad, Ahmed Abbasi, Brent Kitchens, Don- ald A Adjeroh, and Daniel Zeng. 2020. Deep learn- ing for adverse event detection from web search. IEEE Transactions on Knowledge and Data Engi- neering.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluating the stability of embedding-based word similarities",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Antoniak",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "107--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Antoniak and David Mimno. 2018. Evaluating the stability of embedding-based word similarities. Transactions of the Association for Computational Linguistics, 6:107-119.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05150"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Spanish pre-trained bert model and evaluation data",
"authors": [
{
"first": "Jos\u00e9",
"middle": [],
"last": "Canete",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Chaperon",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Fuentes",
"suffix": ""
},
{
"first": "Jorge",
"middle": [],
"last": "P\u00e9rez",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Canete, Gabriel Chaperon, Rodrigo Fuentes, and Jorge P\u00e9rez. 2020. Spanish pre-trained bert model and evaluation data. PML4DC at ICLR, 2020.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Universal sentence encoder",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sheng-Yi",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Hua",
"suffix": ""
},
{
"first": "Nicole",
"middle": [],
"last": "Limtiaco",
"suffix": ""
},
{
"first": "Rhomni",
"middle": [],
"last": "St John",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Guajardo-C\u00e9spedes",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Tar",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.11175"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-C\u00e9spedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder. arXiv preprint arXiv:1803.11175.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1070"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Sociopolitical event extraction using a rule-based approach",
"authors": [
{
"first": "Vera",
"middle": [],
"last": "Danilova",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Popova",
"suffix": ""
}
],
"year": 2014,
"venue": "OTM Confederated International Conferences\" On the Move to Meaningful Internet Systems",
"volume": "",
"issue": "",
"pages": "537--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vera Danilova and Svetlana Popova. 2014. Socio- political event extraction using a rule-based ap- proach. In OTM Confederated International Con- ferences\" On the Move to Meaningful Internet Sys- tems\", pages 537-546. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dimensionality reduction by learning an invariant mapping",
"authors": [
{
"first": "Raia",
"middle": [],
"last": "Hadsell",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)",
"volume": "2",
"issue": "",
"pages": "1735--1742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Confer- ence on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Em-bed2detect: Temporally clustered embedded words for event detection in social media",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Mariam",
"middle": [],
"last": "Adedoyin-Olowe",
"suffix": ""
},
{
"first": "Jagdev",
"middle": [],
"last": "Bhogal",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [
"Medhat"
],
"last": "Gaber",
"suffix": ""
}
],
"year": 2021,
"venue": "Machine Learning",
"volume": "",
"issue": "",
"pages": "1--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi, Mariam Adedoyin-Olowe, Jagdev Bhogal, and Mohamed Medhat Gaber. 2021. Em- bed2detect: Temporally clustered embedded words for event detection in social media. Machine Learn- ing, pages 1-39.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "InfoMiner at WNUT-2020 task 2: Transformerbased covid-19 informative tweet extraction",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Sixth Workshop on Noisy Usergenerated Text (W-NUT 2020)",
"volume": "",
"issue": "",
"pages": "359--365",
"other_ids": {
"DOI": [
"10.18653/v1/2020.wnut-1.49"
]
},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2020. InfoMiner at WNUT-2020 task 2: Transformer- based covid-19 informative tweet extraction. In Proceedings of the Sixth Workshop on Noisy User- generated Text (W-NUT 2020), pages 359-365, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "TransWiC at SemEval-2021 Task 2: Transformerbased Multilingual and Cross-lingual Word-in-Context Disambiguation",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of Se-mEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2021. TransWiC at SemEval-2021 Task 2: Transformer- based Multilingual and Cross-lingual Word-in- Context Disambiguation. In Proceedings of Se- mEval.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An overview of event extraction from text",
"authors": [
{
"first": "Frederik",
"middle": [],
"last": "Hogenboom",
"suffix": ""
},
{
"first": "Flavius",
"middle": [],
"last": "Frasincar",
"suffix": ""
},
{
"first": "Uzay",
"middle": [],
"last": "Kaymak",
"suffix": ""
},
{
"first": "Franciska",
"middle": [
"De"
],
"last": "Jong",
"suffix": ""
}
],
"year": 2011,
"venue": "DeRiVE@ ISWC",
"volume": "",
"issue": "",
"pages": "48--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederik Hogenboom, Flavius Frasincar, Uzay Kay- mak, and Franciska De Jong. 2011. An overview of event extraction from text. In DeRiVE@ ISWC, pages 48-57. Citeseer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multilingual protest news detectionshared task 1, case 2021",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Farhana Ferdousi Liza",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ratan",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Osman Mutlu, Farhana Ferdousi Liza, Erdem Y\u00f6r\u00fck, Ritesh Kumar, and Shyam Ratan. 2021. Multilingual protest news detection - shared task 1, case 2021. In Proceedings of the 4th Workshop on Challenges and Applications of Auto- mated Extraction of Socio-political Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automated extraction of socio-political events from news (aespen): Workshop and shared task report",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Vanni",
"middle": [],
"last": "Zavarella",
"suffix": ""
},
{
"first": "Hristo",
"middle": [],
"last": "Tanev",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Safaya",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Workshop on Automated Extraction of Socio-political Events from News 2020",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Vanni Zavarella, Hristo Tanev, Er- dem Y\u00f6r\u00fck, Ali Safaya, and Osman Mutlu. 2020. Automated extraction of socio-political events from news (aespen): Workshop and shared task report. In Proceedings of the Workshop on Automated Extrac- tion of Socio-political Events from News 2020, pages 1-6.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Leveraging offensive language for sarcasm and sentiment detection in arabic",
"authors": [
{
"first": "Fatemah",
"middle": [],
"last": "Husain",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Uzuner",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "364--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fatemah Husain and Ozlem Uzuner. 2021. Leveraging offensive language for sarcasm and sentiment detec- tion in arabic. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 364- 369.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Comparing approaches to dravidian language identification",
"authors": [
{
"first": "Tommi",
"middle": [],
"last": "Jauhiainen",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 7th Workshop on NLP for Similar Languages, Varieties and Dialects",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tommi Jauhiainen, Tharindu Ranasinghe, and Marcos Zampieri. 2021. Comparing approaches to dravid- ian language identification. In Proceedings of the 7th Workshop on NLP for Similar Languages, Vari- eties and Dialects.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Adverse drug event detection in tweets with semi-supervised convolutional neural networks",
"authors": [
{
"first": "Kathy",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sadid",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Aaditya",
"middle": [],
"last": "Datla",
"suffix": ""
},
{
"first": "Joey",
"middle": [],
"last": "Prakash",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Farri",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 26th international conference on world wide web",
"volume": "",
"issue": "",
"pages": "705--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathy Lee, Ashequl Qadir, Sadid A Hasan, Vivek Datla, Aaditya Prakash, Joey Liu, and Oladimeji Farri. 2017. Adverse drug event detection in tweets with semi-supervised convolutional neural networks. In Proceedings of the 26th international conference on world wide web, pages 705-714.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Introduction to information retrieval",
"authors": [
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D Manning, Prabhakar Raghavan, and Hin- rich Sch\u00fctze. 2008. Introduction to information re- trieval. Cambridge university press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On measuring social biases in sentence encoders",
"authors": [
{
"first": "Chandler",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shikha",
"middle": [],
"last": "Bordia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Rudinger",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "622--628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandler May, Alex Wang, Shikha Bordia, Samuel Bowman, and Rachel Rudinger. 2019. On measur- ing social biases in sentence encoders. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622-628.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Event clustering within news articles",
"authors": [
{
"first": "S\u00fcveyda",
"middle": [],
"last": "Faik Kerem \u00d6rs",
"suffix": ""
},
{
"first": "Reyyan",
"middle": [],
"last": "Yeniterzi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Yeniterzi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Workshop on Automated Extraction of Socio-political Events from News 2020",
"volume": "",
"issue": "",
"pages": "63--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faik Kerem \u00d6rs, S\u00fcveyda Yeniterzi, and Reyyan Yen- iterzi. 2020. Event clustering within news articles. In Proceedings of the Workshop on Automated Ex- traction of Socio-political Events from News 2020, pages 63-68.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Scoring coreference partitions of predicted mentions: A reference implementation",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Ed- uard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted men- tions: A reference implementation. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 30-35, Baltimore, Maryland. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Understanding the behaviors of bert in ranking",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Chenyan",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.07531"
]
},
"num": null,
"urls": [],
"raw_text": "Yifan Qiao, Chenyan Xiong, Zhenghao Liu, and Zhiyuan Liu. 2019. Understanding the behaviors of bert in ranking. arXiv preprint arXiv:1904.07531.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Semantic textual similarity with Siamese neural networks",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1004--1011",
"other_ids": {
"DOI": [
"10.26615/978-954-452-056-4_116"
]
},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2019a. Semantic textual similarity with Siamese neural networks. In Proceedings of the In- ternational Conference on Recent Advances in Nat- ural Language Processing (RANLP 2019), pages 1004-1011, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "TransQuest: Translation quality estimation with cross-lingual transformers",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Constantin",
"middle": [],
"last": "Orasan",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5070--5081",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.445"
]
},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. TransQuest: Translation quality esti- mation with cross-lingual transformers. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 5070-5081, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "MUDES: Multilingual detection of offensive spans",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations",
"volume": "",
"issue": "",
"pages": "144--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe and Marcos Zampieri. 2021. MUDES: Multilingual detection of offensive spans. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies: Demonstrations, pages 144-152, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "BRUMS at HASOC 2019: Deep learning models for multilingual hate speech and offensive language identification",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Marcos Zampieri, and Hansi Hettiarachchi. 2019b. BRUMS at HASOC 2019: Deep learning models for multilingual hate speech and offensive language identification. In In Proceed- ings of the 11th annual meeting of the Forum for In- formation Retrieval Evaluation.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers, Iryna Gurevych, Nils Reimers, Iryna Gurevych, Nandan Thakur, Nils Reimers, Johannes Daxenberger, and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Three'sa charm?: Open event data coding with el: Diablo, petrarch, and the open event data alliance",
"authors": [
{
"first": "Philip",
"middle": [
"A"
],
"last": "Schrodt",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Beieler",
"suffix": ""
},
{
"first": "Muhammed",
"middle": [],
"last": "Idris",
"suffix": ""
}
],
"year": 2014,
"venue": "ISA Annual Convention",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip A Schrodt, John Beieler, and Muhammed Idris. 2014. Three'sa charm?: Open event data coding with el: Diablo, petrarch, and the open event data alliance. In ISA Annual Convention. Citeseer.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Towards building a political protest database to explain changes in the welfare state",
"authors": [
{
"first": "Cagil",
"middle": [],
"last": "Sonmez",
"suffix": ""
},
{
"first": "Arzucan",
"middle": [],
"last": "\u00d6zg\u00fcr",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities",
"volume": "",
"issue": "",
"pages": "106--110",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cagil Sonmez, Arzucan \u00d6zg\u00fcr, and Erdem Y\u00f6r\u00fck. 2016. Towards building a political protest database to explain changes in the welfare state. In Proceed- ings of the 10th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 106-110.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "BERTimbau: pretrained BERT models for Brazilian Portuguese",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Souza",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Lotufo",
"suffix": ""
}
],
"year": 2020,
"venue": "9th Brazilian Conference on Intelligent Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2020. BERTimbau: pretrained BERT models for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent Systems, BRACIS, Rio Grande do Sul, Brazil, October 20-23 (to appear).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "How to fine-tune bert for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Chinese Computational Linguistics",
"volume": "",
"issue": "",
"pages": "194--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In Chinese Computational Linguistics, pages 194- 206, Cham. Springer International Publishing.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "End-to-end open-domain question answering with BERTserini",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Aileen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xingyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luchen",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "72--77",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4013"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics (Demonstra- tions), pages 72-77, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Big bird: Transformers for longer sequences",
"authors": [
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Guru",
"middle": [],
"last": "Guruganesh",
"suffix": ""
},
{
"first": "Avinava",
"middle": [],
"last": "Dubey",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Ainslie",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Santiago",
"middle": [],
"last": "Ontanon",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Ravula",
"suffix": ""
},
{
"first": "Qifan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.14062"
]
},
"num": null,
"urls": [],
"raw_text": "Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. Big bird: Transformers for longer se- quences. arXiv preprint arXiv:2007.14062.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Text Classification Architecture",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Siamese Sentence Transformer (STransformer) Architecture",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>Language</td><td colspan=\"2\">Train Test</td></tr><tr><td>English (en)</td><td>9324</td><td>2971</td></tr><tr><td>Spanish (es)</td><td>1000</td><td>250</td></tr><tr><td>Portuguese (pt)</td><td>1487</td><td>372</td></tr><tr><td>Hindi (hi)</td><td>-</td><td>268</td></tr></table>",
"html": null,
"type_str": "table",
"text": ".",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>: Data distribution over train and test sets in</td></tr><tr><td>subtask 1</td></tr><tr><td>Subtask 3: Event Sentence Coreference Identi-</td></tr><tr><td>fication (ESCI) Subtask 3 is targeted at the cross-</td></tr><tr><td>sentence level with the intention to identify the</td></tr><tr><td>coreference of sentences or sentences about the</td></tr><tr><td>same event. Given event-related sentences, the</td></tr><tr><td>targeted output is the clusters which represent sep-</td></tr><tr><td>arate events. As training data, per instance, a set</td></tr><tr><td>of sentences and corresponding event clusters were</td></tr><tr><td>provided as shown below:</td></tr><tr><td>{\"sentence_no\":[1,2,3],</td></tr><tr><td>\"sentences\":[</td></tr><tr><td>\"Maoist banners found 10th</td></tr><tr><td>April 2011 05:14 AM</td></tr><tr><td>KORAPUT : MAOIST banners</td></tr><tr><td>were found near the</td></tr><tr><td>District Primary Education</td></tr><tr><td>Project ( DPEP ) office</td></tr><tr><td>today in which the ultras</td></tr><tr><td>threatened to kill Shikhya</td></tr><tr><td>Sahayak candidates ,</td></tr><tr><td>outsiders to the district</td></tr><tr><td>, who have been selected</td></tr><tr><td>to join the service here</td></tr><tr><td>.\",</td></tr><tr><td>\"Maoists , in the banners ,</td></tr><tr><td>have also demanded release</td></tr><tr><td>of hardcore cadre Ghasi</td></tr><tr><td>who was arrested by police</td></tr><tr><td>earlier this week .\",</td></tr><tr><td>\"Similar banners were also</td></tr><tr><td>found between Sunki and</td></tr><tr><td>Ampavalli where Maoists</td></tr><tr><td>also blocked road by</td></tr><tr><td>felling trees .\"],</td></tr><tr><td>\"event_clusters\":[[1,2],[3]]}</td></tr><tr><td>Listing 1: Subtask 3 training data sample</td></tr><tr><td>Data from three different languages: English,</td></tr><tr><td>Spanish and Portuguese were provided. A few</td></tr><tr><td>training data instances are available with non-</td></tr><tr><td>English languages as summarised in Table 2. Simi-</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>: Data distribution over train and test sets in</td></tr><tr><td>subtask 3</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>: Sentence pairs and labels of data sample in</td></tr><tr><td>Listing 1</td></tr><tr><td>4.2.2 Clustering</td></tr><tr><td>As clustering methods, we focused on hierarchi-</td></tr><tr><td>cal clustering and the pairwise prediction-based</td></tr><tr><td>clustering approach proposed by \u00d6rs et al. (2020).</td></tr><tr><td>Hierarchical clustering is widely used with event</td></tr><tr><td>detection approaches over flat clustering because</td></tr><tr><td>flat clustering algorithms (e.g. K-means) require</td></tr><tr><td>the number of clusters as an input which is unpre-</td></tr><tr><td>dictable</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF5": {
"content": "<table><tr><td/><td>Model</td><td colspan=\"4\">Training Data Macro R Macro P Macro F1</td></tr><tr><td>English</td><td>BERT-multilingual-cased XLM-R-base</td><td>en+es+pt en+es+pt</td><td>0.8505 0.8280</td><td>0.8567 0.8727</td><td>0.8536 0.8476</td></tr><tr><td/><td>BETO-cased</td><td>es</td><td>0.6944</td><td>0.8681</td><td>0.7475 \u2021</td></tr><tr><td/><td>BERT-multilingual-cased</td><td>es</td><td>NT</td><td>NT</td><td>NT</td></tr><tr><td>Spanish</td><td>BERT-multilingual-cased</td><td>en+es+pt</td><td>0.7831</td><td>0.8111</td><td>0.7962 \u2021</td></tr><tr><td/><td>XLM-R-base</td><td>es</td><td>NT</td><td>NT</td><td>NT</td></tr><tr><td/><td>XLM-R-base</td><td>en+es+pt</td><td>0.7888</td><td>0.8530</td><td>0.8167 \u2021</td></tr><tr><td/><td>BERTimbau-large</td><td>pt</td><td>0.7672</td><td>0.8900</td><td>0.8126 \u2021</td></tr><tr><td/><td>BERT-multilingual-cased</td><td>pt</td><td>0.7595</td><td>0.8331</td><td>0.7896</td></tr><tr><td>Portuguese</td><td>BERT-multilingual-cased</td><td>en+es+pt</td><td>0.8384</td><td>0.8890</td><td>0.8611 \u2021</td></tr><tr><td/><td>XLM-R-base</td><td>pt</td><td>NT</td><td>NT</td><td>NT</td></tr><tr><td/><td>XLM-R-base</td><td>en+es+pt</td><td>0.7845</td><td>0.8449</td><td>0.8104 \u2021</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Results: Macro Recall (R), Precision (P) and F1 of document classification experiments for English using different sequence lengths and models. Best is in Bold and submitted systems are marked with \u2021.",
"num": null
},
"TABREF6": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Results of multilingual document classification experiments. Training Data column summarises the language(s) of used datasets to train models. Due to training data limitations, a few models were found to be not trainable and they are indicated with NT. Best is in Bold and submitted systems are marked with \u2021.",
"num": null
},
"TABREF8": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Document classification results for test data",
"num": null
},
"TABREF10": {
"content": "<table><tr><td/><td>Base Model</td><td>STransformer</td><td>Clustering</td><td>CoNLL Average Score</td></tr><tr><td>Pretrained</td><td colspan=\"2\">DistilBERT-base-uncased quora-distilbert-multilingual</td><td>HAC</td><td>0.8360</td></tr><tr><td>Fine-tune</td><td colspan=\"3\">DistilBERT-base-uncased quora-distilbert-multilingual DistilBERT-base-uncased quora-distilbert-multilingual (\u00d6rs et al., 2020) HAC</td><td>0.8423 \u2021 0.8362</td></tr><tr><td/><td colspan=\"2\">BERT-multilingual-cased -</td><td>HAC</td><td>0.8464 \u2021</td></tr><tr><td>From-scratch</td><td colspan=\"2\">BERT-multilingual-cased -XLM-R-large -</td><td>(\u00d6rs et al., 2020) HAC</td><td>0.8414 0.8360</td></tr><tr><td/><td>XLM-R-large</td><td>-</td><td>(\u00d6rs et al., 2020)</td><td>0.8350</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Results of ESCI for English along with different strategies experimented. Best is in Bold and submitted systems are marked with \u2021.",
"num": null
},
"TABREF11": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Results of ESCI for English using multilingual models. Best is in Bold and submitted systems are marked with \u2021.",
"num": null
},
"TABREF12": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "ESCI results for test data 3 rd place for the English language with competitive results for other languages. Despite that, our approach can support multiple languages with low or no training resources.",
"num": null
}
}
}
}