{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:34:31.076665Z"
},
"title": "InfoMiner at WNUT-2020 Task 2: Transformer-based Covid-19 Informative Tweet Extraction",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Birmingham City University",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": "",
"affiliation": {
"laboratory": "Research Group in Computational Linguistics",
"institution": "University of Wolverhampton",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Identifying informative tweets is an important step when building information extraction systems based on social media. WNUT-2020 Task 2 was organised to recognise informative tweets from noise tweets. In this paper, we present our approach to tackle the task objective using transformers. Overall, our approach achieves 10 th place in the final rankings scoring 0.9004 F1 score for the test set.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Identifying informative tweets is an important step when building information extraction systems based on social media. WNUT-2020 Task 2 was organised to recognise informative tweets from noise tweets. In this paper, we present our approach to tackle the task objective using transformers. Overall, our approach achieves 10 th place in the final rankings scoring 0.9004 F1 score for the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "By 31st August 2020, coronavirus COVID-19 is affecting 213 countries around the world infecting more than 25 million people and killing more than 800,000. Recently, much attention has been given to build monitoring systems to track the outbreaks of the virus. However, due to the fact that most of the official news sources update the outbreak information only once or twice a day, these monitoring tools have begun to use social media as the medium to get information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is a massive amount of data on social networks, e.g. about 4 millions of COVID-19 English tweets daily on the Twitter platform. However, majority of these tweets are uninformative. Thus it is important to be able to select the informative ones for downstream applications. Since the manual approaches to identify the informative tweets require significant human efforts, an automated technique to identify the informative tweets will be invaluable to the community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The objective of this shared task is to automatically identify whether a COVID-19 English tweet is informative or not. Such informative Tweets provide information about recovered, suspected, confirmed and death cases as well as location or travel history of the cases. The participants of the shared task were required to provide predictions for the test set provided by the organisers whether a tweet is informative or not. Our team used recently released transformers to tackle the problem. Despite achieving 10 th place out of 55 participants and getting high evaluation score, our approach is simple and efficient. In this paper we mainly present our approach that we used in this task. We also provide important resources to the community: the code, and the trained classification models will be freely available to everyone interested in working on identifying informative tweets using the same methodology 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the last few years, there have been several studies published on the application of computational methods in order to identify informative contents from tweets. Most of the earlier methods were based on traditional machine learning models like logistic regression and support vector machines with heavy feature engineering. Castillo et al. (2011) investigate tweet newsworthiness classification using features representing the message, user, topic and the propagation of messages. Others use features based on social influence, information propagation, syntactic and combinations of local linguistic features as well as user history and user opinion to select informative tweets (Inouye and Kalita, 2011; Yang et al., 2011; Chua and Asur, 2013) . Due to the fact that training set preparation is difficult when it comes informative tweet identification, several studies suggested unsupervised methods. Sankaranarayanan et al. (2009) built a news processing system, called TwitterStand using an unsupervised approach to classify tweets collected from pre-determined users who frequently post news about events. Even though these traditional approaches have provided good results, they 1 The GitHub repository is publicly available on https://github.com/hhansi/ informative-tweet-identification are no longer the state of the art.",
"cite_spans": [
{
"start": 327,
"end": 349,
"text": "Castillo et al. (2011)",
"ref_id": "BIBREF1"
},
{
"start": 682,
"end": 707,
"text": "(Inouye and Kalita, 2011;",
"ref_id": "BIBREF9"
},
{
"start": 708,
"end": 726,
"text": "Yang et al., 2011;",
"ref_id": "BIBREF25"
},
{
"start": 727,
"end": 747,
"text": "Chua and Asur, 2013)",
"ref_id": "BIBREF2"
},
{
"start": 905,
"end": 935,
"text": "Sankaranarayanan et al. (2009)",
"ref_id": "BIBREF19"
},
{
"start": 1187,
"end": 1188,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Considering the recent research, there was a tendency to use deep learning-based methods to identify informative tweets since they performed better than traditional machine learning-based methods. To mention few, ALRashdi and O'Keefe (2019) suggested an approach based on Bidirectional Long Short-Term Memory (Bi-LSTM) models trained using word embeddings. Another research proposed a deep multi-modal neural network based on images and text in tweets to recognise informative tweets (Kumar et al., 2020) . Among the different neural network models available, transformer models received a huge success in the area of natural language processing (NLP) recently. Since the release of BERT (Devlin et al., 2019) , transformer models gained a wide attention of the community and they were successfully applied for wide range of tasks including tweet classification tasks such as offensive tweet identification and topic identification (Y\u00fcksel et al., 2019). But we could not find any previous work on transformers for informative tweet classification. Hence, we decided to use transformer for our approach and this study will be important to the community.",
"cite_spans": [
{
"start": 484,
"end": 504,
"text": "(Kumar et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 688,
"end": 709,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "WNUT-2020 Task 2: Identification of informative COVID-19 English Tweets (Nguyen et al., 2020) is to develop a system which can automatically categorise the tweets related to coronavirus as informative or not. A data set of 10K tweets which are labelled as informative and uninformative is released to conduct this task. The class distributions of the data set splits are mentioned in Table 1 ",
"cite_spans": [
{
"start": 48,
"end": 93,
"text": "COVID-19 English Tweets (Nguyen et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 384,
"end": 391,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Task Description and Data Set",
"sec_num": "3"
},
{
"text": "The motivation behind our methodology is the recent success that the transformers had in wide range of NLP tasks like language generation (Devlin et al., 2019) , sequence classification Ranasinghe and Zampieri, 2020) , word similarity , named entity recognition (Liang et al., 2020) and question and answering (Yang et al., 2019a) . The main idea of the methodology is that we train a classification model with several transformer models in-order to identify informative tweets.",
"cite_spans": [
{
"start": 138,
"end": 159,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 186,
"end": 216,
"text": "Ranasinghe and Zampieri, 2020)",
"ref_id": "BIBREF17"
},
{
"start": 262,
"end": 282,
"text": "(Liang et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 310,
"end": 330,
"text": "(Yang et al., 2019a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "Predicting whether a certain tweet is informative or not can be considered as a sequence classification task. Since the transformer architectures have shown promising results in sequence classification tasks Ranasinghe and Zampieri, 2020) , the basis for our methodology was transformers.",
"cite_spans": [
{
"start": 208,
"end": 238,
"text": "Ranasinghe and Zampieri, 2020)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers for Text Classification",
"sec_num": "4.1"
},
{
"text": "Transformer architectures have been trained on general tasks like language modelling and then can be fine-tuned for classification tasks. (Sun et al., 2019) Transformer models take an input of a sequence and outputs the representations of the sequence. There can be one or two segments in a sequence which are separated by a special token [SEP] . In this approach we considered a tweet as a sequence and no [SEP] token is used. Another special token [CLS] is used as the first token of the sequence which contains a special classification embedding. For text classification tasks, transformer models take the final hidden state h of the [CLS] token as the representation of the whole sequence (Sun et al., 2019) . A simple softmax classifier is added to the top of the transformer model to predict the probability of a class c as shown in Equation 1 where W is the task-specific parameter matrix. The architecture of transformer-based sequence classifier is shown in Figure 1 . ",
"cite_spans": [
{
"start": 138,
"end": 156,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 339,
"end": 344,
"text": "[SEP]",
"ref_id": null
},
{
"start": 693,
"end": 711,
"text": "(Sun et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 967,
"end": 975,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Transformers for Text Classification",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(c|h) = sof tmax(W h)",
"eq_num": "(1)"
}
],
"section": "Transformers for Text Classification",
"sec_num": "4.1"
},
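{
"text": "As an illustration of this classifier (a minimal sketch, not the authors' exact code; the checkpoint name, example tweet and label order are assumptions), the softmax head over the [CLS] representation can be realised with the Hugging Face transformers library as follows:\n\n# Minimal sketch: transformer sequence classifier with a softmax head over [CLS].\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\nMODEL_NAME = \"bert-base-uncased\"  # placeholder; any pretrained checkpoint from Section 4.2 could be used\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\nmodel = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)\n\ntweet = \"Official update: 120 new confirmed COVID-19 cases reported today.\"\ninputs = tokenizer(tweet, return_tensors=\"pt\", truncation=True)\nwith torch.no_grad():\n    logits = model(**inputs).logits        # W h computed over the final [CLS] state\n    probs = torch.softmax(logits, dim=-1)  # p(c|h) = softmax(Wh), Equation 1\nprint(probs)  # class probabilities, e.g. [uninformative, informative]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers for Text Classification",
"sec_num": "4.1"
},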
{
"text": "We used several pre-trained transformer models in this task. These models were used mainly considering the popularity of them (e.g. BERT (Devlin et al., 2019) , XLNet (Yang et al., 2019b) , RoBERTa (Liu et al., 2019) , ELECTRA (Clark et al., 2020) , AL-BERT (Lan et al., 2020) ) and relatedness to the task (e.g. COVID-Twitter-BERT (CT-BERT) (M\u00fcller et al., 2020) and BERTweet (Dat Quoc Nguyen and Nguyen, 2020)).",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 167,
"end": 187,
"text": "(Yang et al., 2019b)",
"ref_id": "BIBREF24"
},
{
"start": 198,
"end": 216,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 227,
"end": 247,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 258,
"end": 276,
"text": "(Lan et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 342,
"end": 363,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers",
"sec_num": "4.2"
},
{
"text": "BERT (Devlin et al., 2019) was the first transformer model that gained a wide attention of the NLP community. It proposes a masked language modelling (MLM) objective, where some of the tokens of a input sequence are randomly masked, and the objective is to predict these masked positions taking the corrupted sequence as input. As we explained before BERT uses special tokens to obtain a single contiguous sequence for each input sequence. Specifically, the first token is always a special classification token [CLS] which is used for sentence-level tasks.",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 511,
"end": 516,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers",
"sec_num": "4.2"
},
{
"text": "RoBERTa (Liu et al., 2019) , ELECTRA (Clark et al., 2020) and ALBERT (Lan et al., 2020 ) can all be considered as variants of BERT. They make a few changes to the BERT model and achieves substantial improvements in some NLP tasks (Liu et al., 2019; Clark et al., 2020; Lan et al., 2020) . XLNet on the other hand takes a different approach to BERT (Yang et al., 2019b) . XLNet proposes a new auto-regressive method based on permutation language modelling (PLM) (Uria et al., 2016) without introducing any new symbols such as [MASK] in BERT. Also there are significant changes in the XLNet architecture like adopting two-stream selfattention and Transformer-XL (Dai et al., 2019) . Due to this XLNet outperforms BERT in multiple NLP downstream tasks (Yang et al., 2019b) .",
"cite_spans": [
{
"start": 8,
"end": 26,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 37,
"end": 57,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 69,
"end": 86,
"text": "(Lan et al., 2020",
"ref_id": "BIBREF11"
},
{
"start": 230,
"end": 248,
"text": "(Liu et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 249,
"end": 268,
"text": "Clark et al., 2020;",
"ref_id": "BIBREF3"
},
{
"start": 269,
"end": 286,
"text": "Lan et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 348,
"end": 368,
"text": "(Yang et al., 2019b)",
"ref_id": "BIBREF24"
},
{
"start": 461,
"end": 480,
"text": "(Uria et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 525,
"end": 531,
"text": "[MASK]",
"ref_id": null
},
{
"start": 660,
"end": 678,
"text": "(Dai et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 749,
"end": 769,
"text": "(Yang et al., 2019b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers",
"sec_num": "4.2"
},
{
"text": "We also used two transformer models based on Twitter; CT-BERT and BERTweet. The CT-BERT model is based on the BERT-LARGE model and trained on a corpus of 160M tweets about the coronavirus (M\u00fcller et al., 2020) while the BERTweet model is based on BERT-BASE model and trained on general tweets (Dat Quoc Nguyen and Nguyen, 2020).",
"cite_spans": [
{
"start": 188,
"end": 209,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers",
"sec_num": "4.2"
},
{
"text": "Few general data preprocessing techniques were employed with InfoMiner to preserve the universality of this method. More specifically, used tech-niques can be listed as removing or filling usernames and URLs, and converting emojis to text. Further, for uncased pretrained models (e.g. albertxxlarge-v1), all tokens were converted to lower case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.3"
},
{
"text": "In WNUT-2020 Task 2 data set, mention of a user is represented by @USER and a URL is represented by HTTPURL. For all the models except CT-BERT and BERTweet, we removed those mentions. The main reason behind this step is to remove noisy text from data. CT-BERT and BERTweet models are trained on tweet corpora and usernames and URLs are introduced to the models using special fillers. CT-BERT model knows a username as twitteruser and URL as twitterurl. Likewise, BERTweet model used the filler @USER for usernames and HTTPURL for URLs. Therefore, for these two models we used the corresponding fillers to replace usernames and URLs in the data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.3"
},
{
"text": "Emojis are found to play a key role in expressing emotions in the context of social media (Hettiarachchi and Ranasinghe, 2019). But, we cannot assure the existence of embeddings for emojis in pretrained models. Therefore as another essential preprocessing step, we converted emojis to text. For this conversion we used the Python libraries demoji 2 and emoji 3 . demoji returns a normal descriptive text and emoji returns a specifically formatted text. For an example, the conversion of is 'slightly smiling face' using demoji and ':slightly smiling face:' using emoji. For all the models except CT-BERT and BERTweet, we used demoji supported conversion. For CT-BERT and BERTweet emoji supported conversion is used, because these models are trained on correspondingly converted Tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.3"
},
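{
"text": "A minimal sketch of this preprocessing is given below (an illustration, not the authors' exact code; it assumes the demoji and emoji packages named above, with demoji.findall returning emoji-to-description pairs and emoji.demojize producing the ':...:' format):\n\n# Illustrative preprocessing sketch for the two model families described above.\nimport demoji\nimport emoji\n\nFILLERS = {\n    \"ct-bert\": {\"@USER\": \"twitteruser\", \"HTTPURL\": \"twitterurl\"},\n    \"bertweet\": {\"@USER\": \"@USER\", \"HTTPURL\": \"HTTPURL\"},  # data set tokens already match\n}\n\ndef preprocess(tweet, model_name, uncased=False):\n    if model_name in FILLERS:\n        # Tweet-specific models: map mentions/URLs to model-specific fillers, emoji-style conversion.\n        for token, filler in FILLERS[model_name].items():\n            tweet = tweet.replace(token, filler)\n        tweet = emoji.demojize(tweet)                    # e.g. ':slightly_smiling_face:'\n    else:\n        # Other models: drop mentions/URLs, demoji-style descriptive conversion.\n        tweet = tweet.replace(\"@USER\", \"\").replace(\"HTTPURL\", \"\")\n        for emo, desc in demoji.findall(tweet).items():  # maps each emoji to its description\n            tweet = tweet.replace(emo, desc)\n    return tweet.lower() if uncased else tweet",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "4.3"
},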
{
"text": "To improve the models, we experimented different fine-tuning strategies: majority class self-ensemble, average self-ensemble, entity integration and language modelling, which are described below. 1. Self-Ensemble (SE) -Self-ensemble is found as a technique which result better performance than the performance of a single model (Xu et al., 2020) . In this approach, same model architecture is trained or fine-tuned with different random seeds or train-validation splits. Then the output of each model is aggregated to generate the final results. As the aggregation methods, we analysed majority-class and average in this research. The number of models used with self-ensemble will be denoted by N .",
"cite_spans": [
{
"start": 328,
"end": 345,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4.4"
},
{
"text": "\u2022 Majority-class SE (MSE) -As the majority class, we computed the mode of the classes predicted by each model. Given a data instance, following the softmax layer, a model predicts probabilities for each class and the class with highest probability is taken as the model predicted class. \u2022 Average SE (ASE) -In average SE, final probability of class c is calculated as the average of probabilities predicted by each model as in Equation 2 where h is the final hidden state of the [CLS] token. Then the class with highest probability is selected as the final class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p ASE (c|h) = N k=1 p k (c|h) N",
"eq_num": "(2)"
}
],
"section": "Fine-tuning",
"sec_num": "4.4"
},
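{
"text": "The two aggregation schemes can be sketched as follows (an illustration under the assumption that the per-model softmax outputs are available as a NumPy array of shape (N, instances, classes); not the authors' exact code):\n\n# Illustrative aggregation sketch for self-ensembling N fine-tuned models.\nimport numpy as np\n\ndef majority_class_se(probs):\n    # MSE: each model votes with its argmax class; the mode of the votes is the final class.\n    votes = probs.argmax(axis=-1)                # shape (N, instances)\n    return np.array([np.bincount(v).argmax() for v in votes.T])\n\ndef average_se(probs):\n    # ASE: average the class probabilities over the N models (Equation 2), then take the argmax.\n    return probs.mean(axis=0).argmax(axis=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4.4"
},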
{
"text": "2. Entity Integration (EI) -Since we are using pretrained models, there can be model un-known data in the task data set such as person names, locations and organisations. As entity integration, we replaced the unknown tokens with their named entities which are known to the model, so that the familiarity of data to model can be increased. To identify the named entities, we used the pretrained models available with spaCy 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4.4"
},
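{
"text": "One way to realise this replacement is sketched below (an illustration only; the spaCy pipeline name 'en_core_web_sm' and the use of lower-cased entity labels as replacement tokens are assumptions, not necessarily the authors' exact choices):\n\n# Illustrative entity-integration sketch: replace entity mentions with their spaCy entity labels.\nimport spacy\n\nnlp = spacy.load(\"en_core_web_sm\")  # assumed pretrained spaCy pipeline\n\ndef integrate_entities(text):\n    doc = nlp(text)\n    pieces, last = [], 0\n    for ent in doc.ents:\n        pieces.append(text[last:ent.start_char])\n        pieces.append(ent.label_.lower())  # e.g. 'person', 'gpe', 'org'\n        last = ent.end_char\n    pieces.append(text[last:])\n    return \"\".join(pieces)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4.4"
},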
{
"text": "3. Language Modelling (LM) -As language modelling, we retrained the transformer model on task data set before fine-tuning it for the downstream task; text classification. This training is took place according with the model's initial trained objective. Following this technique model understanding on the task data can be improved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4.4"
},
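{
"text": "A minimal sketch of this retraining step is shown below (an illustration using the Hugging Face transformers and datasets libraries; the checkpoint name, epoch count, batch size and masking probability are placeholders rather than the authors' reported settings):\n\n# Illustrative sketch: continue masked language modelling on the task tweets before fine-tuning.\nfrom transformers import (AutoTokenizer, AutoModelForMaskedLM,\n                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)\nfrom datasets import Dataset\n\nMODEL_NAME = \"bert-base-uncased\"  # placeholder; substitute the checkpoint being fine-tuned (e.g. CT-BERT)\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\nmodel = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)\n\ntweets = [\"...\"]  # preprocessed task tweets\ndataset = Dataset.from_dict({\"text\": tweets}).map(\n    lambda batch: tokenizer(batch[\"text\"], truncation=True), batched=True)\n\ncollator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)\nargs = TrainingArguments(output_dir=\"lm-retrained\", num_train_epochs=1,\n                         per_device_train_batch_size=8)\nTrainer(model=model, args=args, data_collator=collator, train_dataset=dataset).train()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-tuning",
"sec_num": "4.4"
},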
{
"text": "In this section, we report the experiments we conducted and their results. As informed by task organisers, we used precision, recall and F1 score calculated for Informative class to measure the model performance. Results in sections 5.1 -5.3 are computed on validation data set and results in section 5.4 are computed on test data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "Initially we focused on the impact by different transformer models. Selected transformer models were fine-tuned for this task using single-model (no ensemble) and MSE with 3 models, and the obtained results are summarised in Table 2 . According to the results, CT-BERT model outperformed the other models. Also, all the models except XL-Net showed improved results with self-ensemble approach than single-model approach. Following these results and considering time and resource constraints, we limited the further experiments only to CT-BERT model.",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 232,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Impact by Transformer Model",
"sec_num": "5.1"
},
{
"text": "We experimented that increasing the epoch count from 3 to 5 increases the results. However, increasing it more than 5 did not further improved the results. Therefore, we used an epoch count of 5 in our experiments. To monitor the evaluation scores against the epoch count we used Wandb app 5 . As shown in the Figure 2 evaluation f1 score does not likely to change when trained with more than five epochs. ",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 318,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Impact by Epoch Count",
"sec_num": "5.2"
},
{
"text": "The fine-tuning strategies mentioned in Section 4.4 were experimented using CT-BERT model and obtained results are summarised in Table 3 . According to the results, in majority of experiments, ASE is given a higher F1 than MSE. The other finestrategies: EI and LM did not improve the results for this data set. As possible reasons for this reduction, having a good knowledge about COVID tweets by the model itself and insufficiency of data for language modelling can be mentioned. Additionally, we analysed the impact by different learning rates. For initial experiments a random learning rate of 1e \u22125 was picked and for further analysis a less value (1e \u22126 ) and a high value (2e \u22125 ) were picked. The value 2e \u22125 was used for pretraining and experiments of CT-BERT model (M\u00fcller et al., 2020) . According to this analysis there is a tendency to have higher F1 with higher learning rates.",
"cite_spans": [
{
"start": 774,
"end": 795,
"text": "(M\u00fcller et al., 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Impact by Fine-tuning",
"sec_num": "5.3"
},
{
"text": "The test data results of our submissions, task baseline and top-ranked system are summarised in Table 4. Considering the evaluation results on validation data set, as InfoMiner 1 we selected the fine-tuned CT-BERT model with ASE and 2e \u22125 learning rate. As InfoMiner 2 same model and parameters with MSE was picked. Among them, the highest F1 we received is for MSE strategy. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set Evaluation",
"sec_num": "5.4"
},
{
"text": "We have presented the system by InfoMiner team for WNUT-2020 Task 2. For this task, we have shown that the CT-BERT is the most successful transformer model from several transformer models we experimented. Furthermore, we presented several fine tuning strategies: self-ensemble, entity integration and language modelling that can improve the results. Overall, our approach is simple but can be considered as effective since it achieved 10 th place in the leader-board.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "As a future direction of this research, we hope to analyse the impact by different classification heads such as LSTM and Convolution Neural Network (CNN) in addition to softmax classifier on performance. Also, we hope to incorporate meta information-based features like number of retweets and likes with currently used textual features to involve social aspect for informative tweet identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "demoji repositoryhttps://github.com/ bsolomon1124/demojis 3 emoji repositoryhttps://github.com/ carpedm20/emoji",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More details about spaCy are available on https:// spacy.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Wandb app is available on https://app.wandb. ai/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Deep learning and word embeddings for tweet classification for crisis response",
"authors": [
{
"first": "Reem",
"middle": [],
"last": "Alrashdi",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Keefe",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.11024"
]
},
"num": null,
"urls": [],
"raw_text": "Reem ALRashdi and Simon O'Keefe. 2019. Deep learning and word embeddings for tweet clas- sification for crisis response. arXiv preprint arXiv:1903.11024.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Information credibility on twitter",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Marcelo",
"middle": [],
"last": "Mendoza",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Poblete",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th International Conference on World Wide Web, WWW '11",
"volume": "",
"issue": "",
"pages": "675--684",
"other_ids": {
"DOI": [
"10.1145/1963405.1963500"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of the 20th International Conference on World Wide Web, WWW '11, page 675-684, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic summarization of events from social media",
"authors": [
{
"first": "Freddy",
"middle": [],
"last": "Chua",
"suffix": ""
},
{
"first": "Sitaram",
"middle": [],
"last": "Asur",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freddy Chua and Sitaram Asur. 2013. Automatic sum- marization of events from social media.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "ELECTRA: Pretraining text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre- training text encoders as discriminators rather than generators. In ICLR.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Transformer-XL: Attentive language models beyond a fixed-length context",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2978--2988",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1285"
]
},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 2978-2988, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERTweet: A pre-trained language model for English Tweets",
"authors": [],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.10200"
]
},
"num": null,
"urls": [],
"raw_text": "Thanh Vu Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English Tweets. arXiv preprint, arXiv:2005.10200.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Emoji powered capsule network to detect type and target of offensive posts in social media",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "474--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2019. Emoji powered capsule network to detect type and target of offensive posts in social media. In Pro- ceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 474-480.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Brums at semeval-2020 task 3: Contextualised embeddings for predicting the (graded) effect of context in word similarity",
"authors": [
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
},
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hansi Hettiarachchi and Tharindu Ranasinghe. 2020. Brums at semeval-2020 task 3: Contextualised em- beddings for predicting the (graded) effect of con- text in word similarity. In Proceedings of the 14th International Workshop on Semantic Evalua- tion, Barcelona, Spain. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Comparing twitter summarization algorithms for multiple post summaries",
"authors": [
{
"first": "D",
"middle": [],
"last": "Inouye",
"suffix": ""
},
{
"first": "J",
"middle": [
"K"
],
"last": "Kalita",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing",
"volume": "",
"issue": "",
"pages": "298--306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Inouye and J. K. Kalita. 2011. Comparing twit- ter summarization algorithms for multiple post sum- maries. In 2011 IEEE Third International Con- ference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on So- cial Computing, pages 298-306.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A deep multi-modal neural network for informative twitter content classification during emergencies",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Jyoti",
"middle": [
"Prakash"
],
"last": "Singh",
"suffix": ""
},
{
"first": "Yogesh",
"middle": [
"K"
],
"last": "Dwivedi",
"suffix": ""
},
{
"first": "Nripendra",
"middle": [
"P"
],
"last": "Rana",
"suffix": ""
}
],
"year": 2020,
"venue": "Annals of Operations Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/s10479-020-03514-x"
]
},
"num": null,
"urls": [],
"raw_text": "Abhinav Kumar, Jyoti Prakash Singh, Yogesh K. Dwivedi, and Nripendra P. Rana. 2020. A deep multi-modal neural network for informative twitter content classification during emergencies. Annals of Operations Research.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bond: Bert-assisted open-domain named entity recognition with distant supervision",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haoming",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Siawpeng",
"middle": [],
"last": "Er",
"suffix": ""
},
{
"first": "Ruijia",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tuo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '20",
"volume": "",
"issue": "",
"pages": "1054--1064",
"other_ids": {
"DOI": [
"10.1145/3394486.3403149"
]
},
"num": null,
"urls": [],
"raw_text": "Chen Liang, Yue Yu, Haoming Jiang, Siawpeng Er, Ruijia Wang, Tuo Zhao, and Chao Zhang. 2020. Bond: Bert-assisted open-domain named entity recognition with distant supervision. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '20, page 1054-1064, New York, NY, USA. Asso- ciation for Computing Machinery.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Covid-twitter-bert: A natural language processing model to analyse covid-19 content on twitter",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Marcel",
"middle": [],
"last": "Salath\u00e9",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Per",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kummervold",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.07503"
]
},
"num": null,
"urls": [],
"raw_text": "Martin M\u00fcller, Marcel Salath\u00e9, and Per E Kummervold. 2020. Covid-twitter-bert: A natural language pro- cessing model to analyse covid-19 content on twitter. arXiv preprint arXiv:2005.07503.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "WNUT-2020 Task 2: Identification of Informative COVID-19 English Tweets",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "Afshin",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Mai",
"middle": [
"Hoang"
],
"last": "Rahimi",
"suffix": ""
},
{
"first": "Linh",
"middle": [
"The"
],
"last": "Dao",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Doan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 6th Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Thanh Vu, Afshin Rahimi, Mai Hoang Dao, Linh The Nguyen, and Long Doan. 2020. WNUT-2020 Task 2: Identification of Infor- mative COVID-19 English Tweets. In Proceedings of the 6th Workshop on Noisy User-generated Text.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BRUMS at SemEval-2020 task 12 : Transformer based multilingual offensive language identification in social media",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe and Hansi Hettiarachchi. 2020. BRUMS at SemEval-2020 task 12 : Transformer based multilingual offensive language identification in social media. In Proceedings of the 14th Interna- tional Workshop on Semantic Evaluation, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Multilingual offensive language identification with cross-lingual embeddings",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe and Marcos Zampieri. 2020. Multilingual offensive language identification with cross-lingual embeddings. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "BRUMS at HASOC 2019: Deep learning models for multilingual hate speech and offensive language identification",
"authors": [
{
"first": "Tharindu",
"middle": [],
"last": "Ranasinghe",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
},
{
"first": "Hansi",
"middle": [],
"last": "Hettiarachchi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 11th annual meeting of the Forum for Information Retrieval Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tharindu Ranasinghe, Marcos Zampieri, and Hansi Hettiarachchi. 2019. BRUMS at HASOC 2019: Deep learning models for multilingual hate speech and offensive language identification. In Proceed- ings of the 11th annual meeting of the Forum for In- formation Retrieval Evaluation (December 2019).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Twitterstand: News in tweets",
"authors": [
{
"first": "Jagan",
"middle": [],
"last": "Sankaranarayanan",
"suffix": ""
},
{
"first": "Hanan",
"middle": [],
"last": "Samet",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"E"
],
"last": "Teitler",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"D"
],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Sperling",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, GIS '09",
"volume": "",
"issue": "",
"pages": "42--51",
"other_ids": {
"DOI": [
"10.1145/1653771.1653781"
]
},
"num": null,
"urls": [],
"raw_text": "Jagan Sankaranarayanan, Hanan Samet, Benjamin E. Teitler, Michael D. Lieberman, and Jon Sperling. 2009. Twitterstand: News in tweets. In Proceedings of the 17th ACM SIGSPATIAL International Confer- ence on Advances in Geographic Information Sys- tems, GIS '09, page 42-51, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "How to fine-tune bert for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Chinese Computational Linguistics",
"volume": "",
"issue": "",
"pages": "194--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In Chinese Computational Linguistics, pages 194- 206, Cham. Springer International Publishing.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Neural autoregressive distribution estimation",
"authors": [
{
"first": "Benigno",
"middle": [],
"last": "Uria",
"suffix": ""
},
{
"first": "Marc-Alexandre",
"middle": [],
"last": "C\u00f4t\u00e9",
"suffix": ""
},
{
"first": "Karol",
"middle": [],
"last": "Gregor",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
}
],
"year": 2016,
"venue": "J. Mach. Learn. Res",
"volume": "17",
"issue": "1",
"pages": "7184--7220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benigno Uria, Marc-Alexandre C\u00f4t\u00e9, Karol Gregor, Iain Murray, and Hugo Larochelle. 2016. Neural au- toregressive distribution estimation. J. Mach. Learn. Res., 17(1):7184-7220.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improving bert fine-tuning via self-ensemble and self-distillation",
"authors": [
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Ligao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.10345"
]
},
"num": null,
"urls": [],
"raw_text": "Yige Xu, Xipeng Qiu, Ligao Zhou, and Xuanjing Huang. 2020. Improving bert fine-tuning via self-ensemble and self-distillation. arXiv preprint arXiv:2002.10345.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "End-to-end open-domain question answering with BERTserini",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Aileen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xingyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luchen",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "72--77",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4013"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019a. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Asso- ciation for Computational Linguistics (Demonstra- tions), pages 72-77, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Russ",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019b. Xlnet: Generalized autoregressive pretrain- ing for language understanding. In Advances in neural information processing systems, pages 5753- 5763.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Social context summarization",
"authors": [
{
"first": "Zi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Keke",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Juanzi",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '11",
"volume": "",
"issue": "",
"pages": "255--264",
"other_ids": {
"DOI": [
"10.1145/2009916.2009954"
]
},
"num": null,
"urls": [],
"raw_text": "Zi Yang, Keke Cai, Jie Tang, Li Zhang, Zhong Su, and Juanzi Li. 2011. Social context summarization. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Infor- mation Retrieval, SIGIR '11, page 255-264, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Turkish tweet classification with transformer encoder",
"authors": [
{
"first": "Ya\u015far",
"middle": [],
"last": "At\u0131f Emre Y\u00fcksel",
"suffix": ""
},
{
"first": "Arzucan",
"middle": [],
"last": "Alim T\u00fcrkmen",
"suffix": ""
},
{
"first": "Berna",
"middle": [],
"last": "Ozg\u00fcr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Alt\u0131nel",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1380--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "At\u0131f Emre Y\u00fcksel, Ya\u015far Alim T\u00fcrkmen, Arzucan Ozg\u00fcr, and Berna Alt\u0131nel. 2019. Turkish tweet clas- sification with transformer encoder. In Proceed- ings of the International Conference on Recent Ad- vances in Natural Language Processing (RANLP 2019), pages 1380-1387.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Text Classification Architecture",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Evaluation F1 score against the epoch count",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"html": null,
"text": ".",
"content": "<table><tr><td>Data set</td><td colspan=\"2\">Informative Uninformative</td></tr><tr><td>Training</td><td>3303</td><td>3697</td></tr><tr><td colspan=\"2\">Validation 472</td><td>528</td></tr><tr><td>Test</td><td>944</td><td>1056</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF1": {
"html": null,
"text": "",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF3": {
"html": null,
"text": "Results of different transformer models (All these experiments are executed for 3 learning epochs with 1e \u22125 learning rate.)",
"content": "<table><tr><td colspan=\"2\">Learning R.</td><td/><td>1e \u22125</td><td/><td/><td>1e \u22126</td><td/><td/><td>2e \u22125</td><td/></tr><tr><td>S. 1</td><td>S. 2</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>MSE (N=3)</td><td colspan=\"10\">-EI LM 0.8912 0.9195 0.9051 0.8987 0.9025 0.9006 0.9070 0.9301 0.9184 0.9072 0.9322 0.9195 0.9317 0.8962 0.9136 0.9125 0.9280 0.9202 0.8864 0.9258 0.9057 0.9181 0.9025 0.9103 0.8975 0.9089 0.9032</td></tr><tr><td>ASE (N=3)</td><td colspan=\"10\">-EI LM 0.9021 0.9174 0.9097 0.8971 0.9047 0.9008 0.9160 0.9237 0.9198 0.9091 0.9322 0.9205 0.9295 0.8941 0.9114 0.9146 0.9301 0.9223 0.8960 0.9131 0.9045 0.9124 0.9047 0.9085 0.9025 0.9025 0.9025</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF4": {
"html": null,
"text": "Result obtained for CT-BERT model with different fine-tuning strategies (All these experiments are executed for 5 learning epochs and S. abbreviates the Strategy)",
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF6": {
"html": null,
"text": "Results of test data predictions",
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}