{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:53:45.609602Z"
},
"title": "Is Domain Adaptation Worth Your Investment? Comparing BERT and FinBERT on Financial Tasks",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Peng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {
"addrLine": "Yuk Choi Road 11, Hung Hom",
"settlement": "Kowloon, Hong Kong"
}
},
"email": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {
"addrLine": "Yuk Choi Road 11, Hung Hom",
"settlement": "Kowloon, Hong Kong"
}
},
"email": ""
},
{
"first": "Yu-Yin",
"middle": [],
"last": "Hsu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {
"addrLine": "Yuk Choi Road 11, Hung Hom",
"settlement": "Kowloon, Hong Kong"
}
},
"email": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {
"addrLine": "Yuk Choi Road 11, Hung Hom",
"settlement": "Kowloon, Hong Kong"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "With the recent rise in popularity of Transformer models in Natural Language Processing, research efforts have been dedicated to the development of domain-adapted versions of BERT-like architectures. In this study, we focus on FinBERT, a Transformer model trained on text from the financial domain. By comparing its performances with the original BERT on a wide variety of financial text processing tasks, we found continual pretraining from the original model to be the more beneficial option. Domain-specific pretraining from scratch, conversely, seems to be less effective.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "With the recent rise in popularity of Transformer models in Natural Language Processing, research efforts have been dedicated to the development of domain-adapted versions of BERT-like architectures. In this study, we focus on FinBERT, a Transformer model trained on text from the financial domain. By comparing its performances with the original BERT on a wide variety of financial text processing tasks, we found continual pretraining from the original model to be the more beneficial option. Domain-specific pretraining from scratch, conversely, seems to be less effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Transformer architectures have taken the field of Natural Language Processing (NLP) by storm, leading to remarkable performance leaps in several tasks (Vaswani et al., 2017; Devlin et al., 2019) .",
"cite_spans": [
{
"start": 155,
"end": 177,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF34"
},
{
"start": 178,
"end": 198,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The first-generation Transformers were mainly trained on general corpora, such as Wikipedia or Common Crawl. However, considering domain adaptations, many researchers have later injected domain-specific knowledge in such architectures, leading to the publication of Transformers trained on different types of in-domain text, e.g., scientific articles (Beltagy et al., 2019) , biomedical text Gu et al., 2020) , clinical notes (Alsentzer et al., 2019) , and patent corpora (Lee and Hsiang, 2020) .",
"cite_spans": [
{
"start": 351,
"end": 373,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 392,
"end": 408,
"text": "Gu et al., 2020)",
"ref_id": null
},
{
"start": 426,
"end": 450,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 472,
"end": 494,
"text": "(Lee and Hsiang, 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since language technologies have seen increasingly frequent use in accounting and finance (Loughran and McDonald, 2016) , it is not surprising that several attempts have been made to adapt Transformers to the financial domain (Araci, 2019; Yang et al., 2020; Liu et al., 2020) .",
"cite_spans": [
{
"start": 90,
"end": 119,
"text": "(Loughran and McDonald, 2016)",
"ref_id": "BIBREF24"
},
{
"start": 226,
"end": 239,
"text": "(Araci, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 240,
"end": 258,
"text": "Yang et al., 2020;",
"ref_id": "BIBREF38"
},
{
"start": 259,
"end": 276,
"text": "Liu et al., 2020)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we test the FinBERT model by Yang et al. (2020) on a variety of tasks in the field of financial NLP, including sentiment analysis, causality detection, numeral understanding, and numeral attachment, and we study the impact of different types of pretraining on the system performance. We obtained the best results with a Fin-BERT model with pretraining continuing from the original BERT and with the same general-domain vocabulary, while a model trained anew on financial corpora and with a domain-adapted vocabulary performed similarly to BERT Base.",
"cite_spans": [
{
"start": 44,
"end": 62,
"text": "Yang et al. (2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although financial NLP is a relatively recent field, it already has an active research community, which has regularly introduced new shared tasks and benchmarks in recent years, e.g., sentence boundary detection in financial documents (Azzi et al., 2019; Wan et al., 2019; Au et al., 2021) , hypernymy detection , document causality detection (Mariko et al., 2020) , document structure extraction (Juge et al., 2019; Bentabet et al., 2020) , and document summarization (Zheng et al., 2020) . Given the success of Transformer models in general-domain NLP, it is not surprising that they are also a popular choice for many systems competing in financial tasks (Chen et al., 2020) .",
"cite_spans": [
{
"start": 235,
"end": 254,
"text": "(Azzi et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 255,
"end": 272,
"text": "Wan et al., 2019;",
"ref_id": "BIBREF35"
},
{
"start": 273,
"end": 289,
"text": "Au et al., 2021)",
"ref_id": "BIBREF2"
},
{
"start": 343,
"end": 364,
"text": "(Mariko et al., 2020)",
"ref_id": null
},
{
"start": 397,
"end": 416,
"text": "(Juge et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 417,
"end": 439,
"text": "Bentabet et al., 2020)",
"ref_id": "BIBREF5"
},
{
"start": 469,
"end": 489,
"text": "(Zheng et al., 2020)",
"ref_id": "BIBREF40"
},
{
"start": 658,
"end": 677,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To adapt the original BERT to sentiment analysis in the financial domain, Araci (2019) was the first to propose a FinBERT model by further pretraining BERT Base on the financial subset of the Reuters TRC2 corpus. The evaluation, carried out on the Financial Phrase Bank (Malo et al., 2014) and the FiQA sentiment scoring dataset (Maia et al., 2018) , demonstrated that FinBERT largely outperformed all the LSTM-based baselines and was slightly better than the original model. The second FinBERT model, introduced by Yang et al. (2020), followed two different training strategies. The first version (FinBERT Base-Vocab) was further pretrained from a BERT Base checkpoint on three financial corpora (i.e., the Corporate Reports 10-K & 10-Q from the Securities Exchange Commission, 1 the Earnings Call Transcripts from the Seeking Alpha website, 2 and the Analyst Reports from the Investext database), and the second (FinBERT FinVocab) was trained afresh on the same three corpora but with new vocabulary specific to the financial domain, not inheriting it from the original BERT. They evaluated the models on the same sentiment analysis datasets, in conjunction with the opinion mining data from Huang et al. 2014, and reported improved performance over BERT Base, especially when using the FinBERT model with the domain-adapted vocabulary.",
"cite_spans": [
{
"start": 270,
"end": 289,
"text": "(Malo et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 329,
"end": 348,
"text": "(Maia et al., 2018)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
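The vocabulary difference between the two strategies is easy to make concrete. Below is a minimal sketch, not the authors' code, using the HuggingFace transformers library: it only inspects how BERT Base's general-domain WordPiece vocabulary fragments financial terminology, which is the motivation for building a domain-specific vocabulary as in FinBERT FinVocab.

```python
# A minimal sketch (not the authors' code) of why a domain-adapted vocabulary
# can matter: BERT Base's general-domain WordPiece vocabulary tends to split
# financial jargon into several '##' subword pieces, while a vocabulary built
# on financial corpora (as in FinBERT FinVocab) can keep such terms whole.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

for term in ["amortization", "ebitda", "collateralized debt obligation"]:
    # Financial terms typically come out as multiple WordPiece tokens here.
    print(f"{term!r} -> {tokenizer.tokenize(term)}")
```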
{
"text": "In this study, we chose to test the FinBERT system used in Yang et al. (2020) , which has two publicly available versions, in order to directly compare the impact of the two different domain adaptation strategies and to evaluate them on more semantic tasks. The previous studies (Araci, 2019; Yang et al., 2020) focused their evaluation exclusively on sentiment analysis. However, sentiment analysis is a general task that is not necessarily ideal for observing the advantages of domain adaptation because the expressions of sentiment might not reflect the in-domain language. For example, in the biomedical domain, several tasks have recently been shown to benefit from training from scratch on an in-domain text and from a domainspecific vocabulary (Gu et al., 2020; Portelli et al., 2021) . Therefore, besides sentiment analysis, we decided to evaluate our models on three semantic tasks that are more specific to the financial domain: document causality detection (Mariko et al., 2020) , numeral understanding (Chen et al., 2019b) , and numeral attachment (Chen et al., 2020) .",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "Yang et al. (2020)",
"ref_id": "BIBREF38"
},
{
"start": 279,
"end": 292,
"text": "(Araci, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 293,
"end": 311,
"text": "Yang et al., 2020)",
"ref_id": "BIBREF38"
},
{
"start": 751,
"end": 768,
"text": "(Gu et al., 2020;",
"ref_id": null
},
{
"start": 769,
"end": 791,
"text": "Portelli et al., 2021)",
"ref_id": "BIBREF33"
},
{
"start": 968,
"end": 989,
"text": "(Mariko et al., 2020)",
"ref_id": null
},
{
"start": 1014,
"end": 1034,
"text": "(Chen et al., 2019b)",
"ref_id": "BIBREF8"
},
{
"start": 1060,
"end": 1079,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The following section describes all the tasks related to the study, and the datasets to be evaluated. Descriptive statistics for the latter are provided in Table 1 . More details about the class distributions are in Appendix A.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Tasks and Datasets",
"sec_num": "3.1"
},
{
"text": "Sentiment Analysis stands out as one of the most popular tasks in NLP. To compare our models in the financial domain, we selected three different datasets. The Financial PhraseBank (Malo et al., 2014 ) is a standard dataset for sentiment classification composed of 4,840 sentences selected from financial news and annotated for Positive, Negative, and Neutral sentiment by 16 different annotators with experience in the financial domain. The dataset comes with the original annotations: for our study, we evaluated on a subset of 2,264 instances with at least 75% of annotator agreement.",
"cite_spans": [
{
"start": 181,
"end": 199,
"text": "(Malo et al., 2014",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "3.1.1"
},
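As a concrete illustration of this agreement filter, here is a short sketch; the per-sentence lists of annotator labels are an assumed layout for illustration, not the released file format of the Financial PhraseBank.

```python
# A sketch of the 75% agreement filter described above, under an assumed data
# layout (one list of annotator labels per sentence).
from collections import Counter

def agreement(labels):
    """Return the majority label and the fraction of annotators who chose it."""
    label, count = Counter(labels).most_common(1)[0]
    return label, count / len(labels)

annotated = [
    ("Operating profit rose to EUR 13.1 mn.", ["positive"] * 15 + ["neutral"]),
    ("The company operates in Finland and Sweden.", ["neutral"] * 9 + ["positive"] * 7),
]

subset = [(sent, agreement(labs)[0]) for sent, labs in annotated
          if agreement(labs)[1] >= 0.75]
print(subset)  # only the first sentence reaches the 75% threshold
```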
{
"text": "We also used the FinTextSen dataset from Se-mEval 2017 Task 5 that dedicates itself to sentiment analysis on financial microblogs (Cortis et al., 2017) . The dataset consists of 2,488 microblog messages retrieved from Twitter and StockTwits in March 2016. Each instance contains the following information: the message, a cashtag, and a sentiment score. The latter was originally a continuous score, but we used the dataset version by Daudert et al. (2018) , who clustered the scores to obtain a 3class annotation (Positive, Negative, and Neutral), to maintain consistency with the other sets.",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "(Cortis et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 434,
"end": 455,
"text": "Daudert et al. (2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "3.1.1"
},
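For illustration, below is one way to collapse a continuous sentiment score into the 3-class scheme. Note that Daudert et al. (2018) derived the classes by clustering the scores, so the fixed threshold in this sketch is an assumption made only for the example.

```python
# An illustrative sketch of mapping a continuous sentiment score in [-1, 1]
# onto the 3-class scheme (Positive, Negative, Neutral). The neutral band of
# +/- 0.05 is an assumption, not the clustering used by Daudert et al. (2018).
def to_three_classes(score, neutral_band=0.05):
    if score > neutral_band:
        return "Positive"
    if score < -neutral_band:
        return "Negative"
    return "Neutral"

print([to_three_classes(s) for s in (0.64, -0.31, 0.0)])
# ['Positive', 'Negative', 'Neutral']
```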
{
"text": "Finally, the StockSen dataset (Xing et al., 2020 ) is composed of 20,675 financial tweets extracted from the StockTwits platform between June and August 2019, all of which were annotated with either Positive or Negative sentiments.",
"cite_spans": [
{
"start": 30,
"end": 48,
"text": "(Xing et al., 2020",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentiment Analysis",
"sec_num": "3.1.1"
},
{
"text": "For Document Causality Detection, we used the dataset of the FinCausal shared task 2020 (Mariko et al., 2020) . The dataset is made of texts extracted from a 2019 corpus of financial news provided by Qwan, with each instance annotated with binary labels to indicate whether it described a causal relation. For example, in (1), the italicized part was annotated as the cause for the fall of the GDP.",
"cite_spans": [
{
"start": 88,
"end": 109,
"text": "(Mariko et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Financial Document Causality Detection",
"sec_num": "3.1.2"
},
{
"text": "(1) Things got worse when the Wall came down. GDP fell 20% between 1988 and 1993.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Financial Document Causality Detection",
"sec_num": "3.1.2"
},
{
"text": "We refer to the dataset for subtask 1, which is a simple binary classification task (class 1 if the text includes a causal relation and 0 otherwise).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Financial Document Causality Detection",
"sec_num": "3.1.2"
},
{
"text": "Understanding numerals is of key importance for the automatic processing of financial documents. In coincidence with the FinNum shared task, Chen et al. 2019b (Malo et al., 2014) 2,264 \\ \\ 3 81 FinTextSen (Daudert et al., 2018) 2,488 \\ \\ 3 476 StockSen (Xing et al., 2020) 14,457 6,218 \\ 2 370 Causality Detection (Mariko et al., 2020) 13,478 \\ 8,580 2 1,460 FinNum-1 subtask 1/2 (Chen et al., 2019b) 4,072 457 786 7/17 48 FinNum-2 (Chen et al., 2019a) 7,187 2,109 1,044 2 120 fine-grained classes, which are sub-classes of the same categories. The labels have been identified based on the taxonomy by Chen et al. (2018) , and the annotation was carried out by two domain experts. The dataset only includes examples on which the annotators reached an agreement. Examples (2a) and (2b) illustrate, respectively, the Monetary and the Product/Version category (the numeral expression to be classified is in bold).",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(Malo et al., 2014)",
"ref_id": "BIBREF26"
},
{
"start": 205,
"end": 227,
"text": "(Daudert et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 253,
"end": 272,
"text": "(Xing et al., 2020)",
"ref_id": "BIBREF37"
},
{
"start": 314,
"end": 335,
"text": "(Mariko et al., 2020)",
"ref_id": null
},
{
"start": 380,
"end": 400,
"text": "(Chen et al., 2019b)",
"ref_id": "BIBREF8"
},
{
"start": 432,
"end": 452,
"text": "(Chen et al., 2019a)",
"ref_id": "BIBREF6"
},
{
"start": 602,
"end": 620,
"text": "Chen et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Numeral Understanding",
"sec_num": "3.1.3"
},
{
"text": "(2) a. $FB (110.20) is starting to show some relative strength and signs of potential B/O on the daily. b. iPhone 6 may not be as secure as Apple thought.. $AAPL",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Numeral Understanding",
"sec_num": "3.1.3"
},
{
"text": "We address both the subtasks of FinNum (e.g., the 7-class and the 17-class classification tasks); that is, the tweets containing n financial numbers and the corresponding category labels will be copied n times. The details of the reconstructed data are also illustrated in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 280,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Numeral Understanding",
"sec_num": "3.1.3"
},
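The copying scheme can be made concrete with a short sketch; the field names below are assumptions made for illustration, not the official FinNum release format.

```python
# A sketch of the data reconstruction described above: a tweet with n target
# numerals becomes n training instances, one per (numeral, category) pair.
def expand(tweet):
    return [
        {"text": tweet["text"], "target_num": num, "label": cat}
        for num, cat in zip(tweet["target_nums"], tweet["categories"])
    ]

tweet = {
    "text": "$FB (110.20) may test 115 next week",
    "target_nums": ["110.20", "115"],
    "categories": ["Monetary", "Monetary"],
}
for instance in expand(tweet):
    print(instance)  # the tweet is copied once per target numeral
```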
{
"text": "The numeral attachment task was introduced during the FinNum-2 competition (Chen et al., 2019a) . The authors built a dataset of financial microblogs extracted from StockTwits, in which, given a target cashtag and a target numeral, a system predicts whether the numeral is attached to the cashtag. For example, in (3), the second numeral in the sentence is attached to the $NE cashtag, while the first one is not.",
"cite_spans": [
{
"start": 75,
"end": 95,
"text": "(Chen et al., 2019a)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Numeral Attachment",
"sec_num": "3.1.4"
},
{
"text": "(3) $NE, last time oil was over $65 you were close to $8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Numeral Attachment",
"sec_num": "3.1.4"
},
{
"text": "Therefore, for each instance, the system must perform a binary classification task (i.e., 1 if the numeral is attached to the cashtag, and 0 otherwise).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Numeral Attachment",
"sec_num": "3.1.4"
},
{
"text": "In this study, two baseline models were used. One is the BERT Base (Devlin et al., 2019) , which consists of a series of stacked Transformer encoders. It was trained using both a masked language modeling objective and a next sentence prediction objective on a concatenation of the Books Corpus (Zhu et al., 2015) and the English version of Wikipedia. The other one is a traditional Support Vector Machine (SVM) baseline (Noble, 2006) , where the input representation is the element-wise addition of the word vectors of each word in the sentence. We used the publicly available FastText vectors by Grave et al. (2018) . As for the FinBERT models, we used FinBERT BaseVocab (FV w/ BV) and FinBERT FinVocab (FB w/ FV) (Yang et al., 2020) . The former was initialized from the original BERT Base (i.e., it also uses the same general-domain vocabulary) and then further pretrained on financial corpora, and the latter was trained afresh on financial corpora for 1M iterations and uses a domain-specific financial vocabulary.",
"cite_spans": [
{
"start": 67,
"end": 88,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 294,
"end": 312,
"text": "(Zhu et al., 2015)",
"ref_id": "BIBREF41"
},
{
"start": 420,
"end": 433,
"text": "(Noble, 2006)",
"ref_id": "BIBREF32"
},
{
"start": 597,
"end": 616,
"text": "Grave et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 715,
"end": 734,
"text": "(Yang et al., 2020)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
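A minimal sketch of the SVM baseline described above, assuming the Grave et al. (2018) FastText vectors are available locally in word2vec text format and that whitespace tokenization suffices for the example.

```python
# A sketch of the SVM baseline: each sentence is represented as the
# element-wise sum of the FastText vectors of its words.
import numpy as np
from gensim.models import KeyedVectors
from sklearn.svm import SVC

# Assumes the Grave et al. (2018) English vectors were downloaded beforehand.
vectors = KeyedVectors.load_word2vec_format("cc.en.300.vec")

def sentence_vector(sentence, dim=300):
    words = [w for w in sentence.lower().split() if w in vectors]
    return np.sum([vectors[w] for w in words], axis=0) if words else np.zeros(dim)

train_texts = ["profit rose sharply", "the company reported heavy losses"]
train_labels = ["positive", "negative"]

clf = SVC(kernel="linear")
clf.fit(np.stack([sentence_vector(t) for t in train_texts]), train_labels)
print(clf.predict(sentence_vector("profit increased again").reshape(1, -1)))
```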
{
"text": "Following the methodology by Devlin et al. (2019) , all models used a linear layer with sof tmax as a classification layer and the crossentropy loss as a loss function. The texts were directly fed to the models after some simple preprocessing steps. For all models, we replaced the URLs with the special token [URL] . For the Numeral Understanding task, the texts and the target numbers were concatenated with the special token [SEP] after the tokenization. Finally, in the Numeral Attachment task, we followed Moreno et al. (2020) by adding the special tokens \u00a3 and \u00a7 to the beginning and the end of the $cashtag, and the target number, respectively.",
"cite_spans": [
{
"start": 29,
"end": 49,
"text": "Devlin et al. (2019)",
"ref_id": "BIBREF14"
},
{
"start": 310,
"end": 315,
"text": "[URL]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
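The preprocessing can be sketched as follows; the URL regex and the exact placement of the £ and § markers are assumptions where the description leaves details open, and the attachment input reflects one reading of the Moreno et al. (2020) scheme.

```python
# A sketch of the preprocessing steps described above.
import re

def replace_urls(text):
    return re.sub(r"https?://\S+", "[URL]", text)

def numeral_understanding_input(text, target_number):
    # Text and target number are joined with the [SEP] special token.
    return f"{replace_urls(text)} [SEP] {target_number}"

def numeral_attachment_input(text, cashtag, number):
    # Wrap the target cashtag with £ ... £ and the target numeral with § ... §.
    text = replace_urls(text).replace(cashtag, f"£{cashtag}£", 1)
    return text.replace(number, f"§{number}§", 1)

print(numeral_attachment_input(
    "$NE, last time oil was over $65 you were close to $8.", "$NE", "$8"))
# £$NE£, last time oil was over $65 you were close to §$8§.
```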
{
"text": "All the models have been evaluated in terms of Macro F1-score and Micro F1-score. In this study, Table 3 : Performance gaps for each dataset and metric. In the last two lines, we also report the aggregate performance for the group of sentiment analysis datasets (Financial Phrase Bank, FinTextSen and StockSen) and for the numeral understanding ones (FinNum-1 subtask 1 and 2).",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.3"
},
{
"text": "the latter is equivalent to the traditional Accuracy metric, due to treating each task as a multi-class classification task. For the datasets without an official train-test split (e.g., FinTextSen and Financial Phrase Bank), we ran a 10-fold cross-validation and reported the average score. However, due to the instability of BERT fine-tuning on small datasets , even the results of multiple runs on the same split may heavily fluctuate. Therefore, we reported the average scores after 10 runs, even for the datasets with an official train-test split.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "3.3"
},
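The evaluation protocol amounts to the following sketch: micro- and macro-averaged F1 computed per run (or per cross-validation fold), then averaged. The toy labels below are invented for illustration.

```python
# A sketch of the evaluation protocol. With single-label multi-class data,
# micro-F1 equals accuracy, as noted above.
import numpy as np
from sklearn.metrics import f1_score

def aggregate(gold_runs, pred_runs, average):
    scores = [f1_score(g, p, average=average)
              for g, p in zip(gold_runs, pred_runs)]
    return np.mean(scores), np.std(scores)

gold = [["pos", "neg", "neu", "pos"], ["neg", "neg", "pos", "neu"]]
pred = [["pos", "neg", "pos", "pos"], ["neg", "pos", "pos", "neu"]]

for avg in ("micro", "macro"):
    mean, std = aggregate(gold, pred, avg)
    print(f"{avg}-F1: {mean:.3f} +/- {std:.3f}")
```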
{
"text": "The full results are shown in Table 2 . Firstly, we observe that all the pretrained BERT models outperformed the SVM baseline in all the financial datasets. Secondly, many models reported large standard deviations on some of the datasets, especially the sentiment analysis ones. It can be observed that FinBERT BaseVocab reports the best performance in almost all the datasets, generally outperforming BERT Base. Excluding the Fin-TextSen dataset, in which BERT Base is the topscoring model, FinBERT BaseVocab achieves an average increase of 0.85 of Macro F1-score on the other benchmarks. On the other hand, Fin-BERT FinVocab performed similarly to BERT Base, sometimes showing small improvements and sometimes lagging behind the original model. It achieved the top score only in the numeral attachment task and in causality detection, the latter only for the Micro-F1. 3 Moreover, the performance increase for FinBERT BaseVocab was more noticeable on the datasets on numerals, while the performances of FinBERT FinVocab were more irregular, performing slightly better than BERT Base and the BaseVocab model on FinNum2 (numeral attachment), but lagging behind both on FinNum subtask 1 (numeral understanding). Table 3 summarizes the performance comparison between the Transformer models, where it can be seen that FinBert BaseVocab typically improves over the other models for both metrics (the Fin-TextSen dataset being the only exception). However, it should also be noticed that the differences between models are sometimes small compared to the standard deviations in Table 2 , which invites to be cautious in drawing firm conclusions.",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1211,
"end": 1218,
"text": "Table 3",
"ref_id": null
},
{
"start": 1573,
"end": 1580,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "We ran a qualitative error analysis of the instances that were misclassified by our models for the tasks of sentiment analysis, numeral attachment, and ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.1"
},
{
"text": "The existing $500 -$600 billion of public support for agriculture must be redirected to more inclusive, resilient and low carbon production and innovative technologies and finance to enhance the resilience of small-scale producers. causality detection. Table 4 displays some of the examples that we extracted. For Sentiment Analysis, we extracted some misclassified examples from StockSen and noticed that the polarity of some tweets is mistaken by the classifiers because of irony, such as the final exclamation awesome on the first row in Table 4 . In some other cases, like the one on the second row, the words associated with a negative polarity (e.g., risk, crashing) might be misleading the systems, while the tweet is actually positive.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 260,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 541,
"end": 548,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Causality Detection 0 All",
"sec_num": null
},
{
"text": "In the numeral attachment task, where the target cashtag is in bold, and the target numeral in italics, the models seem to experience problems in assigning the correct interpretations to numerals, especially when they appear in temporal adjuncts (e.g., the examples on the third and the fourth rows).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Causality Detection 0 BertBase",
"sec_num": null
},
{
"text": "The error sources seem to be more varied and more difficult to identify in the causality detection task. However, we encountered a few cases like the examples on the fifth and the sixth rows, where a to-infinitive construction is used for expressing goals. Given the semantic similarity between cause and goal, it seems plausible that the construction has confused the classifiers, leading them to erroneously assign the instances to the positive class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Causality Detection 0 BertBase",
"sec_num": null
},
{
"text": "In this paper, we compared the original BERT model with the financially adapted models by Yang et al. (2020) . Domain adaptation was generally confirmed to be beneficial and, unlike what has been recently observed in the biomedical domain (Gu et al., 2020; Portelli et al., 2021) , the model benefiting from continuous pretraining from BERT Base showed more consistent improvements across tasks and datasets. This suggests that the models take advantage from exposure to financial text, but the tasks do not necessarily require a specialized vocabulary. On the negative side, fluctations in the results confirmed that there is some degree of instability in the fine-tuning of BERT-like models on relatively small datasets .",
"cite_spans": [
{
"start": 90,
"end": 108,
"text": "Yang et al. (2020)",
"ref_id": "BIBREF38"
},
{
"start": 239,
"end": 256,
"text": "(Gu et al., 2020;",
"ref_id": null
},
{
"start": 257,
"end": 279,
"text": "Portelli et al., 2021)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "In our future work, we plan to investigate also the contextualized embeddings produced by the domain-adapted Transformers. Word embeddings have been used in tasks with important applications in the financial domain, such as the identification of semantic relations (Chersoni et al., 2016; Xiang et al., 2020) , which is useful for building domain ontologies Chersoni and Huang, 2021) , and the unsupervised detection of semantic changes in diachronic data, e.g., annual reports of traded companies (Giulianelli et al., 2020; Masson and Montariol, 2021) . In this perspective, a promising research direction would be to analyze how different domain adaptation strategies affect the quality of the embedding representations. ",
"cite_spans": [
{
"start": 265,
"end": 288,
"text": "(Chersoni et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 289,
"end": 308,
"text": "Xiang et al., 2020)",
"ref_id": "BIBREF36"
},
{
"start": 358,
"end": 383,
"text": "Chersoni and Huang, 2021)",
"ref_id": "BIBREF10"
},
{
"start": 498,
"end": 524,
"text": "(Giulianelli et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 525,
"end": 552,
"text": "Masson and Montariol, 2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "https://www.sec.gov/edgar.shtml. 2 https://seekingalpha.com/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It should be pointed out that in the Causality data the class distribution is very unbalanced, with almost 93% of negative instances (see Appendix A), and thus Macro-F1 is a more reliable score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Tobias Daudert, Frank Xing, and Chung-Chi Chen for sharing their datasets with us, and the four anonymous reviewers for their insightful feedback. This research was made possible by the University Postdoc Matching Fund (W16H) and Project of Strategic Importance (ZE2J) at the Hong Kong Polytechnic University.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Figure 1 shows the pie charts illustrating the distribution of classes for all the benchmark datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 9,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly Available Clinical BERT Embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [
"R"
],
"last": "Murphy",
"suffix": ""
},
{
"first": "Willie",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "McDermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the NAACL Workshop on Clinical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John R Murphy, Willie Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly Available Clinical BERT Embeddings. In Proceedings of the NAACL Workshop on Clinical Natural Language Processing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "FinBERT: Financial Sentiment Analysis with Pre-trained Language Models",
"authors": [
{
"first": "Dogu",
"middle": [],
"last": "Araci",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10063"
]
},
"num": null,
"urls": [],
"raw_text": "Dogu Araci. 2019. FinBERT: Financial Sentiment Analysis with Pre-trained Language Models. arXiv preprint arXiv:1908.10063.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "FinSBD-2021: The 3rd Shared Task on Structure Boundary Detection in Unstructured Text in the Financial Domain",
"authors": [
{
"first": "Willy",
"middle": [],
"last": "Au",
"suffix": ""
},
{
"first": "Abderrahim",
"middle": [],
"last": "Ait-Azzi",
"suffix": ""
},
{
"first": "Juyeon",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2021,
"venue": "Companion Proceedings of the Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Willy Au, Abderrahim Ait-Azzi, and Juyeon Kang. 2021. FinSBD-2021: The 3rd Shared Task on Struc- ture Boundary Detection in Unstructured Text in the Financial Domain. In Companion Proceedings of the Web Conference.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The FinSBD-2019 Shared Task: Sentence Boundary Detection in Pdf Noisy Text in the Financial Domain",
"authors": [
{
"first": "Abderrahim",
"middle": [
"Ait"
],
"last": "Azzi",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Bouamor",
"suffix": ""
},
{
"first": "Sira",
"middle": [],
"last": "Ferradans",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IJCAI Workshop on Financial Technology and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abderrahim Ait Azzi, Houda Bouamor, and Sira Fer- radans. 2019. The FinSBD-2019 Shared Task: Sen- tence Boundary Detection in Pdf Noisy Text in the Financial Domain. In Proceedings of the IJCAI Workshop on Financial Technology and Natural Lan- guage Processing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SciB-ERT: A Pretrained Language Model for Scientific Text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.10676"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A Pretrained Language Model for Scientific Text. arXiv preprint arXiv:1903.10676.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Financial Document Structure Extraction Shared Task (FinToc 2020)",
"authors": [
{
"first": "Najah-Imane",
"middle": [],
"last": "Bentabet",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Juge",
"suffix": ""
},
{
"first": "Ismail",
"middle": [
"El"
],
"last": "Maarouf",
"suffix": ""
},
{
"first": "Virginie",
"middle": [],
"last": "Mouilleron",
"suffix": ""
},
{
"first": "Dialekti",
"middle": [],
"last": "Valsamou-Stanislawski",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "El-Haj",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Najah-Imane Bentabet, R\u00e9mi Juge, Ismail El Maarouf, Virginie Mouilleron, Dialekti Valsamou- Stanislawski, and Mahmoud El-Haj. 2020. The Financial Document Structure Extraction Shared Task (FinToc 2020). In Proceedings of the Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Numeral Attachment with Auxiliary Tasks",
"authors": [
{
"first": "Chung-Chi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "1161--1164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung-Chi Chen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2019a. Numeral Attachment with Auxiliary Tasks. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 1161-1164.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Numeral Understanding in Financial Tweets for Fine-grained Crowd-based Forecasting",
"authors": [
{
"first": "Chung-Chi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yow-Ting",
"middle": [],
"last": "Shiue",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE/WIC/ACM International Conference on Web Intelligence (WI)",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung-Chi Chen, Hen-Hsen Huang, Yow-Ting Shiue, and Hsin-Hsi Chen. 2018. Numeral Understanding in Financial Tweets for Fine-grained Crowd-based Forecasting. In IEEE/WIC/ACM International Con- ference on Web Intelligence (WI), pages 136-143. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Overview of the NTCIR-14 FinNum Task: Fine-grained Numeral Understanding in Financial Social Media Data",
"authors": [
{
"first": "Chung-Chi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 14th NTCIR Conference on Evaluation of Information Access Technologies",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung-Chi Chen, Hen-Hsen Huang, Hiroya Takamura, and Hsin-Hsi Chen. 2019b. Overview of the NTCIR- 14 FinNum Task: Fine-grained Numeral Under- standing in Financial Social Media Data. In Pro- ceedings of the 14th NTCIR Conference on Evalu- ation of Information Access Technologies, pages 19- 27.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of the NTCIR-15 FinNum-2 Task: Numeral Attachment in Financial Tweets",
"authors": [
{
"first": "Chung-Chi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hiroya",
"middle": [],
"last": "Takamura",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Development",
"volume": "850",
"issue": "194",
"pages": "1--044",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung-Chi Chen, Hen-Hsen Huang, Hiroya Takamura, and Hsin-Hsi Chen. 2020. Overview of the NTCIR- 15 FinNum-2 Task: Numeral Attachment in Finan- cial Tweets. Development, 850(194):1-044.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "PolyU-CBS at the FinSim-2 Task: Combining Distributional, String-Based and Transformers-Based Features for Hypernymy Detection in the Financial Domain",
"authors": [
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2021,
"venue": "Companion Proceedings of the Web Conference",
"volume": "",
"issue": "",
"pages": "316--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emmanuele Chersoni and Chu-Ren Huang. 2021. PolyU-CBS at the FinSim-2 Task: Combining Dis- tributional, String-Based and Transformers-Based Features for Hypernymy Detection in the Financial Domain. In Companion Proceedings of the Web Conference, pages 316-319.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "CogALex-V Shared Task: ROOT18",
"authors": [
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Giulia",
"middle": [],
"last": "Rambelli",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the COLING Workshop on Cognitive Aspects of the Lexicon",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emmanuele Chersoni, Giulia Rambelli, and Enrico Santus. 2016. CogALex-V Shared Task: ROOT18. In Proceedings of the COLING Workshop on Cogni- tive Aspects of the Lexicon.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semeval-2017 Task 5: Fine-grained Sentiment Analysis on Financial Microblogs and News",
"authors": [
{
"first": "Keith",
"middle": [],
"last": "Cortis",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Daudert",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Huerlimann",
"suffix": ""
},
{
"first": "Manel",
"middle": [],
"last": "Zarrouk",
"suffix": ""
},
{
"first": "Siegfried",
"middle": [],
"last": "Handschuh",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keith Cortis, Andr\u00e9 Freitas, Tobias Daudert, Manuela Huerlimann, Manel Zarrouk, Siegfried Handschuh, and Brian Davis. 2017. Semeval-2017 Task 5: Fine-grained Sentiment Analysis on Financial Mi- croblogs and News. In Proceedings of SemEval.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Leveraging News Sentiment to Improve Microblog Sentiment Classification in the Financial Domain",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Daudert",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Buitelaar",
"suffix": ""
},
{
"first": "Sapna",
"middle": [],
"last": "Negi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the EMNLP Workshop on Economics and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Daudert, Paul Buitelaar, and Sapna Negi. 2018. Leveraging News Sentiment to Improve Microblog Sentiment Classification in the Financial Domain. In Proceedings of the EMNLP Workshop on Eco- nomics and Natural Language Processing.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of NAACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The FinSim 2020 Shared Task: Learning Semantic Representations for the Financial Domain",
"authors": [
{
"first": "Ismail",
"middle": [
"El"
],
"last": "Maarouf",
"suffix": ""
},
{
"first": "Youness",
"middle": [],
"last": "Mansar",
"suffix": ""
},
{
"first": "Virginie",
"middle": [],
"last": "Mouilleron",
"suffix": ""
},
{
"first": "Dialekti",
"middle": [],
"last": "Valsamou-Stanislawski",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the IJCAI Workshop on Financial Technology and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ismail El Maarouf, Youness Mansar, Virginie Mouilleron, and Dialekti Valsamou-Stanislawski. 2021. The FinSim 2020 Shared Task: Learning Semantic Representations for the Financial Domain. In Proceedings of the IJCAI Workshop on Financial Technology and Natural Language Processing.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Analysing Lexical Semantic Change with Contextualised Word Representations",
"authors": [
{
"first": "Mario",
"middle": [],
"last": "Giulianelli",
"suffix": ""
},
{
"first": "Marco",
"middle": [
"Del"
],
"last": "Tredici",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mario Giulianelli, Marco Del Tredici, and Raquel Fer- n\u00e1ndez. 2020. Analysing Lexical Semantic Change with Contextualised Word Representations. In Pro- ceedings of ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning Word Vectors for 157 Languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning Word Vectors for 157 Languages. In Proceedings of LREC.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific Language Model Pretraining for Biomedical Natural Language Processing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tinn",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Usuyama",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.15779"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain- specific Language Model Pretraining for Biomedi- cal Natural Language Processing. arXiv preprint arXiv:2007.15779.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Evidence on the Information Content of Text in Analyst Reports",
"authors": [
{
"first": "Allen",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Amy",
"middle": [
"Y"
],
"last": "Zang",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2014,
"venue": "The Accounting Review",
"volume": "89",
"issue": "6",
"pages": "2151--2180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen H Huang, Amy Y Zang, and Rong Zheng. 2014. Evidence on the Information Content of Text in An- alyst Reports. The Accounting Review, 89(6):2151- 2180.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The FinToc-2019 Shared Task: Financial Document Structure Extraction",
"authors": [
{
"first": "R\u00e9mi",
"middle": [],
"last": "Juge",
"suffix": ""
},
{
"first": "Imane",
"middle": [],
"last": "Bentabet",
"suffix": ""
},
{
"first": "Sira",
"middle": [],
"last": "Ferradans",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Financial Narrative Processing Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R\u00e9mi Juge, Imane Bentabet, and Sira Ferradans. 2019. The FinToc-2019 Shared Task: Financial Document Structure Extraction. In Proceedings of the Finan- cial Narrative Processing Workshop.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Patent Classification by Fine-tuning BERT Language Model",
"authors": [
{
"first": "Jieh-Sheng",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Jieh",
"middle": [],
"last": "Hsiang",
"suffix": ""
}
],
"year": 2020,
"venue": "World Patent Information",
"volume": "61",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieh-Sheng Lee and Jieh Hsiang. 2020. Patent Clas- sification by Fine-tuning BERT Language Model. World Patent Information, 61:101965.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BioBERT: A Pre-trained Biomedical Language Representation Model for Biomedical Text Mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: A Pre-trained Biomedical Language Representation Model for Biomedical Text Mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "FinBERT: A Pre-trained Financial Language Representation Model for Financial Text Mining",
"authors": [
{
"first": "Zhuang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Degen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Kaiyu",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhuang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuang Liu, Degen Huang, Kaiyu Huang, Zhuang Li, and Jun Zhao. 2020. FinBERT: A Pre-trained Finan- cial Language Representation Model for Financial Text Mining. In Proceedings of IJCAI.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Textual Analysis in Accounting and Finance: A Survey",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Loughran",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Accounting Research",
"volume": "54",
"issue": "4",
"pages": "1187--1230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Loughran and Bill McDonald. 2016. Textual Anal- ysis in Accounting and Finance: A Survey. Journal of Accounting Research, 54(4):1187-1230.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "WWW'18 Open Challenge: Financial Opinion Mining and Question Answering",
"authors": [
{
"first": "Macedo",
"middle": [],
"last": "Maia",
"suffix": ""
},
{
"first": "Siegfried",
"middle": [],
"last": "Handschuh",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [],
"last": "Freitas",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Mcdermott",
"suffix": ""
},
{
"first": "Manel",
"middle": [],
"last": "Zarrouk",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Balahur",
"suffix": ""
}
],
"year": 2018,
"venue": "Companion Proceedings of The Web Conference",
"volume": "",
"issue": "",
"pages": "1941--1942",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Macedo Maia, Siegfried Handschuh, Andr\u00e9 Freitas, Brian Davis, Ross McDermott, Manel Zarrouk, and Alexandra Balahur. 2018. WWW'18 Open Chal- lenge: Financial Opinion Mining and Question An- swering. In Companion Proceedings of The Web Conference, pages 1941-1942.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Good Debt or Bad Debt: Detecting Semantic Orientations in Economic Texts",
"authors": [
{
"first": "Pekka",
"middle": [],
"last": "Malo",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "Pekka",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Jyrki",
"middle": [],
"last": "Wallenius",
"suffix": ""
},
{
"first": "Pyry",
"middle": [],
"last": "Takala",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of the Association for Information Science and Technology",
"volume": "65",
"issue": "4",
"pages": "782--796",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pekka Malo, Ankur Sinha, Pekka Korhonen, Jyrki Wal- lenius, and Pyry Takala. 2014. Good Debt or Bad Debt: Detecting Semantic Orientations in Economic Texts. Journal of the Association for Information Science and Technology, 65(4):782-796.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The FinSim-2 2021 Shared Task: Learning Semantic Similarities for the Financial Domain",
"authors": [
{
"first": "Youness",
"middle": [],
"last": "Mansar",
"suffix": ""
},
{
"first": "Juyeon",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Ismail",
"middle": [
"El"
],
"last": "Maarouf",
"suffix": ""
}
],
"year": 2021,
"venue": "Companion Proceedings of the Web Conference",
"volume": "",
"issue": "",
"pages": "288--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Youness Mansar, Juyeon Kang, and Ismail El Maarouf. 2021. The FinSim-2 2021 Shared Task: Learn- ing Semantic Similarities for the Financial Domain. In Companion Proceedings of the Web Conference, pages 288-292.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Yagmur Ozturk",
"authors": [
{
"first": "Dominique",
"middle": [],
"last": "Mariko",
"suffix": ""
},
{
"first": "Estelle",
"middle": [],
"last": "Labidurie",
"suffix": ""
}
],
"year": null,
"venue": "Hanna Abi Akl, and Hugues de Mazancourt. 2020. Data Processing and Annotation Schemes for FinCausal Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.02498"
]
},
"num": null,
"urls": [],
"raw_text": "Dominique Mariko, Estelle Labidurie, Yagmur Oz- turk, Hanna Abi Akl, and Hugues de Mazan- court. 2020. Data Processing and Annotation Schemes for FinCausal Shared Task. arXiv preprint arXiv:2012.02498.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Detecting Omissions of Risk Factors in Company Annual Reports",
"authors": [
{
"first": "Corentin",
"middle": [],
"last": "Masson",
"suffix": ""
},
{
"first": "Syrielle",
"middle": [],
"last": "Montariol",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the IJCAI Workshop on Financial Technology and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corentin Masson and Syrielle Montariol. 2021. De- tecting Omissions of Risk Factors in Company An- nual Reports. In Proceedings of the IJCAI Workshop on Financial Technology and Natural Language Pro- cessing.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Variations in Word Usage for the Financial Domain",
"authors": [
{
"first": "Syrielle",
"middle": [],
"last": "Montariol",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Allauzen",
"suffix": ""
},
{
"first": "Asanobu",
"middle": [],
"last": "Kitamoto",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the IJCAI Workshop on Financial Technology and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Syrielle Montariol, Alexandre Allauzen, and Asanobu Kitamoto. 2021. Variations in Word Usage for the Financial Domain. In Proceedings of the IJCAI Workshop on Financial Technology and Natural Lan- guage Processing.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "TLR at the NTCIR-15 FinNum-2 Task: Improving Text Classifiers for Numeral Attachment in Financial Social Data",
"authors": [
{
"first": "Jose",
"middle": [
"G"
],
"last": "Moreno",
"suffix": ""
},
{
"first": "Emanuela",
"middle": [],
"last": "Boros",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Doucet",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 15th NTCIR Conference on Evaluation of Information Access Technologies",
"volume": "",
"issue": "",
"pages": "8--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jose G Moreno, Emanuela Boros, and Antoine Doucet. 2020. TLR at the NTCIR-15 FinNum-2 Task: Im- proving Text Classifiers for Numeral Attachment in Financial Social Data. In Proceedings of the 15th NTCIR Conference on Evaluation of Information Ac- cess Technologies, Tokyo, pages 8-11.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "What is a Support Vector Machine?",
"authors": [
{
"first": "William",
"middle": [
"S"
],
"last": "Noble",
"suffix": ""
}
],
"year": 2006,
"venue": "Nature Biotechnology",
"volume": "24",
"issue": "12",
"pages": "1565--1567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William S Noble. 2006. What is a Support Vector Ma- chine? Nature Biotechnology, 24(12):1565-1567.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "BERT Prescriptions to Avoid Unwanted Headaches: A Comparison of Transformer Architectures for Adverse Drug Event Detection",
"authors": [
{
"first": "Beatrice",
"middle": [],
"last": "Portelli",
"suffix": ""
},
{
"first": "Edoardo",
"middle": [],
"last": "Lenzi",
"suffix": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Serra",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beatrice Portelli, Edoardo Lenzi, Emmanuele Cher- soni, Giuseppe Serra, and Enrico Santus. 2021. BERT Prescriptions to Avoid Unwanted Headaches: A Comparison of Transformer Architectures for Ad- verse Drug Event Detection. In Proceedings of EACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Attention Is All You Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sentence Boundary Detection of Financial Data with Domain Knowledge Enhancement and Cross-lingual Training",
"authors": [
{
"first": "Mingyu",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Rong",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Klyueva",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Ahrens",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "David",
"middle": [
"Clive"
],
"last": "Broadstock",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Hing Wah",
"middle": [],
"last": "Yung",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of The First Workshop on Financial Technology and Natural Language Processing: The FinSBD Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingyu Wan, Rong Xiang, Emmanuele Chersoni, Natalia Klyueva, Kathleen Ahrens, Bin Miao, David Clive Broadstock, Jian Kang, Hing Wah Yung, and Chu-Ren Huang. 2019. Sentence Boundary Detection of Financial Data with Domain Knowl- edge Enhancement and Cross-lingual Training. In Proceedings of The First Workshop on Financial Technology and Natural Language Processing: The FinSBD Shared Task.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The CogALex Shared Task on Monolingual and Multilingual Identification of Semantic Relations",
"authors": [
{
"first": "Rong",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Emmanuele",
"middle": [],
"last": "Chersoni",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Iacoponi",
"suffix": ""
},
{
"first": "Enrico",
"middle": [],
"last": "Santus",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the COLING Workshop on Cognitive Aspects of the Lexicon",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong Xiang, Emmanuele Chersoni, Luca Iacoponi, and Enrico Santus. 2020. The CogALex Shared Task on Monolingual and Multilingual Identification of Se- mantic Relations. In Proceedings of the COLING Workshop on Cognitive Aspects of the Lexicon.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Financial Sentiment Analysis: An Investigation into Common Mistakes and Silver Bullets",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Malandri",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Xing, Lorenzo Malandri, Yue Zhang, and Erik Cambria. 2020. Financial Sentiment Analysis: An Investigation into Common Mistakes and Silver Bul- lets. In Proceedings of COLING.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "FinBERT: A Pretrained Language Model for Financial Communications",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"Christopher Siy"
],
"last": "Uy",
"suffix": ""
},
{
"first": "Allen",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.08097"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Mark Christopher Siy Uy, and Allen Huang. 2020. FinBERT: A Pretrained Language Model for Financial Communications. arXiv preprint arXiv:2006.08097.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Revisiting Fewsample BERT Fine-tuning",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Arzoo",
"middle": [],
"last": "Katiyar",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting Few- sample BERT Fine-tuning. In Proceedings of ICLR.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "SUMSUM@ FNS-2020 Shared Task",
"authors": [
{
"first": "Siyan",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Anneliese",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siyan Zheng, Anneliese Lu, and Claire Cardie. 2020. SUMSUM@ FNS-2020 Shared Task. In Proceed- ings of the Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books",
"authors": [
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "19--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 19-27.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Class distribution for each of the evaluation datasets.",
"type_str": "figure"
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "Descriptive statistics for all the experimental datasets: train and test splits, classes, max text length.",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>Datasets</td><td colspan=\"6\">FB w/ BV vs. BERT Base FB w/ FV vs. BERT Base FB w/ BV vs. FB w/ FV Micro-F1(%) Macro-F1(%) Micro-F1(%) Macro-F1(%) Micro-F1(%) Macro-F1(%)</td></tr><tr><td>Financial Phrase Bank</td><td>0.26</td><td>0.36</td><td>0.09</td><td>0.24</td><td>0.17</td><td>0.12</td></tr><tr><td>FinTextSen</td><td>1.44</td><td>-4.02</td><td>0.04</td><td>-3.49</td><td>1.4</td><td>-0.53</td></tr><tr><td>StockSen</td><td>0.76</td><td>0.91</td><td>-2.35</td><td>0.6</td><td>3.11</td><td>0.31</td></tr><tr><td>Causality Detection</td><td>0.04</td><td>0.14</td><td>0.27</td><td>0.03</td><td>-0.23</td><td>0.11</td></tr><tr><td>FinNum-1 subtask 1</td><td>0.31</td><td>1.37</td><td>-0.56</td><td>-0.93</td><td>0.87</td><td>2.3</td></tr><tr><td>FinNum-1 subtask 2</td><td>0.72</td><td>1.31</td><td>-0.67</td><td>1.26</td><td>1.39</td><td>0.05</td></tr><tr><td>FinNum-2</td><td>0.6</td><td>1.05</td><td>0.71</td><td>1.33</td><td>-0.11</td><td>-0.28</td></tr><tr><td>Sentiment Analysis</td><td>0.82</td><td>-0.92</td><td>-0.74</td><td>-0.88</td><td>1.56</td><td>-0.03</td></tr><tr><td>Numeral Understanding</td><td>0.52</td><td>1.34</td><td>-0.62</td><td>0.17</td><td>1.13</td><td>1.18</td></tr></table>",
"text": "Comparative results in terms of Micro-F1 and Macro-F1 (top scores per dataset/metric are in bold), with standard deviations for the BERT models.",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td>Text instance</td><td>Task</td><td colspan=\"2\">Golden Label Misclassified by</td></tr><tr><td colspan=\"2\">$AAPL StockSen</td><td>1</td><td>FB w/ BV</td></tr><tr><td>\u00a3$HMNY\u00a3 it's over. No one is going back. Once people get a</td><td>FinNum-2</td><td>0</td><td>All</td></tr><tr><td>deal. \u00a730 \u00a7 years ago I sold Toyotas for full sticker only. The</td><td/><td/><td/></tr><tr><td>world changes!</td><td/><td/><td/></tr><tr><td>\u00a3$SPY\u00a3 Tax reform scam is code word for bailout. After \u00a78 \u00a7</td><td/><td/><td/></tr><tr><td>years, the CBs are still pumping. They want to transfer wealth.</td><td/><td/><td/></tr><tr><td>Don't let them.</td><td/><td/><td/></tr></table>",
"text": "Force in VWAP is strong with this one.....no break since it fell below.....awesomeStockSen 0 All$GOOG $AMZN $FB Trump is not going to do anything to these companies. He wouldn t risk crashing the market before the election. That anti-trust talk is just smoke and mirrors.",
"html": null,
"num": null
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"text": "Error cases for different tasks, together with the right label and the models that misclassified the instance.",
"html": null,
"num": null
}
}
}
}