{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:23:50.599296Z"
},
"title": "LIORI at FinCausal 2020, Tasks 1 & 2",
"authors": [
{
"first": "Adis",
"middle": [],
"last": "Davletov",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Denis",
"middle": [],
"last": "Gordeev",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Alexey",
"middle": [],
"last": "Rey",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Nikolay",
"middle": [],
"last": "Arefyev",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we describe the results of team LIORI at the FinCausal 2020 Shared task held as a part of the 1st Joint Workshop on Financial Narrative Processing and MultiLingual Financial Summarisation. The shared task consisted of two subtasks: 1) classifying whether a sentence contains any causality and 2) labelling phrases that indicate causes and consequences. We used Transformer-based models with joint-task learning and their voting ensembles. Our team ranked 1st in the first subtask and 4th in the second one.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we describe the results of team LIORI at the FinCausal 2020 Shared task held as a part of the 1st Joint Workshop on Financial Narrative Processing and MultiLingual Financial Summarisation. The shared task consisted of two subtasks: 1) classifying whether a sentence contains any causality and 2) labelling phrases that indicate causes and consequences. We used Transformer-based models with joint-task learning and their voting ensembles. Our team ranked 1st in the first subtask and 4th in the second one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Financial Document Causality Detection Task was devoted to finding causes and consequences in financial news (Mariko et al., 2020) . This task is relevant for information retrieval and economics. This task was focused on causality associated with a financial event while an event was \"defined as the arising or emergence of a new object or context in regard to a previous situation\".",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "(Mariko et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The shared task consisted of two subtasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Sentence Classification",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It was a binary classification task. The goal of this subtask was to detect whether a sentence displayed any causal meanings or not",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Cause and Effect Detection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This task was a relation detection task. Participants needed to identify \"in a causal sentence or text block the causal elements and the consequential ones\" 1 . This task could be considered as a sequence labelling problem because individual words and phrases corresponded to three labels: cause, consequence, empty label. Each word or character corresponded to only one label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
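{
"text": "To make the labelling scheme concrete, here is a minimal sketch in Python (the sentence, tag names and variable names are our illustrative choices, not part of the task data):\n\ntokens = ['Profits', 'fell', 'because', 'demand', 'declined', '.']\nlabels = ['E', 'E', 'O', 'C', 'C', 'O']  # 'C' = cause, 'E' = consequence, 'O' = empty\nassert len(tokens) == len(labels)  # each word corresponds to exactly one label",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},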
{
"text": "For both tasks simultaneously we used a single Transformer-based model (Vaswani et al., 2017) with two inputs and outputs for each of the tasks respectively. The first task was treated as a classification task with a single label for the input, while for the second the label was predicted for each input word. The training and dataset processing code is published on our GitHub page 2 . Our team ranked 1st in the first subtask and 4th in the second one.",
"cite_spans": [
{
"start": 71,
"end": 93,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http:// creativecommons.org/licenses/by/4.0/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are many works devoted to sequence labelling in various domains as it is one of the most popular tasks in Natural Language Processing (NLP). Causality detection in texts is also a very old topic. First works date back to the 80s according to the report by Asghar (Asghar, 2016) . Recently there have appeared works that leverage neural networks against for causality labelling (Li et al., 2019) . The results of neural networks there seem to be in line with the performance for other sequence labelling tasks such as named entity recognition (Ghaddar and Langlais, 2018) for Bi-LSTM models according to paperswithcode.com 3 . For our work, we adopted a Transformer-based approach as it performs the best against current models for sequence labelling and relation extraction. For example, if we look again at named entity recognition (one of the most popular sequence labelling tasks) -at paperswithcode.com 4 , we can see that the top 3 best performing use an attention-based model for Ontonotes v5 and CoNLL 2003. Some recent works have also shown that multi-task learning can produce better results if we have several targets for the same input due to eavesdropping and lower task-bias (Ruder, 2017) , thus discouraging model from over-fitting. Recent competitions, where multi-task models perform well, also prove this point (Dai et al., 2020; Davletov et al., 2020; Gordeev and Lykova, 2020 ).",
"cite_spans": [
{
"start": 269,
"end": 283,
"text": "(Asghar, 2016)",
"ref_id": "BIBREF0"
},
{
"start": 383,
"end": 400,
"text": "(Li et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 548,
"end": 576,
"text": "(Ghaddar and Langlais, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 1194,
"end": 1207,
"text": "(Ruder, 2017)",
"ref_id": "BIBREF9"
},
{
"start": 1334,
"end": 1352,
"text": "(Dai et al., 2020;",
"ref_id": "BIBREF1"
},
{
"start": 1353,
"end": 1375,
"text": "Davletov et al., 2020;",
"ref_id": "BIBREF2"
},
{
"start": 1376,
"end": 1400,
"text": "Gordeev and Lykova, 2020",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The task dataset has been extracted from different 2019 financial news provided by Qwam 5 . The corpus consists of HTML-pages of financial news from 2019. It also contains various financial and legal reports from the SEC Edgar Database ticker list, filtered on financial keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "The texts have been normalized for the research task in the following way:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "\u2022 First, the text was split into sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "\u2022 Then, sentences containing causal elements were identified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "\u2022 The document text is then split into passages of consecutive sentences, keeping causally-related sentences in the same passage which are used for binary predictions in the first subtask.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "\u2022 Passages with positive classes are used as the dataset for the second subtask.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "\u2022 The organizers provide the start and end indices for causes and effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
{
"text": "The dataset was split into trial, train and test datasets by the organizers. The trial and train parts contained training labels, while the test part did not include them and was used for ranking. We combined the trial and train parts and used 20% of the combined dataset for validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},
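{
"text": "A minimal sketch of this split (assuming pandas and scikit-learn; the file names and CSV separator are our assumptions, not part of the official distribution):\n\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n# Hypothetical file names for the trial and train parts released by the organizers.\ntrial = pd.read_csv('fincausal_trial.csv', sep=';')\ntrain = pd.read_csv('fincausal_train.csv', sep=';')\n\n# Combine the trial and train parts, then hold out 20% for validation.\ncombined = pd.concat([trial, train], ignore_index=True)\ntrain_part, val_part = train_test_split(combined, test_size=0.2, random_state=42)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3"
},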
{
"text": "In this work, we went with multitask Transformer-based models for both subtasks. It means that we had two inputs and outputs, for each of the tasks respectively. In this work we tried BERT (Devlin et al., 2018) and ROBERTa (Liu et al., 2019) based models. BERT is a multilingual language model based on self-attention. ROBERTa is a \"robustly optimized\" BERT variant with larger mini-batches and byte-level BPE (byte-pair encodings). In both cases we used English large model variants (bert-large and robertalarge). On top of pre-trained BERT and ROBERTa models, we added two Linear layers with dropout for each of the tasks. Cross-entropy was used for training the models. Thus, we had two loss functions (for each of the output layers) that were weighted and concatenated. All used models were provided by Hugging Face (Wolf et al., 2019) . Our combined loss function can be seen below, where L a is the first subtask loss and L b is the second subtask loss.",
"cite_spans": [
{
"start": 189,
"end": 210,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 223,
"end": 241,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 820,
"end": 839,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
{
"text": "L a = \u2212 1 m m j=1 Nc i=1 y i \u2022 log(\u0177 i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
{
"text": "where m is the number of samples in the batch, y i is the target value,\u0177 i -our predicted value and N c is the number of classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
{
"text": "L b = \u2212 1 m m i=1 1 N j N j j=1 Nc c=1 y c \u2022 log(\u0177 c )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
{
"text": "where m is the number of samples in the batch, N j is the number of tokens in the batch, N c is the number of NER classes,\u0177 c -the predicted NER class and y c is the target value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
{
"text": "L = \u03bb a L a + \u03bb b L b ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
{
"text": "where \u03bb are scalar weights for the loss functions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
{
"text": "All padded words and non-labeled words (and their resulting tokens) were excluded from loss function calculation and not included into N j , while special '[SEP]' and '[CLS]' tokens were included.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
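{
"text": "A compact sketch of the model and combined loss described above (our reconstruction, not the authors' published code; it assumes PyTorch and the Hugging Face transformers library, the class and argument names are illustrative, and it uses the common convention of marking excluded tokens with the label -100, which PyTorch's cross-entropy ignores):\n\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom transformers import AutoModel\n\nIGNORE_INDEX = -100  # label for padded / non-labelled tokens, excluded from N_j\n\nclass JointCausalityModel(nn.Module):\n    def __init__(self, model_name='roberta-large', num_tag_labels=3, dropout=0.1):\n        super().__init__()\n        self.encoder = AutoModel.from_pretrained(model_name)\n        hidden = self.encoder.config.hidden_size\n        # Head for subtask 1: binary sentence classification.\n        self.cls_head = nn.Sequential(nn.Dropout(dropout), nn.Linear(hidden, 2))\n        # Head for subtask 2: per-token cause/consequence labelling.\n        self.tag_head = nn.Sequential(nn.Dropout(dropout), nn.Linear(hidden, num_tag_labels))\n\n    def forward(self, input_ids, attention_mask):\n        states = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state\n        cls_logits = self.cls_head(states[:, 0])  # first-token representation\n        tag_logits = self.tag_head(states)  # one prediction per token\n        return cls_logits, tag_logits\n\ndef joint_loss(cls_logits, cls_targets, tag_logits, tag_targets, lambda_a=1.0, lambda_b=0.1):\n    # L = lambda_a * L_a + lambda_b * L_b, with cross-entropy for both heads.\n    loss_a = F.cross_entropy(cls_logits, cls_targets)\n    # Note: ignore_index averages over all kept tokens in the batch, a close\n    # stand-in for the per-sample averaging in L_b above.\n    loss_b = F.cross_entropy(tag_logits.transpose(1, 2), tag_targets, ignore_index=IGNORE_INDEX)\n    return lambda_a * loss_a + lambda_b * loss_b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},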
{
"text": "While training models for the first subtask we tested a number of weighting schemes ranging between 2 and 0 for sequence labelling subtask loss. However, for the second subtask, the weights for text classification loss were set to zero which makes the model equivalent to a general sequence labelling model. We also tried various sequence labelling formats of the second subtask input: BIO (beginning, inside, outside) and BIEO (beginning, inside, end, outside). Learning rates in the range between 5e \u2212 06 and 5e \u2212 05 were tested. Dropout coefficients were tested from 0.1 to 0.2. For the first subtask, there were also provided the results for ensembles of the best 3, 4 and 5 performing models according to the validation dataset. Simple voting ensembles were used. We used a system with 2 NVidia RTX2080 GPUs and Google Colab to train all models. Table we provide the results for only the best and the worst 3 models and of the ensembles of the top-N performing models. The results are sorted from the bottom to the top.",
"cite_spans": [],
"ref_spans": [
{
"start": 851,
"end": 859,
"text": "Table we",
"ref_id": null
}
],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},
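{
"text": "The voting ensembles mentioned above amount to simple majority voting over the per-example predictions of several models; a minimal sketch (the function and argument names are ours):\n\nfrom collections import Counter\n\ndef majority_vote(predictions):\n    # predictions: one list of predicted class ids per model,\n    # all covering the same examples in the same order.\n    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]\n\n# e.g. three models voting on four examples:\n# majority_vote([[1, 0, 1, 1], [1, 1, 0, 1], [0, 0, 1, 1]]) -> [1, 0, 1, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution",
"sec_num": "4"
},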
{
"text": "For the first subtask, the organizers used F1-score. For the second subtask, the metric is a weighted average F1 score, where the F1 score of each class is balanced by the number of items in each class (see (Mariko et al., 2020) ).",
"cite_spans": [
{
"start": 207,
"end": 228,
"text": "(Mariko et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
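{
"text": "Both metrics can be reproduced with scikit-learn (a sketch with made-up labels; the official scorer may differ in details):\n\nfrom sklearn.metrics import f1_score\n\n# Subtask 1: plain F1 on binary causal / non-causal labels.\nprint(f1_score([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))\n\n# Subtask 2: weighted-average F1, where the F1 of each class is weighted\n# by the number of items in that class (its support).\nprint(f1_score([0, 0, 1, 1, 2, 2, 2], [0, 1, 1, 1, 2, 2, 0], average='weighted'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},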
{
"text": "In the first subtask our final model achieved F1 equal to 0.977 on the leaderboard (the next participant's score is 0.975 F1), in the second subtask our result was 0.826 F1 with the winning solution having 0.947 F1. The results of individual models and their hyperparameters can be seen in Tables 1 and 2 for each of the subtasks respectively. As can be seen from Table 1 for subtask 1 ROBERTa robustly outperforms BERT for the first subtask. The best top-3 single models are ROBERTa-based with various hyperparameters. It can also be seen that sequence loss improves model results, but the best models have their weights scaled down by 0.1. It also should be noted that the difference between all individual models is small and the difference between the best and the worst-performing ones is less than 0.1 F-1-score point. For the first subtask, we also tried an ensemble of 3,4 and 5 best performing individual models. The increase in the number of the used best models consistently improved the results. Thus, it may be also beneficial to train other types of models or to increase the number of models in an ensemble.",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 304,
"text": "Tables 1 and 2",
"ref_id": "TABREF1"
},
{
"start": 364,
"end": 371,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Paradoxically, for the second subtask BERT-based models consistently outperform ROBERTa based ones. Moreover, the difference is much larger and constitutes more than 0.7 F1-score points. We did not try ensemble-based models for the second subtask. It also can be seen that all our models tend to overfit to the training and validation datasets. A more robust training scheme such as k-fold cross validation might be of benefit here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "This paper describes the results of team LIORI at the FinCausal 2020 Shared task held as a part of the 1st Joint Workshop on Financial Narrative Processing and MultiLingual Financial Summarisation. The shared task consisted of two subtasks: classifying whether a sentence contains any causality and labelling phrases which indicate causes and consequences. Transformer-based models with joint-task learning were used. In this paper we show that different model architectures perform better for different subtasks and that joint-task learning might improve results for some subtasks. However, it also results in slight overfitting for sequence labelling task and might require further investigation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://wp.lancs.ac.uk/cfie/fincausal2020/ 2 https://github.com/InstituteForIndustrialEconomics/fincausal-2020",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5 4 https://paperswithcode.com/task/named-entity-recognition-ner 5 http://www.qwamci.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the organisers of the competition for such an inspiring task. We are grateful to our reviewers for their useful suggestions. The contribution of Nikolay Arefyev to the paper was partially done within the framework of the HSE University Basic Research Program funded by the Russian Academic Excellence Project '5-100'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic extraction of causal relations from natural language texts: a comprehensive survey",
"authors": [
{
"first": "Nabiha",
"middle": [],
"last": "Asghar",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1605.07895"
]
},
"num": null,
"urls": [],
"raw_text": "Nabiha Asghar. 2016. Automatic extraction of causal relations from natural language texts: a comprehensive survey. arXiv preprint arXiv:1605.07895.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Kungfupanda at semeval-2020 task 12: Bertbased multi-task learning for offensive language detection",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Tiezheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Zihan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Pascale",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.13432"
]
},
"num": null,
"urls": [],
"raw_text": "Wenliang Dai, Tiezheng Yu, Zihan Liu, and Pascale Fung. 2020. Kungfupanda at semeval-2020 task 12: Bert- based multi-task learning for offensive language detection. arXiv preprint arXiv:2004.13432.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Renersans: Relation extraction and named entity recognition as sequence annotation",
"authors": [
{
"first": "Adis",
"middle": [],
"last": "Davletov",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Gordeev",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Rey",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Arefyev",
"suffix": ""
}
],
"year": 2020,
"venue": "Computational Linguistics and Intellectual Technologies",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adis Davletov, Denis Gordeev, Alexey Rey, and Nikolay Arefyev. 2020. Renersans: Relation extraction and named entity recognition as sequence annotation. In Computational Linguistics and Intellectual Technologies, pages 187-197.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirec- tional Transformers for Language Understanding. oct.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Robust lexical features for improved neural network named-entity recognition",
"authors": [
{
"first": "Abbas",
"middle": [],
"last": "Ghaddar",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Langlais",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.03489"
]
},
"num": null,
"urls": [],
"raw_text": "Abbas Ghaddar and Philippe Langlais. 2018. Robust lexical features for improved neural network named-entity recognition. arXiv preprint arXiv:1806.03489.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert of all trades, master of some",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Gordeev",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Lykova",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying",
"volume": "",
"issue": "",
"pages": "93--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Gordeev and Olga Lykova. 2020. Bert of all trades, master of some. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 93-98.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Causality extraction based on self-attentive bilstm-crf with transferred embeddings",
"authors": [
{
"first": "Zhaoning",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaotian",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Jiangtao",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.07629"
]
},
"num": null,
"urls": [],
"raw_text": "Zhaoning Li, Qi Li, Xiaotian Zou, and Jiangtao Ren. 2019. Causality extraction based on self-attentive bilstm-crf with transferred embeddings. arXiv preprint arXiv:1904.07629.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach. arxiv.org",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arxiv.org.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hugues de Mazancourt, and Mahmoud El-Haj. 2020. The Financial Document Causality Detection Shared Task (FinCausal 2020)",
"authors": [
{
"first": "Dominique",
"middle": [],
"last": "Mariko",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Abi Akl",
"suffix": ""
},
{
"first": "Estelle",
"middle": [],
"last": "Labidurie",
"suffix": ""
},
{
"first": "Stephane",
"middle": [],
"last": "Durfort",
"suffix": ""
},
{
"first": "Hugues",
"middle": [],
"last": "de Mazancourt",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "El-Haj",
"suffix": ""
}
],
"year": null,
"venue": "The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominique Mariko, Hanna Abi Akl, Estelle Labidurie, Stephane Durfort, Hugues de Mazancourt, and Mah- moud El-Haj. 2020. The Financial Document Causality Detection Shared Task (FinCausal 2020). In The 1st Joint Workshop on Financial Narrative Processing and MultiLing Financial Summarisation (FNP-FNS 2020, Barcelona, Spain.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An overview of multi-task learning in",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2017,
"venue": "deep neural networks",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.05098"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Adv. Neural Inf. Process. Syst",
"volume": "2017",
"issue": "",
"pages": "5999--6009",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Adv. Neural Inf. Process. Syst., volume 2017-Decem, pages 5999-6009.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "HuggingFace's Transformers: State-ofthe-art Natural Language Processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of- the-art Natural Language Processing. ArXiv, abs/1910.0.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Model results for Subtask 1: Sentence Classification. In the",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "Cause and Effect Detection. In theTable,there are provided the results for only the best and the worst 3 models. The results are sorted from the bottom to the top.",
"num": null,
"content": "<table><tr><td>Test</td><td>Validation</td><td>Model</td><td>Target</td><td>Learning</td><td>Text Loss</td><td>Sequence</td><td>Dropout</td></tr><tr><td>Score</td><td>Score</td><td/><td>Format</td><td>Rate</td><td>Weight</td><td>Loss</td><td>Rate</td></tr><tr><td/><td/><td/><td/><td/><td/><td>Weight</td><td/></tr><tr><td colspan=\"3\">0.754986 0.872582 roberta</td><td>bio</td><td>0.0001</td><td>0.0</td><td>1.0</td><td>0.1</td></tr><tr><td>0.76584</td><td>0.82897</td><td>roberta</td><td>bio</td><td>0.0001</td><td>0.0</td><td>1.0</td><td>0.2</td></tr><tr><td colspan=\"3\">0.794089 0.865707 roberta</td><td>bio</td><td>9e-05</td><td>0.0</td><td>1.0</td><td>0.2</td></tr><tr><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td><td>...</td></tr><tr><td colspan=\"3\">0.823952 0.898873 bert</td><td>bio</td><td>0.0001</td><td>0.0</td><td>1.0</td><td>0.2</td></tr><tr><td colspan=\"3\">0.824818 0.894067 bert</td><td>bio</td><td>7e-05</td><td>0.0</td><td>1.0</td><td>0.2</td></tr><tr><td colspan=\"3\">0.826049 0.906328 bert</td><td>bio</td><td>0.0001</td><td>0.0</td><td>1.0</td><td>0.1</td></tr><tr><td colspan=\"3\">Table 2: Model results for Subtask 2:</td><td/><td/><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}