{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:53:50.040240Z"
},
"title": "The Global Banking Standards QA Dataset (GBS-QA)",
"authors": [
{
"first": "Kyunghwan",
"middle": [],
"last": "Sohn",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sunjae",
"middle": [],
"last": "Kwon",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Jaesik",
"middle": [],
"last": "Choi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A domain-specific question answering (QA) dataset can dramatically improve machine comprehension performance. This paper presents a new Global Banking Standards QA dataset (GBS-QA) in the banking regulation domain. The GBS-QA offers three values. First, it contains actual questions from market players and answers from the global rule setter, the Basel Committee on Banking Supervision (BCBS), collected in the course of creating and revising banking regulations. Second, financial regulation experts analyze and verify the pairs of questions and answers in the annotation process. Lastly, the GBS-QA differs substantially from existing datasets in finance and can stimulate transfer learning research in the banking regulation domain.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "A domain-specific question answering (QA) dataset can dramatically improve machine comprehension performance. This paper presents a new Global Banking Standards QA dataset (GBS-QA) in the banking regulation domain. The GBS-QA offers three values. First, it contains actual questions from market players and answers from the global rule setter, the Basel Committee on Banking Supervision (BCBS), collected in the course of creating and revising banking regulations. Second, financial regulation experts analyze and verify the pairs of questions and answers in the annotation process. Lastly, the GBS-QA differs substantially from existing datasets in finance and can stimulate transfer learning research in the banking regulation domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Global banking standards are strictly organized and have gradually evolved over time to reflect changes in the financial environment and newly emerging risks. The Basel Committee on Banking Supervision (BCBS), one of the international rule setters, creates and revises international banking standards in cooperation with member countries and international financial supervisory agencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Creating new rules and revising current standards require clear communication with related parties, such as financial supervisory authorities and market players around the globe. In the course of this work, Frequently Asked Questions (FAQs) are formally constructed and disclosed in order to help parties better understand how to implement provisions in practice. The FAQs contain questions from market players and answers from the BCBS. However, the original pairs of questions and answers in the FAQs are not standardized, so they need to be re-organized and revised in order to become qualified practices on the corresponding provisions. Considering that financial regulations directly influence the market and that interpreting and understanding the context of the FAQs requires expertise, financial regulation experts should be involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a new Global Banking Standards QA dataset (GBS-QA). The GBS-QA offers three values. First, given that the GBS-QA is composed of actual questions and answers used in practice, both market players and regulators can find it quite useful at work as additional guidelines. Second, financial regulation experts participate in this study to construct revised QA pairs that can be properly applied to NLP models. Third, the GBS-QA differs substantially from existing datasets in finance and can stimulate transfer learning research in the banking regulation domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is organized as follows. Related work is reviewed in Section 2. The process of collecting and annotating the GBS-QA is described in Section 3. The experimental results of the pretrained language model and its domain-adaptive model on the GBS-QA are presented in Section 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been much research on constructing new datasets in various domains and utilizing them for specific purposes. In finance, a financial news sentiment dataset (Ding et al., 2015) and a financial opinion mining and question answering dataset (Chen et al., 2020) are widely utilized. However, since these are rooted in financial news and the general narratives of financial reports, they are not equipped with the terminology of financial regulation and are therefore not suited to tasks in the financial regulation domain.",
"cite_spans": [
{
"start": 166,
"end": 185,
"text": "(Ding et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 247,
"end": 266,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Aside from the construction of new datasets, making AI-enabled NLP models more understandable and explainable is one of the top priorities. Recently, centered on the medical domain, text entailment (Abacha and Demner-Fushman, 2016; Fei et al., 2021) and neural-symbolic approaches (Han et al., 2021) have been applied to question answering tasks, and the combination of these approaches with input medical text contributes to the goal of enhancing interpretability and explainability. From the perspective of few-shot settings, a few-shot textual entailment framework combined with the graphical structure of the data also shows its effectiveness on downstream tasks (Yin et al., 2020).",
"cite_spans": [
{
"start": 230,
"end": 248,
"text": "(Han et al., 2021)",
"ref_id": "BIBREF7"
},
{
"start": 606,
"end": 624,
"text": "(Yin et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In addition, domain-adaptive pretraining has been proposed as a reliable way to achieve performance gains (Ding et al., 2015) in domain-specific tasks. FinBERT (Andrew and Gao, 2007) and BioBERT (Lee et al., 2020), for example, are domain-adaptive models.",
"cite_spans": [
{
"start": 111,
"end": 130,
"text": "(Ding et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 164,
"end": 186,
"text": "(Andrew and Gao, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 199,
"end": 217,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The GBS-QA is collected from the BCBS website 1 . The BCBS framework comprises 14 standards, and each standard has its provisions and the associated FAQs, which are matched with the corresponding provisions. As shown in Figure 1, Risk-based capital requirements (RBC), one of the standards, contains 4 sub-categories, and each sub-category has a varying number of provisions of different lengths. Provision 30.12, for example, raises two questions from market players, so the BCBS answers these questions with detailed explanations.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 226,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "GBS-QA",
"sec_num": "3"
},
{
"text": "An overview of the GBS-QA construction process is given in Figure 2. Starting from the BCBS framework, all standards and the associated FAQs are automatically filtered from the BCBS website. These data are then organized and preprocessed through a human annotation process. Human annotation includes organizing the provisions, matching the provisions with the corresponding FAQs, and reviewing the questions and answers in a reconciled manner. This review is conducted by independent annotators, namely five financial regulation experts. Through the human annotation process, pairs of questions and answers are classified into four types: 1) Binary answerable type, 2) WH type, 3) How type and 4) Conditional type. After question classification, questions are revised into the Binary answerable type, and answers are labelled \"Yes\" or \"No\" to the corresponding questions according to the GBS-QA classification guideline in Appendix A. After completing the whole process in Figure 2, the GBS-QA is constructed as a set of questions and answers in the banking regulation domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 959,
"end": 967,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "GBS-QA",
"sec_num": "3"
},
{
"text": "The BCBS website provides a well-structured platform. Through web crawling, the data are collected by filtering on \"basel_navigations_standard_selection\", \"basel_paragraphs\", \"faqs_and_footnotes\" and \"basel_faq\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic filtering of standards, questions and answers",
"sec_num": "3.1"
},
{
"text": "Human annotation mainly covers question classification and answer classification. This annotation process is conducted independently by five financial experts. Since the task requires expertise in interpreting international banking regulations, the annotators should be financial regulation experts qualified for this task. At the outset, a guideline is set to make the annotation of questions and answers transparent and consistent. The guideline is described in Appendix A. In compliance with the guideline, all pairs of questions and answers are first reviewed and then classified. After each annotator constructs a dataset of questions and answers individually, the five outcomes are compared and pairs are chosen through majority voting. Kappa statistics (Gururangan et al., 2020) are used as a quantitative metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Annotation",
"sec_num": "3.2"
},
{
"text": "Questions raised in the process of creating or revising standards are intended to make the corresponding provisions clearer with additional explanations, in order to help market players implement them compliantly. Given that answers directly influence the market, they are supposed to be transparent and concise, based on the attached references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question classification",
"sec_num": "3.2.1"
},
{
"text": "Considering the characteristics of the pairs of questions and answers in the BCBS framework, we reach the conclusion that if all questions are revised to be answerable with \"Yes\" or \"No\", the revised questions can be added to the corresponding provisions and ultimately become additional provisions. The question \"What is the difference between (the jurisdiction of) \"ultimate risk\" and (the jurisdiction of) \"immediate counterparty\" exposures?\", for example, can be transformed into \"Is it correct that the difference between \"ultimate risk\" and \"immediate counterparty\" exposures means that ... ?\", and the explanatory phrase or sentence becomes additional information. The point is to determine whether a certain behavior is compliant with regulation, and the answers in the FAQs can be regarded as qualified provisions interpreted by the BCBS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question classification",
"sec_num": "3.2.1"
},
{
"text": "To transform the questions to be answerable with \"Yes\" or \"No\", the question types should first be analyzed and classified. In this paper, we propose four question types: 1) Binary answerable type, 2) WH type, 3) How type and 4) Conditional type. The definitions of the four types are as follows. Examples are given in Appendix B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question classification",
"sec_num": "3.2.1"
},
{
"text": "A question which is answerable with \"Yes\" or \"No\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type1 : Binary answerable type",
"sec_num": "1."
},
{
"text": "A question which starts with \"What\", \"Where\", \"Why\" or \"When\", or has a similar nuance to these \"WH\" words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type2 : WH type",
"sec_num": "2."
},
{
"text": "3. Type3 : How type. A question which starts with \"How\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type2 : WH type",
"sec_num": "2."
},
{
"text": "A question which is not answerable with \"Yes\" or \"No\" in general. However, if certain condition is met, this question becomes Type1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "[Example]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "[Q] Can subordinated loans be included in regulatory capital?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "[A] As long as the subordinated loans meet all the criteria required for Additional Tier 1 or Tier 2 capital, it is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "According to the annotation guideline in Appendix A, questions of Type2, Type3 and Type4 are transformed into new questions that are answerable with \"Yes\" or \"No\", just like Type1. Type2 and Type3 questions have a specific phrase or sentence as the corresponding answer. In this case, the questions are simply revised into the sentence structure \"Is it correct that ...\" or \"Does it mean that ...\". Type4 becomes answerable in the same way as Type1 if certain conditions are met. In this regard, the condition captured from the context of the answer is placed before or after the original question with \"if\", \"in cases where\" or \"as long as\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "In line with question classification, answer classification is performed independently by the five annotators.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Answer classification",
"sec_num": "3.2.2"
},
{
"text": "The total number of question-answer pairs in the GBS-QA is 186. The question type distribution is shown in Table 1. Type1 accounts for the highest share at 39%, and Type4 ranks second with 30%. The coverage ratio of \"Yes\" answers is 70%, whereas that of \"No\" is 30%.",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 115,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Classification result",
"sec_num": "3.2.3"
},
{
"text": "The GBS-QA is quite different from other datasets in finance, as shown in Figure 3. Comparing the top 4 words across the three datasets, the GBS-QA contains the words \"capital\", \"risk\", \"bank\" and \"Tier\", which are closely related to banking regulations, whereas FiQA includes \"stock\", \"money\", \"company\" and \"credit\", and PhraseBank covers \"company\", \"profit\", \"net\" and \"sales\". FiQA focuses on the stock market and PhraseBank deals with corporate financial performance. In addition, over the top 100 words, the Jaccard similarity of the GBS-QA with the other two datasets is 0.10 and 0.05, respectively. Notably, the top 10 words in the GBS-QA have no significant intersection with the others. This implies that the word distribution of the GBS-QA is far from those of the existing datasets in finance.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Comparison with other datasets in finance",
"sec_num": "3.3"
},
{
"text": "One of the core issues in constructing the GBS-QA is how to revise the original questions to be answerable with \"Yes\" or \"No\" without significant loss of the information in the original pairs of questions and answers. It is quite difficult to measure how much information is lost in this transformation process. However, if it is guaranteed that the revised questions contain the correct phrases or sentences from the original answers, we can qualitatively say that little information is lost. Among the four question types proposed in Section 3.2.1, Type3 (How type) and Type4 (Conditional type) are much harder to transform, because they require a complete understanding of the context of the answers in order to extract the correct phrases or sentences concisely. In this regard, before releasing the GBS-QA, all QA pairs need to be thoroughly reviewed. Ultimately, the GBS-QA should be confirmed by the BCBS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Important issues on GBS-QA",
"sec_num": "3.4"
},
{
"text": "The experiments aim to show that the GBS-QA can stimulate transfer learning on the classification task. Considering the uniqueness and expertise of the GBS-QA, domain-adaptive language models, which are trained on all provisions in the current standards and all answers, are used for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4"
},
{
"text": "As shown in Figure 4, we start from pre-trained models such as RoBERTa (Liu et al., 2019) and ELECTRA (Clark et al., 2020). By training on the domain knowledge of the banking standards, the pre-trained models evolve into post-trained models qualified for dealing with the GBS-QA. Next, these post-trained models are fine-tuned for the classification task with pairs of questions and answers (\"Yes\" or \"No\") and the associated provisions. The final model is called the BankReg QA model.",
"cite_spans": [
{
"start": 73,
"end": 91,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 104,
"end": 124,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4"
},
{
"text": "Due to the lack of data points in the GBS-QA, there is no significant difference in the experimental results between the pre-trained and post-trained models on the GBS-QA. Moreover, some questions are longer than 512 tokens, the maximum input length of the model, and other questions require interpreting equations to calculate risk factors or risk exposures, so the models cannot handle them properly. However, the post-trained model shows slightly lower performance on both the FiQA and PhraseBank datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We propose a new QA dataset, called the GBS-QA, in the banking regulation domain. The GBS-QA contains pairs of questions and answers re-organized and verified by financial regulation experts. The extra information extracted from the GBS-QA can become qualified practices on the associated provisions in the regulation. The analysis and experiments show that the GBS-QA has a different word distribution from other existing datasets in finance and that it is applicable to NLP models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "First of all, the GBS-QA consists of high-quality but relatively few QA pairs. Therefore, it is necessary to expand the size of the dataset. However, labelling large-scale data with qualified financial experts is tremendously costly. Semi-supervised approaches are one possible way to construct large-scale data at minimal cost. We can consider using bootstrapping, which iteratively expands the data from good-quality seed data. In addition, we can utilize the relationships among provisions. These relationships can be drawn as a graph, and the graphical structure can be used in training and fine-tuning. Lastly, studies on text entailment and non-monotonic reasoning are applicable to analyzing and understanding financial regulation in NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "sentence is an answer to the corresponding question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "7"
},
{
"text": "A question which starts with \"How\". Interpretation or explanation is an answer to the corresponding question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type3 : How type",
"sec_num": "3."
},
{
"text": "A question which is not answerable with \"Yes\" or \"No\" in general. However, if certain condition is met, this question becomes Type1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "Question classification depends solely on the experts' viewpoints and interpretations from scratch. To verify the classification results from the five independent annotators, majority voting and a reliability test with Kappa statistics are applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "One of the biggest challenges is how to revise questions into Type1. To save time and effort, one leading annotator transforms the questions of Type2, Type3 and Type4. For Type2 and Type3, the phrase, sentence or set of sentences in the context of the corresponding answer is first identified, and a new question is constructed with the sentence structure \"Is it correct that ...\" or \"Does it mean that ...\". These structures are quite simple; however, we all agree that they are among the most effective and intuitively understandable approaches. Type4 is revised by adding the condition captured from the context of the corresponding answer before or after the question with \"If + condition\". After question classification, answers are labelled \"Yes\" or \"No\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "B Examples of question types B.1 Type1: Binary answerable type",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type4 : Conditional type",
"sec_num": "4."
},
{
"text": "Regarding CAP10.11(16), consider a bank that issues capital out of a foreign subsidiary and wishes to use such capital to meet both the solo requirements of the foreign subsidiary and include the capital in the consolidated capital of the group. Is it correct that the relevant authority in the jurisdiction of the consolidated supervisor must have the power to trigger write-down / conversion of the instrument, in addition to the relevant authority in the jurisdiction of the foreign subsidiary?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "original question",
"sec_num": "1."
},
{
"text": "2. original answer: Yes, this is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "original question",
"sec_num": "1."
},
{
"text": "What is the difference between (the jurisdiction of) \"ultimate risk\" and (the jurisdiction of) \"immediate counterparty\" exposures?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Type2: WH type original question",
"sec_num": null
},
{
"text": "original answer: The concepts of \"ultimate risk\" and \"immediate risk\" are those used by the BIS' International Banking Statistics. The jurisdiction of \"immediate counterparty\" refers to the jurisdiction of residence of immediate counterparties, while the jurisdiction of \"ultimate risk\" is where the final risk lies. For the purpose of the countercyclical capital buffer, banks should use, where possible, exposures on an \"ultimate risk\" basis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 Type2: WH type original question",
"sec_num": null
},
{
"text": "Is it correct that the jurisdiction of \"immediate counterparty\" refers to the jurisdiction of residence of immediate counterparties, while the jurisdiction of \"ultimate risk\" is where the final risk lies?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "revised question",
"sec_num": null
},
{
"text": "revised answer Yes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "revised question",
"sec_num": null
},
{
"text": "How is the final bank-specific buffer add-on calculated?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.3 Type3: How type original question",
"sec_num": null
},
{
"text": "original answer: The final bank-specific buffer add-on amount is calculated as the weighted average of the countercyclical capital buffer add-on rates applicable in the jurisdiction(s) in which a bank has private sector credit exposures (including the bank's home jurisdiction), multiplied by total risk-weighted assets. The weight for the buffer add-on rate applicable in a given jurisdiction is the credit risk charge that relates to private sector credit exposures allocated to that jurisdiction, divided by the bank's total credit risk charge that relates to private sector credit exposures across all jurisdictions. Where the private sector credit exposures (as defined in RBC30.13(FAQ1)) to a jurisdiction, including the home jurisdiction, are zero, the weight to be allocated to that jurisdiction would be zero.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.3 Type3: How type original question",
"sec_num": null
},
{
"text": "Is it correct that the final bank-specific buffer add-on amount is calculated as the weighted average of the countercyclical capital buffer add-on rates applicable in the jurisdiction(s) in which a bank has private sector credit exposures (including the bank's home jurisdiction), multiplied by total risk-weighted assets? revised answer: Yes. B.4 Type4: Conditional type. original question: Can subordinated loans be included in regulatory capital? original answer: Yes. As long as the subordinated loans meet all the criteria required for Additional Tier 1 or Tier 2 capital, banks can include these items in their regulatory capital. revised question: As long as the subordinated loans meet all the criteria required for Additional Tier 1 or Tier 2 capital, can subordinated loans be included in regulatory capital? revised answer: Yes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "revised question",
"sec_num": null
},
{
"text": "https://www.bis.org/basel_framework/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This annotation guideline describes the process for question classification and answer classification. Question classification means that every question is categorized into one of the pre-determined types, considering the context of questions and answers in a reconciled manner. The four types are as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Annotation guideline",
"sec_num": null
},
{
"text": "A question which is answerable with \"Yes\" or \"No\" without critical difficulty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type1 : Binary answerable type",
"sec_num": "1."
},
{
"text": "A question which starts with \"What\", \"Where\", \"Why\" or \"When\", or has a similar nuance to these \"WH\" words. Specific phrase or",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Type2 : WH type",
"sec_num": "2."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Recognizing question entailment for medical question answering",
"authors": [
{
"first": "Asma",
"middle": [],
"last": "Ben Abacha",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
}
],
"year": 2016,
"venue": "AMIA Annual Symposium Proceedings",
"volume": "2016",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asma Ben Abacha and Dina Demner-Fushman. 2016. Recognizing question entailment for medical ques- tion answering. In AMIA Annual Symposium Pro- ceedings, volume 2016, page 310. American Medi- cal Informatics Association.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Scalable training of L1-regularized log-linear models",
"authors": [
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 24th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Galen Andrew and Jianfeng Gao. 2007. Scalable train- ing of L1-regularized log-linear models. In Proceed- ings of the 24th International Conference on Ma- chine Learning, pages 33-40.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fine-grained financial opinion mining: A survey and research agenda",
"authors": [
{
"first": "Chung-Chi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hen-Hsen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.01897"
]
},
"num": null,
"urls": [],
"raw_text": "Chung-Chi Chen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2020. Fine-grained financial opinion min- ing: A survey and research agenda. arXiv preprint arXiv:2005.01897.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Electra: Pre-training text encoders as discriminators rather than generators",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2003.10555"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than genera- tors. arXiv preprint arXiv:2003.10555.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deep learning for event-driven stock prediction",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Junwen",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2015,
"venue": "Twenty-fourth international joint conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2015. Deep learning for event-driven stock predic- tion. In Twenty-fourth international joint conference on artificial intelligence.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Adversarial sharedprivate model for cross-domain clinical text entailment recognition. Knowledge-Based Systems",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Fei",
"suffix": ""
},
{
"first": "Yuanpei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bobo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Donghong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Yafeng",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "221",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Fei, Yuanpei Guo, Bobo Li, Donghong Ji, and Yafeng Ren. 2021. Adversarial shared- private model for cross-domain clinical text en- tailment recognition. Knowledge-Based Systems, 221:106962.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "2020. Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": ["A"],
"last": "Smith",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.10964"
]
},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unifying neural learning and symbolic reasoning for spinal medical report generation",
"authors": [
{
"first": "Zhongyi",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Benzheng",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yilong",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2021,
"venue": "Medical Image Analysis",
"volume": "67",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongyi Han, Benzheng Wei, Xiaoming Xi, Bo Chen, Yilong Yin, and Shuo Li. 2021. Unifying neu- ral learning and symbolic reasoning for spinal med- ical report generation. Medical Image Analysis, 67:101872.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Biobert: a pre-trained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2020,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. Biobert: a pre-trained biomed- ical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Universal natural language processing with limited annotations: Try few-shot textual entailment as a start",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Nazneen",
"middle": ["Fatema"],
"last": "Rajani",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.02584"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, and Caiming Xiong. 2020. Universal natural language processing with limited annotations: Try few-shot textual entailment as a start. arXiv preprint arXiv:2010.02584.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "An Example of provisions, questions and answers in GBS-QA et al., 2021) and deep neural reasoning",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "Overview of a GBS-QA construction process.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "top 4 words distribution of GBS-QA, FiQA and PhraseBank.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Overview of a BankReg QA model construction process.",
"uris": null
},
"TABREF1": {
"text": "",
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null
}
}
}
}