{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:41:11.774276Z"
},
"title": "Fine-tuning BERT with Focus Words for Explanation Regeneration",
"authors": [
{
"first": "Isaiah",
"middle": [],
"last": "Onando Mulang'",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Bonn",
"location": {
"settlement": "Bonn",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "D'souza",
"suffix": "",
"affiliation": {},
"email": "jennifer.dsouza|[email protected]"
},
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Explanation generation introduced as the WorldTree corpus (Jansen et al., 2018) is an emerging NLP task involving multi-hop inference for explaining the correct answer in multiple-choice QA. It is a challenging task evidenced by low state-of-the-art performances (below 60% in F-score) demonstrated on the task. Of the state-of-the-art approaches, finetuned transformer-based (Vaswani et al., 2017) BERT models have shown great promise toward continued system performance improvements compared with approaches relying on surface-level cues alone that demonstrate performance saturation. In this work, we take a novel direction by addressing a particular linguistic characteristic of the data-we introduce a novel and lightweight focus feature in the transformer-based model and examine task improvements. Our evaluations reveal a significantly positive impact of this lightweight focus feature achieving highest scores, second only to a significantly computationally intensive system.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Explanation generation introduced as the WorldTree corpus (Jansen et al., 2018) is an emerging NLP task involving multi-hop inference for explaining the correct answer in multiple-choice QA. It is a challenging task evidenced by low state-of-the-art performances (below 60% in F-score) demonstrated on the task. Of the state-of-the-art approaches, finetuned transformer-based (Vaswani et al., 2017) BERT models have shown great promise toward continued system performance improvements compared with approaches relying on surface-level cues alone that demonstrate performance saturation. In this work, we take a novel direction by addressing a particular linguistic characteristic of the data-we introduce a novel and lightweight focus feature in the transformer-based model and examine task improvements. Our evaluations reveal a significantly positive impact of this lightweight focus feature achieving highest scores, second only to a significantly computationally intensive system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Multi-hop Inference for Explanation Regeneration (MIER) is an emerging task in NLP that concerns aggregating facts to justify the correct answer choice in multiple-choice question answering settings. The WorldTree corpus (Jansen et al., 2018) that introduced this as a community shared task (Jansen and Ustalov, 2019) , was dedicated to finding systems that generate explanations for answers to elementary science questions based on the MIER paradigm.",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "(Jansen et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 291,
"end": 317,
"text": "(Jansen and Ustalov, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The core task essentially entails two main steps: identification of relevant explanation facts from a given knowledge base, followed by ranking the This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http: //creativecommons.org/licenses/by/4.0/. a tree is a kind of a plant Which of the following helps the leaves break down after Answer : decomposers they have fallen off the tree ? a leaf is a part of a tree a plant is a kind of a organism if a leaf falls off of a tree then that leaf is dead decomposition is when a decomposer breaks down dead organisms Figure 1 : A elementary science question, its correct answer, and the ordered set of justification facts for the answer in the WorldTree corpus (Jansen et al., 2018) depicted as a subgraph of lexical matches.",
"cite_spans": [
{
"start": 753,
"end": 774,
"text": "(Jansen et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 609,
"end": 617,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "selected facts as a logically coherent paragraph. Figure 1 shows an example data instance from the WorldTree corpus (Jansen et al., 2018) that defines this task. It is basically a question, its correct answer, and a set of ordered facts that justify the correct answer choice. Depicted in the figure, as a subgraph, is a crucial characteristic feature of the data: that there are lexical overlaps between the question, the correct answer, and the explanation facts. In this respect, however, there are two notable caveats: 1) distractors-the lexical overlaps can also exist with irrelevant facts to the QA. E.g., given the KB fact: a decomposer is usually a bacterium or fungus, it has a lexical match to the answer, but it is not relevant to the explanation. Similarly, at least 13 other such matching irrelevant facts can be found in the WorldTree corpus (2018) knowledge base. And 2) multi-hop inference of valid explanation facts-not all the relevant explanation facts have a direct lexical match to the QA pair, some of the facts are lexically connected to the other valid explanation facts. E.g., the fact a plant is a kind of an organism has no lexical relation to the question or to the answer, but it does to the first explanation fact, hence this entails multihop inference from the QA to the explanation fact to another explanation fact. As such, selecting the set of relevant explanation facts, demands extra effort beyond direct lexical matches with the QA.",
"cite_spans": [
{
"start": 116,
"end": 137,
"text": "(Jansen et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 50,
"end": 58,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In light of these caveats in the data, the task presents itself as a fairly complex inference task, where traditional methods for QA that are based on simple fact matching have proved inadequate (Clark et al., 2013; Jansen et al., 2016) . Given the lexical match characteristic of the data, a slightly adapted application of tf-idf algorithm (Chia et al., 2019) , unsurprisingly demonstrates high performance near that of state-of-theart neural models.",
"cite_spans": [
{
"start": 195,
"end": 215,
"text": "(Clark et al., 2013;",
"ref_id": "BIBREF4"
},
{
"start": 216,
"end": 236,
"text": "Jansen et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 342,
"end": 361,
"text": "(Chia et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The MIER task defined in the WorldTree corpus (Jansen et al., 2018) was introduced for the first time as a shared task at TextGraph-13 (Jansen and Ustalov, 2019) . The state-of-the-art system (Das et al., 2019 ) employed a fine-tuned BERT-based model in an extended computationally intensive architecture. Generally, the performance of these fine-tuned transformer models depends on how related the data is to the original pretraining data and how best the input representation can be encoded. To this end, in this work, concentrating exclusively on enhancing the lexical match between a question, answer, and explanation, we encode a novel lightweight feature based on the psycholinguistic concept of focus words that has been defined by Brysbaert et al. Loosely, a focus word can be defined as a word which is not too tangible to be experienced directly by the five natural senses (i.e., smell, touch, sight, taste, and hearing), while as well not too abstract (e.g., acquirable) that the meaning may not be illustrated without using other words. From Figure 1 , as an example, the focus words are break down, fall, decompose, organism, dead. Inspired by (Jansen et al., 2017) , we demonstrate for the first time the application of focus words in the context of contemporary neural-based transformer models for the task of explanation generation. We observe that employing focus words in neural-based models enhances the lexical attention capability within transformer-based BERT models and demonstrates an improvement on vanilla BERT models. In fact, among all systems for the task, we obtain the highest scores, second only to the computationally intensive system by Das et al. Thus, our successful application of focus words in elementary science explanation generation demonstrates a poignant application of a vital psycholinguistic feature in the context of a contemporary problem in Artificial Intelligence.",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Jansen et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 135,
"end": 161,
"text": "(Jansen and Ustalov, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 192,
"end": 209,
"text": "(Das et al., 2019",
"ref_id": "BIBREF5"
},
{
"start": 1157,
"end": 1178,
"text": "(Jansen et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 1054,
"end": 1062,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our experiments, we examine two main research questions. The first assesses the optimal training experimental setting of the WorldTree corpus (2018). Specifically, RQ1: how does the proportion of negative training examples impact finetuning model performance? The second directly assesses the impact of our focus word feature. RQ2: what is the impact of the novel focus word feature on explanation generation in an optimal fine-tuned model? The rest of the paper is structured as follows. We define our problem in Section 2, followed by a description of the related work in Section 3. Section 4 discusses our approach, with evaluation results presented in Section 5. We conclude in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a question q = {w 1 , w 2 , .., w |q| }, its correct answer a = {w 1 , w 2 , ..., w |a| }, and a set of explanation facts E s.t. every e \u2208 E = {w 1 , w 2 , ..., w |e| } where w i are words \u2208 V for some vocabulary V . Following the definition for the TextGraphs-13 MIER task (Jansen and Ustalov, 2019) , the aim is to obtain, for every question and its correct answer, an ordered list of a set of facts that are coherent in discourse from a knowledge base of facts. By definition, for a question-correct answer pair (q, a), there exists a set of ordered explanation facts R q,a \u2286 E called the relevant set. For each (q, a) pair, the task aims to generate an ordered list of all the explanation facts in the knowledge base E o such that \u2200e o , e \u2208 E : e o \u2208 R q,a \u2227 e / \u2208 R q,a , rank(e o , E o ) < rank(e, E o ). We define, for any given (q, a) pair the ordered list as E o q,a = Reorder({(e k , \u03b3 k ) | e k \u2208 E}) where \u03b3 k is an associated relevance score obtained by predicting a proximity value \u03a6(q, a, e k , \u03b8). The Reorder function therefore ranks the values e k using the proximity score \u03b3 k , where the result is a ranked list of all explanations in which the facts with higher \u03b3 k scores are ranked higher. \u03a6 is a regression function and \u03b8 represents the transformer model hyperparameters.",
"cite_spans": [
{
"start": 280,
"end": 306,
"text": "(Jansen and Ustalov, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2"
},
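To make the ranking formulation above concrete, the following is a minimal sketch (not from the paper) of the Reorder step: every fact in the knowledge base is scored against a (q, a) pair by some proximity function and the facts are sorted by the resulting γ scores. The score_fn callable stands in for the fine-tuned model Φ and is an assumption of this sketch.

```python
# Minimal sketch of the Reorder step defined above (not from the paper).
# `score_fn` stands in for the proximity function Phi(q, a, e, theta),
# e.g. a fine-tuned regression model returning a relevance score gamma.
from typing import Callable, List, Tuple

def reorder(question: str, answer: str, facts: List[str],
            score_fn: Callable[[str, str, str], float]) -> List[Tuple[str, float]]:
    """Pair every fact with its gamma score and sort, highest score first."""
    scored = [(fact, score_fn(question, answer, fact)) for fact in facts]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```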
{
"text": "As alluded to in the Introduction, we induce novel focus word features from both the question and the answer, and the explanation facts. Adapted from Brysbaert et al., we deem as focus words v \u2208 V a word with an annotated psycholinguistic concreteness score between 3.0 and 4.2, i.e. one relegated as somewhere in between an abstract and concrete concept word which is relevant in elementary science since they often discuss phenomenon such as \"evaporation,\" \"dead,\" \"break down,\" etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2"
},
{
"text": "3 Related Work (Jansen et al., 2017) attempted to jointly solve question answering as a consequence of explanation generation. They first identified the question focus words using a predetermined range of psycholinguistic concreteness scores (Brysbaert et al., 2014) . Then they generate answer justifications by aggregating multiple facts from external knowledge sources (via constructs called text aggregation graphs). We leverage these concreteness scores, specifically the range between 3.0 and 4.2 that define focus words, as feature labels for the elementary science QA focus words in a transformer BERT model.",
"cite_spans": [
{
"start": 15,
"end": 36,
"text": "(Jansen et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 242,
"end": 266,
"text": "(Brysbaert et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2"
},
{
"text": "The TextGraph-13 MIER Shared Task saw two flavors of approaches to the task. The traditional approach of using hand-crafted linguistic features in an SVM ranker (D'Souza et al., 2019) and with reranking rules to correct obvious prediction errors. And, on the other hand, the most recent BERTbased rerankers over heuristically ranked data in a first stage. Banerjee tested initial ranking using two different transformer models: BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019) , and observe that including parts of gold explanations with question text when training for relevance as additional context offers performance improvement. Their approach included reranking the top 15 ranked facts via cosine similarity. Chia et al. explore an iterative tf-idf to recursively refine the results and achieve significant improvements on a baseline non-optimized tf-idf. In addition, they employ the results of this process in a BERT-based re-ranker to rank the top 64 candidates. The top-ranked system by Das et al. used fine-tuned BERT both for the initial step and the reranking. Where the first BERT model is fine-tuned on the whole set of facts in the knowledge base, the second BERT model is fine-tuned as a path ranking model. In this latter case, a BERT model is trained with chains of valid multi-hop facts from the top 25 candidates. Computing chains of multi-hop facts was a brute-force computationally exhaustive process which is not practically viable as noted by the authors. We also include results for a BERT model trained purely on just the focus words of the question/answer pairs, and the explanations. This model obtains no signals at all from the data, an indication that focus words are best used as extra signals to the data as opposed to being utilized as standalone data by themselves.",
"cite_spans": [
{
"start": 161,
"end": 183,
"text": "(D'Souza et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 433,
"end": 454,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 465,
"end": 484,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2"
},
{
"text": "Our approach is illustrated in Figure 2 and is described next.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "4"
},
{
"text": "Word concreteness ratings coming from research in psycholinguistics (Brysbaert et al., 2014) forms a good source of information to identify whether a word in a sentence reflects an abstract or a concrete real-world concept. In prior work, Jansen et al. were the first to employ the word concreteness scores to identify focus words in elementary science QA as a linking signal with relevant explanation facts. In their work, the focus words were employed to help aggregate related explanation facts, whereby the identified focus words were considered highly relevant in finding the answer to a question, hence significant for connecting justification sentences together. We borrow this insight and apply concreteness scores during finetuning BERT which we employ as a reranker (described in the next subsection). The raw annotated data with the concreteness scores is a list of 40,000 lemmas from common English. 1 As mentioned earlier, focus words are those with concreteness scores between 3.0 and 4.2 a range defined in (Berant and Liang, 2014) which reflect the degree to which a word is a focus word with abstract words and concrete words being on the extreme ends of the words spectrum. In the context of our problem domain, i.e. elementary science, we have identified that the most relevant content terms fall in the conceptual spectrum of focus words. For example the focus words measure/measurement, eat/eating, evaporate/evaporation are words that describe the relevant concepts in elementary science.",
"cite_spans": [
{
"start": 68,
"end": 92,
"text": "(Brysbaert et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Novel Focus Words Feature",
"sec_num": "4.1"
},
{
"text": "We preprocess the text using the spaCy 2 NLP toolkit for tokenization and lemmatization before retrieving concreteness scores (Brysbaert et al., 2014) for the words from the dictionary.",
"cite_spans": [
{
"start": 126,
"end": 150,
"text": "(Brysbaert et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Novel Focus Words Feature",
"sec_num": "4.1"
},
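As an illustration of the focus-word extraction described above, the sketch below lemmatizes text with spaCy, looks each lemma up in the Brysbaert et al. (2014) concreteness norms, and keeps words whose score falls in the 3.0-4.2 range. The column names of the ratings file and the use of the small English spaCy model are assumptions, not details given in the paper.

```python
# Hypothetical sketch of focus-word extraction: lemmatize with spaCy, look up
# Brysbaert et al. (2014) concreteness ratings, keep lemmas scored 3.0-4.2.
# The "Word"/"Conc.M" column names are an assumption about the ratings file.
import csv
import spacy

nlp = spacy.load("en_core_web_sm")

def load_concreteness(path: str) -> dict:
    with open(path, newline="", encoding="utf-8") as f:
        return {row["Word"]: float(row["Conc.M"]) for row in csv.DictReader(f, delimiter="\t")}

def focus_words(text: str, ratings: dict, lo: float = 3.0, hi: float = 4.2) -> list:
    # Lemmatization lets inflected forms such as "fallen" match the lemma "fall".
    return [tok.lemma_ for tok in nlp(text)
            if lo <= ratings.get(tok.lemma_.lower(), -1.0) <= hi]
```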
{
"text": "Words We utilize the pretrained BERT (Devlin et al., 2018) model and fine tune it on the sentence pair scoring task with a regression function to obtain This is then used as input to the model which learns representations for both the (q, a) and explanation fact text fragments, and the focus word tokens. Our model architecture in Figure 2 depicts how the input is handled at the embeddings layer. For instance, to obtain a representation for a focus word at position i in the input, from the (q, a) side: the word embedding E qa i -layer 3, segment embedding E QA -layer 2, and the position embedding E pos i -layer 1, are summed up into a single embedding vector. This output is then passed to the bidirectional transformer layer and finally through a regression layer to produce the score \u03b3 for the input explanation fact. Finally, all facts are sorted by \u03b3 scores in descending order.",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 332,
"end": 340,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Finetuning BERT Ranker with Focus",
"sec_num": "4.2"
},
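A minimal sketch of the input layout described above is given below, assuming the focus_words helper sketched earlier. Registering [FOC] as an additional special token (and then resizing the model's token embeddings to match) is our assumption about how such a token would be handled in practice; the paper does not spell out this detail.

```python
# Sketch of the [CLS]/[SEP]/[FOC] input layout described above; assumes the
# focus_words helper sketched earlier. Registering [FOC] as a special token
# is an assumption (the model's embedding matrix must then be resized).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["[FOC]"]})

def build_input(question: str, answer: str, fact: str, ratings: dict) -> str:
    qa = f"{question} {answer}"
    qa_focus = " ".join(focus_words(qa, ratings))
    fact_focus = " ".join(focus_words(fact, ratings))
    # [CLS] question+answer [FOC] qa focus words [SEP] fact [FOC] fact focus words [SEP]
    return f"[CLS] {qa} [FOC] {qa_focus} [SEP] {fact} [FOC] {fact_focus} [SEP]"
```

Since the markers are already written into the string, the tokenizer's automatic insertion of special tokens would be disabled when this text is later encoded.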
{
"text": "Our BERT model is initialized using publicly available weights from the pretrained BERT BASE model available in the Python package Pytorch-Transformers 3 . We use the default learning rate of 2e-5, a batch size of 32 and maximum sequence length of 512. The batch size and sequence length are unchanged for training and testing. The model was fine-tuned for 3 epochs using the Adam optimizer (Kingma and Ba, 2014).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Hyperparameters",
"sec_num": "4.2.1"
},
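For reference, a minimal sketch of the stated training configuration follows. Using BertForSequenceClassification with a single output as the regression head, and a plain MSE loss against gold relevance targets, are assumptions of this sketch rather than details reported in the paper.

```python
# Sketch of the reported hyperparameters: lr 2e-5, batch size 32, max length
# 512, 3 epochs, Adam. The regression head (num_labels=1) and MSE loss are
# assumptions; the paper only states that a regression function is used.
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
EPOCHS, BATCH_SIZE, MAX_LEN = 3, 32, 512

def train(loader):
    model.train()
    for _ in range(EPOCHS):
        for input_ids, attention_mask, labels in loader:  # labels are relevance targets
            optimizer.zero_grad()
            out = model(input_ids=input_ids, attention_mask=attention_mask)
            loss = torch.nn.functional.mse_loss(out.logits.squeeze(-1), labels.float())
            loss.backward()
            optimizer.step()
```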
{
"text": "To develop and evaluate our approach, we use the TextGraphs-13 MIER Shared Task (Jansen and Ustalov, 2019) dataset and evaluation scripts, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Results",
"sec_num": "5"
},
{
"text": "Dataset. The TextGraph-13 MIER task used the WorldTree corpus (Jansen et al., 2018) consisting of 1,190, 264, and 1,247 training, development, and test set QA instances additionally annotated with explanations, comprising anywhere between 1 to 23 facts. The QA part of the dataset is a multiplechoice dataset, therefore, each question has upto 5 answer choices of which the correct answer is already known. A set of 4,789 candidate facts was additionally provided as the knowledge base.",
"cite_spans": [
{
"start": 62,
"end": 83,
"text": "(Jansen et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "Evaluation Metrics. The shared task evaluation script employed the mean Average Precision (mAP ) metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "To address RQ1, we perform experiments with different numbers of negative examples in the training set, starting with the whole dataset containing \u223c4,770 negative explanation facts per (q, a).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5.2"
},
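The sketch below illustrates one way the training pairs with a capped number of negative facts per (q, a) could be assembled; random sampling of the negatives and the binary relevance targets are assumptions, since the paper only varies how many negatives are used.

```python
# Hypothetical sketch: cap the number of negative explanation facts paired
# with each (q, a). Random sampling is an assumption; the paper only varies
# how many negatives are used per question-answer pair.
import random

def make_training_pairs(qa, gold_facts, kb_facts, n_negatives=900, seed=13):
    rng = random.Random(seed)
    gold = set(gold_facts)
    negatives = [f for f in kb_facts if f not in gold]
    sampled = rng.sample(negatives, min(n_negatives, len(negatives)))
    # Target 1.0 for annotated explanation facts, 0.0 for sampled negatives.
    return [(qa, f, 1.0) for f in gold_facts] + [(qa, f, 0.0) for f in sampled]
```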
{
"text": "Test BERT Re-ranker + inference chains (Das et al., 2019) 58.5 56.3 BERT Re-ranker + Iterated TF-IDF (Chia et al., 2019) 50.9 47.7 Iterated TF-IDF (Chia et al., 2019) 49.7 45.8 Optimized TF-IDF (Chia et al., 2019) 45.8 42.7 BERT iterative re-ranker (Banerjee, 2019) 42.3 41.3 Rules + Feature-rich SVM Rank (D'Souza et al., 2019) 44.4 39.4 Generic Feature-rich SVM Rank (D'Souza et al., 2019) 37.1 34.1 TF-IDF Baseline + SVM Rank (Jansen Note, by the whole dataset, we mean all the explanation facts in the knowledge base that are not annotated as valid facts for a given (q, a) instance. Table 2 shows that too many negative examples for training had a negative impact. The configuration with (\u223c4770) refers to the Vanilla BERT model trained on each question-answer (q, a) paired with all the explanation facts. We reached an equilibrium between 600 and 900 negative explanation facts per (q, a). Thus, RQ1 investigated obtaining an optimally trained model given the WorldTree corpus (2018) as input which we found at 900 explanation facts. Table 1 shows the performance of our optimally trained BERT model with and without focus words for the MIER task. Addressing RQ2, we find that the focus tokens induces a performance improvement above 1% mAP . Overall, our model outperforms eight of the nine reference systems. It is second only to a more computationally intensive model (Das et al., 2019) where comparatively ours is significantly simpler, thereby practically viable. Separately, a model trained only on focus words from the (q, a) and explanation facts, themselves, do not provide any substantial signals to train a useful model (see row \"BERT + Pure Focus Words + Optimised Neg Facts\" row). This affirms that the original sentence provide necessary training signals which can be further accentuated with the focus word features.",
"cite_spans": [
{
"start": 39,
"end": 57,
"text": "(Das et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 101,
"end": 120,
"text": "(Chia et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 147,
"end": 166,
"text": "(Chia et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 194,
"end": 213,
"text": "(Chia et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 249,
"end": 265,
"text": "(Banerjee, 2019)",
"ref_id": "BIBREF0"
},
{
"start": 301,
"end": 328,
"text": "Rank (D'Souza et al., 2019)",
"ref_id": null
},
{
"start": 364,
"end": 391,
"text": "Rank (D'Souza et al., 2019)",
"ref_id": null
},
{
"start": 1378,
"end": 1396,
"text": "(Das et al., 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 429,
"end": 436,
"text": "(Jansen",
"ref_id": null
},
{
"start": 588,
"end": 595,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1041,
"end": 1048,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach mAP Dev",
"sec_num": null
},
{
"text": "In this paper, we empirically determine that the number of negative examples in training has an impact on the fine-tuning process for the explanation regeneration task. We have presented a lightweight, nonetheless, effective solution to the problem of explanation regeneration in elementary science QA. Staying on course with the current trend of investigating neural models, we implement a BERT-based model with an additional linguistic focus words feature. Thereby with our new feature we tap deeper into the nature of data in terms of its linguistic match characteristic. We obtained a considerable improvement in task performance. Subsequently, since focus words have proven effective in our experiments, by their nature we hypothesize that attention-based models are a promising future direction for this task. In this work, we developed our system based on the MIER TextGraph-13 Shared Task (Jansen and Ustalov, 2019) definition for explanation generation, in the context of which we utilize only the correct answer for ranking explanation facts. Toward an end-to-end model, as a first step, we plan to explore the training of an optimal model with all answer choices; following which, we plan to jointly model the question answering process together with explanation generation in a feedback loop such that both tasks mutually improve each other (Pirtoaca et al., 2019), except we will test our system for elementary science QA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "http://crr.ugent.be/archives/1330 2 Available from https://spacy.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Accessible athttps://github.com/ huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Asu at textgraphs 2019 shared task: Explanation regeneration using language models and iterative re-ranking",
"authors": [
{
"first": "Pratyay",
"middle": [],
"last": "Banerjee",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)",
"volume": "",
"issue": "",
"pages": "78--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pratyay Banerjee. 2019. Asu at textgraphs 2019 shared task: Explanation regeneration using language mod- els and iterative re-ranking. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13), pages 78-84.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantic parsing via paraphrasing",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1415--1425",
"other_ids": {
"DOI": [
"10.3115/v1/P14-1133"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant and Percy Liang. 2014. Semantic pars- ing via paraphrasing. In Proceedings of the 52nd An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415- 1425, Baltimore, Maryland. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods",
"authors": [
{
"first": "M",
"middle": [],
"last": "Brysbaert",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Warriner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kuperman",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "46",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Brysbaert, AB Warriner, and V Kuperman. 2014. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods, 46(3):904.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Red dragon AI at TextGraphs 2019 shared task: Language model assisted explanation generation",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Yew",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Chia",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Witteveen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Andrews",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)",
"volume": "",
"issue": "",
"pages": "85--89",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5311"
]
},
"num": null,
"urls": [],
"raw_text": "Yew Ken Chia, Sam Witteveen, and Martin Andrews. 2019. Red dragon AI at TextGraphs 2019 shared task: Language model assisted explanation genera- tion. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Pro- cessing (TextGraphs-13), pages 85-89, Hong Kong. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A study of the knowledge base requirements for passing an elementary science test",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 workshop on Automated knowledge base construction",
"volume": "",
"issue": "",
"pages": "37--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Clark, Philip Harrison, and Niranjan Balasubra- manian. 2013. A study of the knowledge base re- quirements for passing an elementary science test. In Proceedings of the 2013 workshop on Automated knowledge base construction, pages 37-42. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Chains-of-reasoning at TextGraphs 2019 shared task: Reasoning over chains of facts for explainable multi-hop inference",
"authors": [
{
"first": "Rajarshi",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ameya",
"middle": [],
"last": "Godbole",
"suffix": ""
},
{
"first": "Manzil",
"middle": [],
"last": "Zaheer",
"suffix": ""
},
{
"first": "Shehzaad",
"middle": [],
"last": "Dhuliawala",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)",
"volume": "",
"issue": "",
"pages": "101--117",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5313"
]
},
"num": null,
"urls": [],
"raw_text": "Rajarshi Das, Ameya Godbole, Manzil Zaheer, She- hzaad Dhuliawala, and Andrew McCallum. 2019. Chains-of-reasoning at TextGraphs 2019 shared task: Reasoning over chains of facts for explainable multi-hop inference. In Proceedings of the Thir- teenth Workshop on Graph-Based Methods for Nat- ural Language Processing (TextGraphs-13), pages 101-117, Hong Kong. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Team SVMrank: Leveraging featurerich support vector machines for ranking explanations to elementary science questions",
"authors": [
{
"first": "D'",
"middle": [],
"last": "Jennifer",
"suffix": ""
},
{
"first": "Isaiah",
"middle": [
"Onando"
],
"last": "Souza",
"suffix": ""
},
{
"first": "'",
"middle": [],
"last": "Mulang",
"suffix": ""
},
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)",
"volume": "",
"issue": "",
"pages": "90--100",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5312"
]
},
"num": null,
"urls": [],
"raw_text": "Jennifer D'Souza, Isaiah Onando Mulang', and S\u00f6ren Auer. 2019. Team SVMrank: Leveraging feature- rich support vector machines for ranking explana- tions to elementary science questions. In Pro- ceedings of the Thirteenth Workshop on Graph- Based Methods for Natural Language Processing (TextGraphs-13), pages 90-100, Hong Kong. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "What's in an explanation? characterizing knowledge and inference requirements for elementary science exams",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Niranjan",
"middle": [],
"last": "Balasubramanian",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2956--2965",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Jansen, Niranjan Balasubramanian, Mihai Sur- deanu, and Peter Clark. 2016. What's in an expla- nation? characterizing knowledge and inference re- quirements for elementary science exams. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 2956-2965, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Framing qa as building and ranking intersentence answer justifications",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Sharp",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "2",
"pages": "407--449",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Jansen, Rebecca Sharp, Mihai Surdeanu, and Pe- ter Clark. 2017. Framing qa as building and ranking intersentence answer justifications. Computational Linguistics, 43(2):407-449.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TextGraphs 2019 Shared Task on Multi-Hop Inference for Explanation Regeneration",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Ustalov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Jansen and Dmitry Ustalov. 2019. TextGraphs 2019 Shared Task on Multi-Hop Inference for Ex- planation Regeneration. In Proceedings of the Thir- teenth Workshop on Graph-Based Methods for Nat- ural Language Processing (TextGraphs-13), Hong Kong. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Worldtree: A corpus of explanation graphs for elementary science questions supporting multi-hop inference",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Wainwright",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Marmorstein",
"suffix": ""
},
{
"first": "Clayton",
"middle": [
"T"
],
"last": "Morrison",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Jansen, Elizabeth Wainwright, Steven Mar- morstein, and Clayton T. Morrison. 2018. Worldtree: A corpus of explanation graphs for elementary science questions supporting multi-hop inference. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "the 3rd International Conference for Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. Cite arxiv:1412.6980Comment: Published as a confer- ence paper at the 3rd International Conference for Learning Representations, San Diego, 2015.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Answering questions by learning to rank-learning to rank by answering questions",
"authors": [
{
"first": "George",
"middle": [],
"last": "Sebastian Pirtoaca",
"suffix": ""
},
{
"first": "Traian",
"middle": [],
"last": "Rebedea",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ruseti",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2531--2540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Sebastian Pirtoaca, Traian Rebedea, and Ste- fan Ruseti. 2019. Answering questions by learning to rank-learning to rank by answering questions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2531- 2540.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "of the following helps the leaves break down after they have fallen"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Fine-tuning of Transformers with Focus Word Features. Approach: The q, a are paired with each explanation in the set. This is then passed through the Focus word extractor that identifies the focus words based on concreteness scores. Input representation layers: 1) Position Embeddings; 2) Segment Embeddings; and 3) Word Embeddings.ranking scores. Our input to the BERT model is encoded as follows: the special [CLS] token is appended to the beginning of every data instance; a special token [SEP ] is used to separate the (q, a) pair from the explanation fact and is appended to the end of the explanation fact as well; additionally, to encode our focus word feature, we introduce a new special token [F OC]. Focus words are identified from the (q, a) and explanation facts, and they are listed following the text with the [F OC] separator. As an example of our input, consider[CLS] Which of the following helps leaves break down after they have fallen off a tree decomposers [FOC] break fall decompose [SEP] decomposition is when a decomposer breaks down dead organisms [FOC] decomposition decompose break down organism [SEP]."
},
"TABREF1": {
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null,
"text": "Mean Average Precision (mAP) percentage scores of finetuning BERT over varying negative training examples"
}
}
}
}