|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T16:21:51.294755Z" |
|
}, |
|
"title": "Enhancing Question Answering by Injecting Ontological Knowledge through Regularization", |
|
"authors": [ |
|
{ |
|
"first": "Travis", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Goodwin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "U.S. National Library of Medicine National Institutes of Health", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Dina", |
|
"middle": [], |
|
"last": "Demner-Fushman", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "U.S. National Library of Medicine National Institutes of Health", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Deep neural networks have demonstrated high performance on many natural language processing (NLP) tasks that can be answered directly from text, and have struggled to solve NLP tasks requiring external (e.g., world) knowledge. In this paper, we present OSCR (Ontology-based Semantic Composition Regularization), a method for injecting task-agnostic knowledge from an Ontology or knowledge graph into a neural network during pre-training. We evaluated the performance of BERT pretrained on Wikipedia with and without OSCR by measuring the performance when finetuning on two question answering tasks involving world knowledge and causal reasoning and one requiring domain (healthcare) knowledge and obtained 33.3 %, 18.6 %, and 4 % improved accuracy compared to pre-training BERT without OSCR.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Deep neural networks have demonstrated high performance on many natural language processing (NLP) tasks that can be answered directly from text, and have struggled to solve NLP tasks requiring external (e.g., world) knowledge. In this paper, we present OSCR (Ontology-based Semantic Composition Regularization), a method for injecting task-agnostic knowledge from an Ontology or knowledge graph into a neural network during pre-training. We evaluated the performance of BERT pretrained on Wikipedia with and without OSCR by measuring the performance when finetuning on two question answering tasks involving world knowledge and causal reasoning and one requiring domain (healthcare) knowledge and obtained 33.3 %, 18.6 %, and 4 % improved accuracy compared to pre-training BERT without OSCR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "\"The detective flashed his badge to the police officer.\" The nearly effortless ease at which we, as humans, can understand this simple statement belies the depth of semantic knowledge needed for its understanding: What is a detective? What is a police officer? What is a badge? What does it mean to flash a badge? Why would the detective need to flash his badge to the police officer? Understanding this sentence requires knowing the answer to all these questions and relies on the reader's knowledge about this world: a detective investigates crime, police officers restrict access to the crime scene, and a badge can be a symbol of authority.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Problem", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As shown in Figure 1 , suppose we were interested in determining whether, upon showing the policeman his badge, it is more plausible that the detective would be let into the crime scene or that the police officer would confiscate the detective's badge? To answer this question, we would need Premise: The detective flashed his badge to the police officer.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 20, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Problem", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A: The police officer confiscated the detective's badge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Question: What is the most likely effect?", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The police officer let the detective enter the crime scene. to leverage our accumulated expectations about the world: although both scenarios are certainly possible, our accumulated expectations about the world suggest it would be very extraordinary for the police officer to confiscate the detective's badge rather than allow him to enter the crime scene.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Evidence of Grice's Maxim of Quantity (Grice, 1975) , this shared knowledge of the world is rarely explicitly stated in text. Fortunately, some of this knowledge can be extracted from Ontologies and knowledge bases. For example ConceptNet (Speer et al., 2017) indicates that a detective is a T O police officer, and is C O finding evidence; that evidence can be L A a crime scene; and that a badge is a T O authority symbol.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 51, |
|
"text": "(Grice, 1975)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 239, |
|
"end": 259, |
|
"text": "(Speer et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While neural networks have been shown to obtain state-of-the-art performance on many types of question answering and reasoning tasks from raw data (Devlin et al., 2018; Rajpurkar et al., 2016; Manning, 2015) , there has been less investigation into how to inject ontological knowledge into deep learning models, with most prior attempts embedding ontological information outside of the network itself (Wang et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 168, |
|
"text": "(Devlin et al., 2018;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 169, |
|
"end": 192, |
|
"text": "Rajpurkar et al., 2016;", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 193, |
|
"end": 207, |
|
"text": "Manning, 2015)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 401, |
|
"end": 420, |
|
"text": "(Wang et al., 2017)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we present a pre-training regular-ization technique we call OSCR (Ontology-based Semantic Composition Regularization), which is capable of injecting world knowledge and ontological relationships into a deep neural network. We show that incorporating OSCR into BERT's pre-training injects sufficient world knowledge to improve fine-tuned performance in three question answering datasets. The main contributions of this work are: 1. OSCR, a regularization method for injecting ontological information and semantic composition into deep learning models; 2. Empirical evidence showing the impact of OSCR on two tasks requiring world knowledge, causal reasoning, and discourse understanding even with as few as 500 training example, as well as a task requiring medical domain knowledge; and 3. Experimental results showing that the same technique used to infer background knowledge about the world can also capture domainspecific knowledge in the case of medical question answering; and 4. An open-source implementation of OSCR and BERT supporting mixed-precision training, non-TPU model distribution, and enhanced numerical stability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The idea of training a model on a related problem before training on the problem of interest has been shown to be effective for many natural language processing tasks (Dai and Le, 2015; Peters et al., 2017; Howard and Ruder, 2018) . More recent uses of pre-training adapt transfer learning by first training a network on a language modeling task and then fine-tuning (retraining) that model for a supervised problem of interest (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018) . Pre-training, in this way, has the advantage that the model can build on previous parameters to reduce the amount of information it needs to learn for a specific downstream task. Conceptually, the model can be viewed as applying what it has already learned from the language model task when learning the downstream task. BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained neural network that has been shown to obtain state-of-the-art results on eleven natural language processing tasks after fine-tuning (Devlin et al., 2018) . BERT relies on two pre-training objectives: (1) a variant of language modeling called Cloze (originally proposed in Taylor 1953) where-in 20 % of the words in a sentence are masked, and the model must unmask them and (2) a next sentence prediction task where-in the model is given two pairs of sentences and must decide if the second sentence immediately follows the first. Despite its strong empirical performance, the architecture of BERT is relatively simple: four layers of transformers (Vaswani et al., 2017) are stacked to process each sentence.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 185, |
|
"text": "(Dai and Le, 2015;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 186, |
|
"end": 206, |
|
"text": "Peters et al., 2017;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 207, |
|
"end": 230, |
|
"text": "Howard and Ruder, 2018)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 447, |
|
"end": 470, |
|
"text": "Howard and Ruder, 2018;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 492, |
|
"text": "Radford et al., 2018)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1028, |
|
"end": 1049, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1168, |
|
"end": 1180, |
|
"text": "Taylor 1953)", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 1543, |
|
"end": 1565, |
|
"text": "(Vaswani et al., 2017)", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
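To make the two pre-training objectives above concrete, here is a minimal sketch of how a single Cloze / next-sentence example could be prepared; the helper name, the plain-Python token lists, and the uniform masking policy are illustrative assumptions rather than the authors' data pipeline (the 20 % rate follows the description above; canonical BERT uses 15 %).

```python
import random

CLS, SEP, MASK = "[CLS]", "[SEP]", "[MASK]"

def make_pretraining_example(sent_a, sent_b, is_next, mask_rate=0.20, rng=random):
    """Build one BERT-style pre-training example from two tokenized sentences."""
    tokens = [CLS] + list(sent_a) + [SEP] + list(sent_b) + [SEP]
    cloze_labels = [None] * len(tokens)            # None = position is not predicted
    for i, tok in enumerate(tokens):
        if tok in (CLS, SEP):
            continue
        if rng.random() < mask_rate:               # hide the token; the model must unmask it
            cloze_labels[i] = tok
            tokens[i] = MASK
    return {"tokens": tokens,                      # masked input sequence
            "cloze_labels": cloze_labels,          # targets for the Cloze objective
            "next_sentence_label": int(is_next)}   # 1 if sent_b really follows sent_a
```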
|
{ |
|
"text": "In terms of injecting knowledge into pre-training, explored injecting entity information into BERT using multi-head attention. However, their approach requires explicitly indicating entity boundaries or relation constituents with special input tokens for down-stream fine-tuning. By contrast, OSCR requires no modification of input formats in the host network. explored modifying BERT's pre-training by masking entire entities and phrases extracted from external knowledge. Meanwhile, Xie et al. (2019) explored projecting propositional knowledge using Graph Convolutional Networks (GCNs). OSCR, instead, introduces a regularization term that can be added to any natural language pre-training objectives, without modifying the architecture of the network or the pre-training objectives themselves.", |
|
"cite_spans": [ |
|
{ |
|
"start": 485, |
|
"end": 502, |
|
"text": "Xie et al. (2019)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background and Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Incorporating OSCR into pre-training requires an embedded ontology (or knowledge) graph, and one or more natural language pre-training objectives to regularize -in our case, BERT's Cloze and nextsentence prediction tasks. These objectives, in turn, require a document collection.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Data", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "ConceptNet 5 is a semantic network containing relational knowledge contributed to Open Mind Common Sense (Singh et al., 2002) and to DB-Pedia (Auer et al., 2007) , as well as dictionary knowledge from Wiktionary, the Open Multilingual WordNet (Singh et al., 2002; Miller, 1995) , the high-level ontology from OpenCyc , and knowledge about word associations from \"Games with a Purpose\" (von Ahn, 2006) . In our experiments we used ConceptNet 5 as our ontology relying on an embedded representation of the ontology known as ConceptNet NumberBatch (Speer et al., 2017) , in which embeddings for all entities in ConceptNet were built using an ensemble of (a) data from Con-ceptNet, (b) word2vec (Mikolov et al., 2013) , (c) GloVe (Pennington et al., 2014) , and (d) OpenSubtitles 2016 using retrofitting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 125, |
|
"text": "(Singh et al., 2002)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 161, |
|
"text": "(Auer et al., 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 243, |
|
"end": 263, |
|
"text": "(Singh et al., 2002;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 277, |
|
"text": "Miller, 1995)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 400, |
|
"text": "(von Ahn, 2006)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 565, |
|
"text": "(Speer et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 691, |
|
"end": 713, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 751, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Ontology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Our text corpus was a 2019 dump of English Wikipedia articles with templates expanded as provided by Wikipedia's Cirrus search engine . Preprocessing relied on NLTK's Punkt sentence segmenter (Loper and Bird, 2002) , and the WordPiece subword tokenizer provided with BERT.", |
|
"cite_spans": [ |
|
{ |
|
"start": 192, |
|
"end": 214, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Documents", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Virtually all neural networks designed for natural language processing represent language as a sequence of words, subwords, or characters. By contrast, Ontologies and knowledge bases encode semantic information about entities, which may correspond to individual nouns (e.g., \"badge\") or multiword phrases (\"police officer\"). Consequently, injecting world and domain knowledge from a knowledge base into the network requires semantically decomposing the information about an entity into the supporting information about its constituent words. For example, injecting the semantics of \"Spanish Civil War\" into the network requires learning what information the word \"Spanish\" introduces to the nominal \"Civil War\" and what information \"Civil\" adds to the word \"War\". To do this, OSCR is implemented using a three-step approach illustrated in Figure 2 :", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 839, |
|
"end": 847, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Step 1. entities are recognized in a sentence using a Finite State Transducer (FST); Step 2. the sequence of subwords corresponding to each entity are semantically composed to produce an entity-level encoding; and Step 3. the average energy between the composed entity encoding and the pre-trained entity encoding from the ontology is used as a regularization term in the pre-training loss function. By training the model to compose sequences of subwords into entities, during back-propagation, the semantics of each entity are decomposed and http://opus.nlpl.eu/OpenSubtitles-v2016. php https://www.mediawiki.org/wiki/Help: CirrusSearch https://www.nltk.org/_modules/nltk/ tokenize/punkt.html injected into the network based on the neural activations associated with its constituent words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Approach", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We designed OSCR to require as few modifications to the underlying host network (e.g., BERT) as possible. We recognized entities during training and inference online by (1) tokenizing each entity in our ontology using the same tokenizer used to prepare the BERT pre-training data, and (2) compiling a Finite State Transducer to detect sequences of subword IDs corresponding to entities. The FST, illustrated in Figure 3 , allowed us to detect entities on-the-fly without hard coding a specific ontology and without inducing any discernible change in training or inference time. Although we did not explore it in this work, this potentially allows for multiple ontologies to be injected through OSCR during pre-training. In these experiments, due to the simplicity of ConceptNet entities, we relied on exact string matching to detect entities. Formally, let", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 411, |
|
"end": 419, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entity Detection", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "= 1 , 2 , \u2022 \u2022 \u2022 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Detection", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "represent the sequence of words in a sentence. The FST processes and returns three sequences:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Detection", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "1 , 2 , \u2022 \u2022 \u2022 , ; 1 , 2 , \u2022 \u2022 \u2022 , ; and 1 , 2 , \u2022 \u2022 \u2022 ,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Detection", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "representing the start offset, length, and the pretrained embedded representation of every mention of any entity in the Ontology.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Detection", |
|
"sec_num": "4.1" |
|
}, |
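As a rough sketch of the detection step described above, the code below indexes tokenized ontology entities in a dictionary-based trie (standing in for the compiled FST) and scans a sentence for mentions, returning the (start offset, length, embedding) triples; the function names, the greedy longest-match policy, and the trie representation are assumptions for illustration, not details of the released implementation.

```python
def build_entity_trie(entity_embeddings, tokenize):
    """Index each ontology entity by its subword-ID sequence."""
    trie = {}
    for entity, embedding in entity_embeddings.items():
        node = trie
        for token_id in tokenize(entity):
            node = node.setdefault(token_id, {})
        node["_emb"] = embedding                    # terminal marker: an entity ends here
    return trie

def detect_entities(token_ids, trie):
    """Return (start, length, embedding) for every entity mention in the sentence."""
    mentions = []
    for start in range(len(token_ids)):
        node, longest = trie, None
        for offset, token_id in enumerate(token_ids[start:]):
            if token_id not in node:
                break
            node = node[token_id]
            if "_emb" in node:                      # keep the longest match at this start
                longest = (start, offset + 1, node["_emb"])
        if longest is not None:
            mentions.append(longest)
    return mentions
```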
|
{ |
|
"text": "Entity Subsumption. When detecting entities, it is often the case that multiple entities may correspond to the same span of text. As illustrated in Figure 2 , the entity \"Spanish Civil War\" contains the subsumed entities \"Spanish\", \"Civil War\", \"Civil\", and \"War\". Likewise, because BERT masks 20 % of the words in each sentence, it is possible for entities to involve masked words. Note: including or excluding subsumed and de-masked entities (as illustrated in Figure 2 ) provided no discernible effect in our experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 156, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 463, |
|
"end": 471, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Entity Detection", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Entity Demasking. Because BERT masks tokens when pre-training, we evaluated the impact of (a) de-masking words before detecting entities and (b) ignoring all entity mentions involving masked words. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Entity Detection", |
|
"sec_num": "4.1" |
|
}, |
|
|
{ |
|
"text": "Civil War ( 5 = 6; 5 = 2) \u2020 Officially ( 8 = 9; 8 = 1) \u2020 Intervention ( 11 = 13; 11 = 1) \u2020 Spanish ( 3 = 5; l 3 = 1) \u2020 Civil ( 6 = 6; 6 = 1) \u2020 Non ( 9 = 12; 9 = 1) \u2020 \u2021 \u22ee Entity Detection ( \u00a74.1) Figure 2 : Architecture of OSCR when injecting ontology knowledge from ConceptNet into BERT where ' \u2020' indicates subsumed entities, ' \u2021' indicates de-masked entities, is the length of the input sentence, is the number of entities detected in the sentence, and is the number of entities with embeddings in ConceptNet. BERT is computationally expensive, we considered three computationally-efficient methods for composing words and subwords into entities.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 195, |
|
"end": 203, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Semantic Composition", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Recurrent Additive Networks (RANs) are a simplified alternative to LSTM-or GRU-based recurrent neural networks that use only additive connections between successive layers and have been shown to obtain similar performance with 38% fewer learnable parameters (Lee et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 276, |
|
"text": "(Lee et al., 2017)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Composition", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Given a sequence of words 1 , 2 , \u2022 \u2022 \u2022 , we use the following layers to accumulate information about how the semantics of each word in an entity contribute to the overall semantics of the entity:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Composition", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "= (1a) = ( [ \u22121 , ] + ) (1b) = [ \u22121 , ] + (1c) = \u2022 + \u2022 \u22121 (1d) = ( )", |
|
"eq_num": "(1e)" |
|
} |
|
], |
|
"section": "Semantic Composition", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "where [\u2022] represents vector concatenation, represents the content layer which encodes any new semantic information provided by word , \u2022 indicates an element-wise product, represents the input gate, represents the forget gate, represents the internal memories about the entity, and is the output layer encoding accumulated semantics about word . We define the composed entity + (i.e., the content vector of the RAN after processing the last token in the entity) for the sequence beginning with .", |
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 9, |
|
"text": "[\u2022]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Composition", |
|
"sec_num": "4.2" |
|
}, |
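The NumPy sketch below runs this recurrent-additive composition over one entity's subword vectors and returns the final content vector as the composed entity; the weight shapes, the sigmoid gate parameterization, and the choice of g = tanh are assumptions consistent with Equations 1a-1e as written here, not the authors' exact implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ran_compose(x_seq, W_c, W_i, W_f, b_i, b_f):
    """Compose an entity's subword embeddings x_1..x_L with a RAN cell.

    x_seq : (L, d) array of subword embeddings for one detected entity
    W_c   : (d, d) content projection; W_i, W_f : (2d, d) gate weights; b_i, b_f : (d,)
    Returns c, the internal memory after the last subword (the composed entity).
    """
    d = x_seq.shape[1]
    h, c = np.zeros(d), np.zeros(d)
    for x in x_seq:
        c_tilde = x @ W_c                           # (1a) content layer
        hx = np.concatenate([h, x])                 # [h_{t-1}, x_t]
        i = sigmoid(hx @ W_i + b_i)                 # (1b) input gate
        f = sigmoid(hx @ W_f + b_f)                 # (1c) forget gate
        c = i * c_tilde + f * c                     # (1d) additive memory update
        h = np.tanh(c)                              # (1e) output layer, assuming g = tanh
    return c
```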
|
{ |
|
"text": "To further reduce model complexity, we considered a second, simpler version of a RAN omits the content and output layers (i.e., Equations 1a and 1e) and Equation 1d is updated to depend on directly: = \u2022 + \u2022 \u22121 . As above, we define the composed entity + for the sequence of subwords beginning with .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Recurrent Additive Networks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Linear Interpolation Finally, we considered a third, even simpler form of semantic composition. Inspired by Goodwin and Harabagiu (2016) , we represented the semantics of an entity as an unordered linear combination of the semantics of its constituent words, i.e.:", |
|
"cite_spans": [ |
|
{ |
|
"start": 108, |
|
"end": 136, |
|
"text": "Goodwin and Harabagiu (2016)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Recurrent Additive Networks", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "+ +1 + \u2022 \u2022 \u2022 + + + \u2022 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Linear Recurrent Additive Networks", |
|
"sec_num": null |
|
}, |
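A minimal sketch of this unordered composition, assuming the subword vectors are simply summed and leaving any projection into the ontology's vector space to the regularization step of Section 4.3:

```python
import numpy as np

def linear_compose(x_seq):
    """Compose an entity as the (unordered) sum of its subword embeddings x_s..x_{s+l}."""
    return np.asarray(x_seq).sum(axis=0)
```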
|
{ |
|
"text": "We project the composed entities into the same vector space as the pretrained entity embeddings from the Ontology, and measure the average energy across all entities detected in the sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Energy Regularization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "R OSCR = 1 =1 + ,", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Energy Regularization", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "where is an energy function capturing the energy between the composed entity and the pretrained entity embedding . We considered three energy functions: (1) the Euclidean distance, (2) the absolute distance, and (3) the angular distance, which can handle negative values.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Energy Regularization", |
|
"sec_num": "4.3" |
|
}, |
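Putting the pieces together, the sketch below evaluates the regularization term of Equation 2 for one sentence and adds it to an existing pre-training loss; the linear projection (W, b), the Euclidean energy, and the unweighted addition to the loss are illustrative assumptions.

```python
import numpy as np

def oscr_regularizer(composed, ontology_embeddings, W, b):
    """Average energy between composed entities and their ontology embeddings (Eq. 2).

    composed            : (E, d) composed entity encodings from Section 4.2
    ontology_embeddings : (E, k) pre-trained NumberBatch embeddings for the same entities
    W, b                : projection into the ontology's vector space, shapes (d, k) and (k,)
    """
    projected = composed @ W + b
    energies = np.linalg.norm(projected - ontology_embeddings, axis=1)  # Euclidean energy
    return energies.mean()

def regularized_pretraining_loss(cloze_loss, next_sentence_loss, r_oscr, weight=1.0):
    # BERT's original objectives plus the OSCR term; the weight is an assumed knob.
    return cloze_loss + next_sentence_loss + weight * r_oscr
```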
|
{ |
|
"text": "Hyper-parameter Tuning For each fine-tuning task, we used a greedy approach to hyper-parameter tuning by incrementally and independently optimizing: batch size \u2208 {8, 16, 32}; initial learning rate \u2208 1 \u00d7 10 \u22125 , 2 \u00d7 10 \u22125 , 3 \u00d7 10 \u22125 ; whether to include subsumed entities \u2208 {yes, no}; and whether to include masked entities \u2208 {yes, no}. For CoPA, the Story Cloze task, and RQE, we found an optimal batch size of 16 and an optimal learning rate of 2 \u00d7 10 \u22125 . We also found that including subsumed entities and masked was optimal (at a net performance improvement of < 1% accuracy).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.1" |
|
}, |
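The greedy, one-dimension-at-a-time search described above can be sketched as follows; the function names and the evaluation callback are placeholders for whatever fine-tuning and validation routine is used.

```python
def greedy_tune(evaluate, search_space, defaults):
    """Tune hyper-parameters incrementally and independently.

    evaluate     : callable mapping a config dict to validation accuracy
    search_space : dict of {hyper-parameter name: candidate values}, in tuning order
    defaults     : starting configuration
    """
    config = dict(defaults)
    for name, candidates in search_space.items():
        scores = {value: evaluate({**config, name: value}) for value in candidates}
        config[name] = max(scores, key=scores.get)  # fix the best value, then move on
    return config

# Example search space mirroring the text:
# {"batch_size": [8, 16, 32], "learning_rate": [1e-5, 2e-5, 3e-5],
#  "include_subsumed": [True, False], "include_masked": [True, False]}
```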
|
{ |
|
"text": "We pretrained BERT using a 2019 Wikipedia dump formatted for Wikipedia's Cirrus search engine. Preprocessing relied on NLTK's Punkt sentence segmenter (Loper and Bird, 2002) , and the WordPiece subword tokenizer provided with BERT. We used the vocabulary from BERT base (not large) and a maximum sequence size of 384 subwords, training 64 000 steps, with an initial learning rate of 2 \u00d7 10 \u22125 , and 320 warm-up steps.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 173, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Pretraining", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used a modified version of BERT, allowing for mixed-precision training. This necessitated a number of minor changes to improve numerical stability around softmax operations. Training was performed using a single node with 4 Tesla P100s each (multiple variants of OSCAR were trained simultaneously using five such nodes at a time). Non-TPU multi-GPU support was added to BERT based on Horovod and relying on Open MPI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "BERT Modifications", |
|
"sec_num": null |
|
}, |
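As one generic example of the numerical-stability changes mentioned above, attention or output logits can be promoted to float32 before the softmax even when the rest of the model runs in float16; the NumPy snippet below illustrates the pattern and is not the project's actual TensorFlow code.

```python
import numpy as np

def stable_softmax_fp16(logits_fp16):
    """Softmax over half-precision logits, accumulated in float32 for stability."""
    logits = logits_fp16.astype(np.float32)
    logits -= logits.max(axis=-1, keepdims=True)    # shift to avoid overflow in exp
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs.astype(np.float16)                 # cast back to the model's precision
```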
|
{ |
|
"text": "We evaluated the impact of OSCR on three question answering tasks requiring world or domain knowledge and causal reasoning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Choice of Plausible Alternatives a SemEval 2012 shared task, (CoPA) presents 500 training and 500 testing sets of two-choice questions and https://www.mediawiki.org/wiki/Help: CirrusSearch https://www.nltk.org/_modules/nltk/ tokenize/punkt.html https://eng.uber.com/horovod/ Premise: Gina misplaced her phone at her grandparents. It wasn't anywhere in the living room. She realized she was in the car before. She grabbed her dad's keys and ran outside.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Ending A: She found her phone in the car.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Ending B: She didn't want her phone anymore. Consumer Health Question: Can sepsis be prevented. Can someone get this from a hospital?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
|
{ |
|
"text": "FAQ B: What is the economic cost of sepsis? requires to choose the most plausible cause or effect entailed by the premise, as illustrated in Figure 1 ( Roemmele et al., 2011) . The topics of these questions were drawn from two sources: (1) personal stories taken from a collection of blogs (Gordon and Swanson, 2009) ; and (2) subject terms from the Library of Congress Thesaurus for Graphic Materials, while the incorrect alternatives were created so as to penalize \"purely associative methods\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 174, |
|
"text": "Roemmele et al., 2011)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 290, |
|
"end": 316, |
|
"text": "(Gordon and Swanson, 2009)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 149, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The Story Cloze Test evaluates story understanding, story generation, and script learning and requires a system to choose the correct ending to a four-sentence story, as illustrated in Figure 4 ( Mostafazadeh et al., 2016) . In our experiments, we used only the 3,744 labeled stories. Table 1 presents the results of BERT when pretrained on Wikipedia with and without OSCR, the state-of-the-art, and the average performance of different semantic composition methods and energy functions when calculating OSCR.", |
|
"cite_spans": [ |
|
{ |
|
"start": 196, |
|
"end": 222, |
|
"text": "Mostafazadeh et al., 2016)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 185, |
|
"end": 193, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 292, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "It is clear from Table 1 that incorporating OSCR provided a significant improvement in accuracy for both common sense causal reasoning tasks, indicating that OSCR was able to inject useful world knowledge into the network. We also evaluated the impact of OSCR on the Stanford Question Answering Dataset (SQuAD), version 1.1, and observed no discernable change in performance (an Accuracy of 86.6 % without and 86.5 % with OSCR). The lack of impact of SQuAD is unsurprising, as the vast majority of SQuAD questions can be answered directly by surface-level information in the text, but it shows that injecting world knowledge with OSCR does not come at the expense of model performance for tasks that require little outside knowledge.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 24, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Impact of External Knowledge.", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "While less pronounced than the general domain, for the clinical domain, OSCR provided a modest improvement over standard BERT, and both improved over the state-of-the-art.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Impact of Domain Knowledge.", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We evaluated the impact of including subsumed entities when calculating OSCR and found it provided, on average, only a minor increase in accuracy (< 1 % average relative improvement) at a 10 % increase in total training time. Consequently, we recommend ignoring all subsumed entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Impact of Entity Masking Entity Subsumption", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Entity De-masking De-masking entities had little over-all impact on model performance (< 1% average relative improvement) and no discernible effect on training time. This may be explained by the fact that Wikipedia sentences are typically much longer than standard English sentences, so the likelihood of an important entity being masked is relatively small.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Impact of Entity Masking Entity Subsumption", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "When comparing semantic composition methods, the Linear method had the most consistent performance across both domains; the Recurrent Additive Network (RAN) obtained the lowest performance on the general domain and the highest performance on medical texts, while the Linear RAN exhibited the opposite behavior. While this suggests more complex domains require more complex representations of semantic composition, we recommend Linear composition as it exhibits consistent performance and requires 50% less training time than the RAN and 40% less than the Linear RAN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Role of Semantic Composition", |
|
"sec_num": "6.4" |
|
}, |
|
{ |
|
"text": "In terms of energy functions, the Euclidean distance was the most consistent, the Angular distance was the best for the Story Cloze and RQE tasks, and the Absolute difference was the best for CoPA. The Angular distance (being scale-invariant) is least affected by the number of subwords constituting an entity while the Absolute distance is most affected. Consequently, we believe the Absolute distance was only effective on the CoPA evaluation because the entities in CoPA are typically very short (single words or subwords). We recommend selecting the energy function based on the average length of entities in the fine-tuning tasks: Angular distance with long entities, Absolute distance with short entities, and Euclidean distance with varied entities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Impact of the Energy Functions", |
|
"sec_num": "6.5" |
|
}, |
|
{ |
|
"text": "Finally, we compared the impact of including and excluding subsumed and masked entities and found that neither resulted in any substantial change in model improvements (< 1 % change in accuracy), while ignored masked and subsumed entities lead to a 20 % average reduction in training time.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Impact of the Energy Functions", |
|
"sec_num": "6.5" |
|
}, |
|
{ |
|
"text": "In this study, we only considered ConceptNet as our ontology because we were primarily interested in in-jecting common-sense world knowledge. However, OSCR is not specific to any Ontology. Likewise, we considered only one type of pretrained entity embeddings: ConceptNet NumberBatch (Speer et al., 2017) , despite the availability of other, more sophisticated approaches for knowledge graph embedding including, TransE (Bordes et al., 2013) , TranR (Lin et al., 2015) , TransH (Wang et al., 2014) , RESCAL (Nickel et al., 2011) and OSRL (Xiong et al., 2018) . In future work, we hope to explore the impact of incorporating different Ontologies and knowledge graphs as well as alternative types of entity embeddings (Bordes et al., 2013; Lin et al., 2015; Wang et al., 2014; Nickel et al., 2011; Xiong et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 283, |
|
"end": 303, |
|
"text": "(Speer et al., 2017)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 419, |
|
"end": 440, |
|
"text": "(Bordes et al., 2013)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 449, |
|
"end": 467, |
|
"text": "(Lin et al., 2015)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 477, |
|
"end": 496, |
|
"text": "(Wang et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 527, |
|
"text": "(Nickel et al., 2011)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 537, |
|
"end": 557, |
|
"text": "(Xiong et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 736, |
|
"text": "(Bordes et al., 2013;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 737, |
|
"end": 754, |
|
"text": "Lin et al., 2015;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 755, |
|
"end": 773, |
|
"text": "Wang et al., 2014;", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 774, |
|
"end": 794, |
|
"text": "Nickel et al., 2011;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 795, |
|
"end": 814, |
|
"text": "Xiong et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Limitations and Future Work", |
|
"sec_num": "6.6" |
|
}, |
|
{ |
|
"text": "In this paper we presented OSCR (Ontology-based Semantic Composition Regularization), a learned regularization method for injecting task-agnostic knowledge from an Ontology or knowledge graph into a neural network during pretraining. We evaluated the impact of including OSCR when pretraining BERT with Wikipedia articles by measuring the performance when fine-tuning on two question answering tasks involving world knowledge and causal reasoning and one requiring domain (healthcare) knowledge and obtained 33.3 %, 18.6 %, and 4 % improved accuracy compared to pre-training BERT without OSCR.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "All code, data, and experiments are available on GitHub at https://github.com/h4ste/oscar.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reproducibility", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.cyc.com/opencyc/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work was supported by the intramural research program at the U.S. National Library of Medicine, National Institutes of Health, and utilized the computational resources of the NIH HPC Biowulf cluster (http://hpc.nih.gov).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Dbpedia: A nucleus for a web of open data", |
|
"authors": [ |
|
{ |
|
"first": "S\u00f6ren", |
|
"middle": [], |
|
"last": "Auer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Bizer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Georgi", |
|
"middle": [], |
|
"last": "Kobilarov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jens", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Cyganiak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Ives", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 6th International The Semantic Web and 2Nd Asian Conference on Asian Semantic Web Conference, ISWC'07/ASWC'07", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "722--735", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In Proceedings of the 6th International The Semantic Web and 2Nd Asian Conference on Asian Semantic Web Conference, ISWC'07/ASWC'07, pages 722- 735, Berlin, Heidelberg. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Recognizing question entailment for medical question answering", |
|
"authors": [ |
|
{ |
|
"first": "Asma", |
|
"middle": [], |
|
"last": "Ben Abacha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dina", |
|
"middle": [], |
|
"last": "Demner-Fushman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "AMIA 2016", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asma Ben Abacha and Dina Demner-Fushman. 2016. Recognizing question entailment for medical ques- tion answering. In AMIA 2016, American Medical In- formatics Association Annual Symposium, Chicago, IL, USA, November 12-16, 2016.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Translating embeddings for modeling multirelational data", |
|
"authors": [ |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Usunier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alberto", |
|
"middle": [], |
|
"last": "Garcia-Duran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oksana", |
|
"middle": [], |
|
"last": "Yakhnenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "26", |
|
"issue": "", |
|
"pages": "2787--2795", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2787-2795. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Semi-supervised sequence learning", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Andrew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Quoc V", |
|
"middle": [], |
|
"last": "Dai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "28", |
|
"issue": "", |
|
"pages": "3079--3087", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3079-3087. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Embedding open-domain common-sense knowledge from text", |
|
"authors": [ |
|
{ |
|
"first": "Travis", |
|
"middle": [], |
|
"last": "Goodwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanda", |
|
"middle": [], |
|
"last": "Harabagiu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4621--4628", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Travis Goodwin and Sanda Harabagiu. 2016. Embed- ding open-domain common-sense knowledge from text. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 4621-4628, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Identifying personal stories in millions of weblog entries", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Gordon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reid", |
|
"middle": [], |
|
"last": "Swanson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Third International Conference on Weblogs and Social Media, Data Challenge Workshop", |
|
"volume": "46", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Gordon and Reid Swanson. 2009. Identifying personal stories in millions of weblog entries. In Third International Conference on Weblogs and So- cial Media, Data Challenge Workshop, San Jose, CA, volume 46.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Logic and conversation", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "H Paul Grice", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1975, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "41--58", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "H Paul Grice. 1975. Logic and conversation. 1975, pages 41-58.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Universal language model fine-tuning for text classification", |
|
"authors": [ |
|
{ |
|
"first": "Jeremy", |
|
"middle": [], |
|
"last": "Howard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "328--339", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328-339. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning entity and relation embeddings for knowledge graph completion", |
|
"authors": [ |
|
{ |
|
"first": "Yankai", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2181--2187", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation em- beddings for knowledge graph completion. In Pro- ceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15, pages 2181-2187. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Nltk: The natural language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "63--70", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/1118108.1118117" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Com- putational Linguistics -Volume 1, ETMTNLP '02, pages 63-70, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Computational linguistics and deep learning", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Christopher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Computational Linguistics", |
|
"volume": "41", |
|
"issue": "4", |
|
"pages": "701--707", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D Manning. 2015. Computational linguis- tics and deep learning. Computational Linguistics, 41(4):701-707.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Efficient estimation of word representations in vector space", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1301.3781" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Wordnet: a lexical database for english", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "George", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Communications of the ACM", |
|
"volume": "38", |
|
"issue": "11", |
|
"pages": "39--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A corpus and cloze evaluation for deeper understanding of commonsense stories", |
|
"authors": [ |
|
{ |
|
"first": "Nasrin", |
|
"middle": [], |
|
"last": "Mostafazadeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushmeet", |
|
"middle": [], |
|
"last": "Kohli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Allen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "839--849", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1098" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. 2016. A cor- pus and cloze evaluation for deeper understanding of commonsense stories. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 839-849. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A three-way model for collective learning on multi-relational data", |
|
"authors": [ |
|
{ |
|
"first": "Maximilian", |
|
"middle": [], |
|
"last": "Nickel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hans-Peter", |
|
"middle": [], |
|
"last": "Volker Tresp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kriegel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "809--816", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. 2011. A three-way model for collective learning on multi-relational data. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11, pages 809-816, USA. Omnipress.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/D14-1162" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1532-1543. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Semi-supervised sequence tagging with bidirectional language models", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Waleed", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chandra", |
|
"middle": [], |
|
"last": "Bhagavatula", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Russell", |
|
"middle": [], |
|
"last": "Power", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1756--1765", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1161" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. 2017. Semi-supervised sequence tagging with bidirectional language models. In Pro- ceedings of the 55th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1756-1765. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Improving language understanding with unsupervised learning", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karthik", |
|
"middle": [], |
|
"last": "Narasimhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Karthik Narasimhan, Time Salimans, and Ilya Sutskever. 2018. Improving language un- derstanding with unsupervised learning. Technical report, Technical report, OpenAI.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "SQuAD: 100,000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "Pranav", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Konstantin", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2383--2392", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D16-1264" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Choice of plausible alternatives: An evaluation of commonsense causal reasoning", |
|
"authors": [ |
|
{ |
|
"first": "Melissa", |
|
"middle": [], |
|
"last": "Roemmele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew S", |
|
"middle": [], |
|
"last": "Cosmin Adrian Bejan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Gordon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "2011 AAAI Spring Symposium Series", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S Gordon. 2011. Choice of plausible alterna- tives: An evaluation of commonsense causal reason- ing. In 2011 AAAI Spring Symposium Series.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Open mind common sense: Knowledge acquisition from the general public", |
|
"authors": [ |
|
{ |
|
"first": "Push", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erik", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Mueller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Grace", |
|
"middle": [], |
|
"last": "Lim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Travell", |
|
"middle": [], |
|
"last": "Perkins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wan", |
|
"middle": [ |
|
"Li" |
|
], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "On the Move to Meaningful Internet Systems, 2002 -DOA/CoopIS/ODBASE 2002 Confederated International Conferences DOA, CoopIS and ODBASE 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1223--1237", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Push Singh, Thomas Lin, Erik T. Mueller, Grace Lim, Travell Perkins, and Wan Li Zhu. 2002. Open mind common sense: Knowledge acquisition from the gen- eral public. In On the Move to Meaningful Internet Systems, 2002 -DOA/CoopIS/ODBASE 2002 Con- federated International Conferences DOA, CoopIS and ODBASE 2002, pages 1223-1237, Berlin, Hei- delberg. Springer-Verlag.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Robyn", |
|
"middle": [], |
|
"last": "Speer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Chin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Havasi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4444--4451", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In AAAI Conference on Artificial Intelligence, pages 4444-4451.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Ernie: Enhanced representation through knowledge integration", |
|
"authors": [ |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shuohuan", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yukun", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shikun", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuyi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Han", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danxiang", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hua", |
|
"middle": [], |
|
"last": "Hao Tian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. Ernie: Enhanced represen- tation through knowledge integration.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "GNU Parallel", |
|
"authors": [ |
|
{ |
|
"first": "Ole", |
|
"middle": [], |
|
"last": "Tange", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.5281/zenodo.1146014" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ole Tange. 2018. GNU Parallel 2018. Ole Tange.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "cloze procedure\": A new tool for measuring readability", |
|
"authors": [ |
|
{ |
|
"first": "Wilson", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Taylor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1953, |
|
"venue": "Journalism Bulletin", |
|
"volume": "30", |
|
"issue": "4", |
|
"pages": "415--433", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1177/107769905303000401" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wilson L. Taylor. 1953. \"cloze procedure\": A new tool for measuring readability. Journalism Bulletin, 30(4):415-433.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Attention is all you need", |
|
"authors": [ |
|
{ |
|
"first": "Ashish", |
|
"middle": [], |
|
"last": "Vaswani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Niki", |
|
"middle": [], |
|
"last": "Parmar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jakob", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aidan", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Gomez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Kaiser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "5998--6008", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Games with a purpose", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Von Ahn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Computer", |
|
"volume": "39", |
|
"issue": "6", |
|
"pages": "92--94", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/MC.2006.196" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. von Ahn. 2006. Games with a purpose. Computer, 39(6):92-94.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Knowledge graph embedding: A survey of approaches and applications", |
|
"authors": [ |
|
{ |
|
"first": "Q", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "29", |
|
"issue": "12", |
|
"pages": "2724--2743", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/TKDE.2017.2754499" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Q. Wang, Z. Mao, B. Wang, and L. Guo. 2017. Knowl- edge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724-2743.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Knowledge graph embedding by translating on hyperplanes", |
|
"authors": [ |
|
{ |
|
"first": "Zhen", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianwen", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianlin", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI'14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1112--1119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by trans- lating on hyperplanes. In Proceedings of the Twenty- Eighth AAAI Conference on Artificial Intelligence, AAAI'14, pages 1112-1119. AAAI Press.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Embedding symbolic knowledge into deep networks", |
|
"authors": [ |
|
{ |
|
"first": "Yaqi", |
|
"middle": [], |
|
"last": "Xie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ziwei", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Mohan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kankanhalli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Kuldeep", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harold", |
|
"middle": [], |
|
"last": "Meel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Soh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "32", |
|
"issue": "", |
|
"pages": "4233--4243", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yaqi Xie, Ziwei Xu, Mohan S Kankanhalli, Kuldeep S Meel, and Harold Soh. 2019. Embedding sym- bolic knowledge into deep networks. In H. Wal- lach, H. Larochelle, A. Beygelzimer, F. d Alch\u00e9-Buc, E. Fox, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems 32, pages 4233- 4243. Curran Associates, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "One-shot relational learning for knowledge graphs", |
|
"authors": [ |
|
{ |
|
"first": "Wenhan", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shiyu", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoxiao", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"Yang" |
|
], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1980--1990", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2018. One-shot relational learning for knowledge graphs. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 1980-1990. Associ- ation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "ERNIE: Enhanced language representation with informative entities", |
|
"authors": [ |
|
{ |
|
"first": "Zhengyan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xu", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xin", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maosong", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1441--1451", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1139" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441-1451, Florence, Italy. Association for Compu- tational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Example of a question requiring commonsense and causal reasoning(Roemmele et al., 2011) with entities highlighted.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Finite State Transducer (FST) used to detect entities during pretraining; each node corresponds to a word ID, double circles represent terminal states, and indicates the th pretrained entity embedding in Con-ceptNet's NumberBatch.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Example of a Story Cloze question (correct answer is A).", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Example of a Recognizing Question Entailment (RQE) question (correct answer is A).", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"text": "The role of semantic composition in OSCR, is to learn a composed representation 1 , 2 , \u2022 \u2022 \u2022 , British policy during the Spanish Civil War was officially that of [MASK] ##intervention", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>for each entity detected in compose , Pre-training Sentence: 1 2 3 4 5 6 7 8 9 10 11 12</td><td>such that \u22ef 13</td><td>.</td><td>=</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "Accuracy of fine-tuned BERT after pretraining on the Cirrus Wikipedia data with and without OSCR.", |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>entails answering the CHQ, as illustrated in Fig-</td></tr><tr><td>ure 5.</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |