{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:37.030712Z"
},
"title": "Common Sense or World Knowledge? Investigating Adapter-Based Knowledge Injection into Pretrained Transformers",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {
"country": "Germany"
}
},
"email": ""
},
{
"first": "Olga",
"middle": [],
"last": "Majewska",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Leonardo",
"middle": [
"F",
"R"
],
"last": "Ribeiro",
"suffix": "",
"affiliation": {
"laboratory": "Ubiquitous Knowledge Processing (UKP) Lab",
"institution": "TU Darmstadt",
"location": {
"settlement": "Darmstadt",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": "",
"affiliation": {
"laboratory": "Ubiquitous Knowledge Processing (UKP) Lab",
"institution": "",
"location": {
"settlement": "Darmstadt",
"region": "TU",
"country": "Germany"
}
},
"email": ""
},
{
"first": "Nikolai",
"middle": [],
"last": "Rozanov",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "\u2660",
"middle": [],
"last": "Goran Glava\u0161",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Mannheim",
"location": {
"country": "Germany"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Following the major success of neural language models (LMs) such as BERT or GPT-2 on a variety of language understanding tasks, recent work focused on injecting (structured) knowledge from external resources into these models. While on the one hand, joint pretraining (i.e., training from scratch, adding objectives based on external knowledge to the primary LM objective) may be prohibitively computationally expensive, post-hoc fine-tuning on external knowledge, on the other hand, may lead to the catastrophic forgetting of distributional knowledge. In this work, we investigate models for complementing the distributional knowledge of BERT with conceptual knowledge from ConceptNet and its corresponding Open Mind Common Sense (OMCS) corpus, respectively, using adapter training. While overall results on the GLUE benchmark paint an inconclusive picture, a deeper analysis reveals that our adapter-based models substantially outperform BERT (up to 15-20 performance points) on inference tasks that require the type of conceptual knowledge explicitly present in ConceptNet and OMCS. We also open source all our experiments and relevant code under: https://github.com/ wluper/retrograph.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Following the major success of neural language models (LMs) such as BERT or GPT-2 on a variety of language understanding tasks, recent work focused on injecting (structured) knowledge from external resources into these models. While on the one hand, joint pretraining (i.e., training from scratch, adding objectives based on external knowledge to the primary LM objective) may be prohibitively computationally expensive, post-hoc fine-tuning on external knowledge, on the other hand, may lead to the catastrophic forgetting of distributional knowledge. In this work, we investigate models for complementing the distributional knowledge of BERT with conceptual knowledge from ConceptNet and its corresponding Open Mind Common Sense (OMCS) corpus, respectively, using adapter training. While overall results on the GLUE benchmark paint an inconclusive picture, a deeper analysis reveals that our adapter-based models substantially outperform BERT (up to 15-20 performance points) on inference tasks that require the type of conceptual knowledge explicitly present in ConceptNet and OMCS. We also open source all our experiments and relevant code under: https://github.com/ wluper/retrograph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Self-supervised neural models like ELMo (Peters et al., 2018) , BERT (Devlin et al., 2019; Liu et al., 2019b) , GPT (Radford et al., 2018 (Radford et al., , 2019 , or XL-Net (Yang et al., 2019) have rendered language modeling a very suitable pretraining task for learning language representations that are useful for a wide range of language understanding tasks (Wang et al., 2018 (Wang et al., , 2019 . Although shown versatile w.r.t. the types of knowledge (Rogers et al., 2020) they encode, much like their predecessors -static word embedding models (Mikolov et al., 2013; Pennington et al., 2014) -neural LMs still only \"consume\" the distributional information from large corpora. Yet, a number of structured knowledge sources exist -knowledge bases (KBs) (Suchanek et al., 2007; Auer et al., 2007) and lexico-semantic networks (Miller, 1995; Liu and Singh, 2004; Navigli and Ponzetto, 2010 ) -encoding many types of knowledge that are underrepresented in text corpora.",
"cite_spans": [
{
"start": 40,
"end": 61,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 69,
"end": 90,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 91,
"end": 109,
"text": "Liu et al., 2019b)",
"ref_id": "BIBREF14"
},
{
"start": 116,
"end": 137,
"text": "(Radford et al., 2018",
"ref_id": "BIBREF23"
},
{
"start": 138,
"end": 161,
"text": "(Radford et al., , 2019",
"ref_id": "BIBREF24"
},
{
"start": 174,
"end": 193,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF39"
},
{
"start": 362,
"end": 380,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF35"
},
{
"start": 381,
"end": 401,
"text": "(Wang et al., , 2019",
"ref_id": "BIBREF34"
},
{
"start": 459,
"end": 480,
"text": "(Rogers et al., 2020)",
"ref_id": null
},
{
"start": 553,
"end": 575,
"text": "(Mikolov et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 576,
"end": 600,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 760,
"end": 783,
"text": "(Suchanek et al., 2007;",
"ref_id": "BIBREF32"
},
{
"start": 784,
"end": 802,
"text": "Auer et al., 2007)",
"ref_id": "BIBREF0"
},
{
"start": 832,
"end": 846,
"text": "(Miller, 1995;",
"ref_id": "BIBREF16"
},
{
"start": 847,
"end": 867,
"text": "Liu and Singh, 2004;",
"ref_id": "BIBREF12"
},
{
"start": 868,
"end": 894,
"text": "Navigli and Ponzetto, 2010",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Starting from this observation, most recent efforts focused on injecting factual (Zhang et al., 2019; Liu et al., 2019a; Peters et al., 2019) and linguistic knowledge (Lauscher et al., 2019; Peters et al., 2019) into pretrained LMs and demonstrated the usefulness of such knowledge in language understanding tasks (Wang et al., 2018 (Wang et al., , 2019 . Joint pretraining models, on the one hand, augment distributional LM objectives with additional objectives based on external resources (Yu and Dredze, 2014; Nguyen et al., 2016; Lauscher et al., 2019) and train the extended model from scratch. For models like BERT, this implies computationally expensive retraining from scratch of the encoding transformer network. Post-hoc fine-tuning models (Zhang et al., 2019; Liu et al., 2019a; Peters et al., 2019) , on the other hand, use the objectives based on external resources to fine-tune the encoder's parameters, pretrained via distributional LM objectives. If the amount of fine-tuning data is substantial, however, this approach may lead to catastrophic forgetting of distributional knowledge obtained in pretraining (Goodfellow et al., 2014; Kirkpatrick et al., 2017) .",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "(Zhang et al., 2019;",
"ref_id": "BIBREF41"
},
{
"start": 102,
"end": 120,
"text": "Liu et al., 2019a;",
"ref_id": null
},
{
"start": 121,
"end": 141,
"text": "Peters et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 167,
"end": 190,
"text": "(Lauscher et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 191,
"end": 211,
"text": "Peters et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 314,
"end": 332,
"text": "(Wang et al., 2018",
"ref_id": "BIBREF35"
},
{
"start": 333,
"end": 353,
"text": "(Wang et al., , 2019",
"ref_id": "BIBREF34"
},
{
"start": 491,
"end": 512,
"text": "(Yu and Dredze, 2014;",
"ref_id": "BIBREF40"
},
{
"start": 513,
"end": 533,
"text": "Nguyen et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 534,
"end": 556,
"text": "Lauscher et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 750,
"end": 770,
"text": "(Zhang et al., 2019;",
"ref_id": "BIBREF41"
},
{
"start": 771,
"end": 789,
"text": "Liu et al., 2019a;",
"ref_id": null
},
{
"start": 790,
"end": 810,
"text": "Peters et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1124,
"end": 1149,
"text": "(Goodfellow et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 1150,
"end": 1175,
"text": "Kirkpatrick et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, similar to the concurrent work of Wang et al. (2020) , we turn to the recently proposed adapter-based fine-tuning paradigm (Rebuffi et al., 2018; Houlsby et al., 2019) , which remedies the shortcomings of both joint pretraining and standard post-hoc fine-tuning. Adapterbased training injects additional parameters into the encoder and only tunes their values: original transformer parameters are kept fixed. Be-cause of this, adapter training preserves the distributional information obtained in LM pretraining, without the need for any distributional (re-)training. While (Wang et al., 2020) inject factual knowledge from Wikidata (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) into BERT, in this work, we investigate two resources that are commonly assumed to contain generalpurpose and common sense knowledge: 1 Concept-Net (Liu and Singh, 2004; Speer et al., 2017) and the Open Mind Common Sense (OMCS) corpus (Singh et al., 2002) , from which the ConceptNet graph was (semi-)automatically extracted. For our first model, dubbed CN-ADAPT, we first create a synthetic corpus by randomly traversing the Con-ceptNet graph and then learn adapter parameters with masked language modelling (MLM) training (Devlin et al., 2019) on that synthetic corpus. For our second model, named OM-ADAPT, we learn the adapter parameters via MLM training directly on the OMCS corpus.",
"cite_spans": [
{
"start": 48,
"end": 66,
"text": "Wang et al. (2020)",
"ref_id": null
},
{
"start": 137,
"end": 159,
"text": "(Rebuffi et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 160,
"end": 181,
"text": "Houlsby et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 588,
"end": 607,
"text": "(Wang et al., 2020)",
"ref_id": null
},
{
"start": 647,
"end": 677,
"text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)",
"ref_id": "BIBREF33"
},
{
"start": 826,
"end": 847,
"text": "(Liu and Singh, 2004;",
"ref_id": "BIBREF12"
},
{
"start": 848,
"end": 867,
"text": "Speer et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 913,
"end": 933,
"text": "(Singh et al., 2002)",
"ref_id": "BIBREF29"
},
{
"start": 1202,
"end": 1223,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate both models on the GLUE benchmark, where we observe limited improvements over BERT on a subset of GLUE tasks. However, a more detailed inspection reveals large improvements over the base BERT model (up to 20 Matthews correlation points) on language inference (NLI) subsets labeled as requiring World Knowledge or knowledge about Named Entities. Investigating further, we relate this result to the fact that ConceptNet and OMCS contain much more of what in downstream is considered to be factual world knowledge than what is judged as common sense knowledge. Our findings pinpoint the need for more detailed analyses of compatibility between (1) the types of knowledge contained by external resources; and (2) the types of knowledge that benefit concrete downstream tasks; within the emerging body of work on injecting knowledge into pretrained transformers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we are primarily set to investigate if injecting specific types of knowledge (given in the external resource) benefits downstream inference that clearly requires those exact types of knowledge. Because of this, we use the arguably most straightforward mechanisms for injecting the Con-ceptNet and OMCS information into BERT, and leave the exploration of potentially more effective knowledge injection objectives for future work. We inject the external information into adapter parameters of the adapter-augmented BERT (Houlsby et al., 2019 ) via BERT's natural objective -masked language modelling (MLM). OMCS, already a corpus in natural language, is directly subjectable to MLM training -we filtered out non-English sentences. To subject ConceptNet to MLM training, we need to transform it into a synthetic corpus.",
"cite_spans": [
{
"start": 532,
"end": 553,
"text": "(Houlsby et al., 2019",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Models",
"sec_num": "2"
},
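For concreteness, the following is a minimal sketch of the standard BERT masked language modelling corruption (roughly 15% of tokens are selected; of those, 80% are replaced by [MASK], 10% by a random token, and 10% kept unchanged) that adapter training applies to OMCS sentences and, later, to the synthetic ConceptNet corpus. The whitespace tokenization, toy vocabulary, and helper name are simplifying assumptions, not the authors' retrograph preprocessing code.

```python
import random

VOCAB = ["religion", "car", "city", "person", "book"]  # toy vocabulary for random replacement

def mask_for_mlm(tokens, mask_prob=0.15):
    """Return (corrupted_tokens, labels); labels is None where no prediction is required."""
    corrupted, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:
            labels.append(token)                        # predict the original token here
            roll = random.random()
            if roll < 0.8:
                corrupted.append("[MASK]")              # 80%: replace with [MASK]
            elif roll < 0.9:
                corrupted.append(random.choice(VOCAB))  # 10%: replace with a random token
            else:
                corrupted.append(token)                 # 10%: keep the token unchanged
        else:
            corrupted.append(token)
            labels.append(None)
    return corrupted, labels

print(mask_for_mlm("alcoholism causes stigma .".split()))
```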
{
"text": "Unwrapping ConceptNet. Following established previous work (Perozzi et al., 2014; Ristoski and Paulheim, 2016) , we induce a synthetic corpus from ConceptNet by randomly traversing its graph. We convert relation strings into NL phrases (e.g., synonyms to is a synonym of ) and duplicate the object node of a triple, using it as the subject for the next sentence. For example, from the path \"alcoholism",
"cite_spans": [
{
"start": 59,
"end": 81,
"text": "(Perozzi et al., 2014;",
"ref_id": "BIBREF20"
},
{
"start": 82,
"end": 110,
"text": "Ristoski and Paulheim, 2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Models",
"sec_num": "2"
},
{
"text": "causes \u2212 \u2212\u2212\u2212 \u2192 stigma hasContext \u2212 \u2212\u2212\u2212\u2212\u2212 \u2192 christianity partOf \u2212\u2212\u2212\u2192 religion\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Models",
"sec_num": "2"
},
{
"text": "we create the text \"alcoholism causes stigma. stigma is used in the context of christianity. christianity is part of religion.\". We set the walk lengths to 30 relations and sample the starting and neighboring nodes from uniform distributions. In total, we performed 2,268,485 walks, resulting with the corpus of 34,560,307 synthetic sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Models",
"sec_num": "2"
},
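To illustrate the corpus construction described above, this sketch verbalizes one random walk over a toy ConceptNet-style graph, turning each traversed triple into a sentence and reusing the object of one triple as the subject of the next. The adjacency-list format and the relation-to-phrase templates are simplified assumptions, not the exact retrograph preprocessing code.

```python
import random

# Toy ConceptNet-style adjacency list: node -> list of (relation, object) edges.
GRAPH = {
    "alcoholism": [("causes", "stigma")],
    "stigma": [("hasContext", "christianity")],
    "christianity": [("partOf", "religion")],
}

# Natural-language templates for relation names (simplified assumption).
TEMPLATES = {
    "causes": "{s} causes {o}.",
    "hasContext": "{s} is used in the context of {o}.",
    "partOf": "{s} is part of {o}.",
    "synonymOf": "{s} is a synonym of {o}.",
}

def verbalize_walk(start, max_relations=30):
    """Walk the graph from `start`, emitting one sentence per traversed relation."""
    sentences, node = [], start
    for _ in range(max_relations):
        edges = GRAPH.get(node)
        if not edges:
            break
        relation, obj = random.choice(edges)  # uniform choice over outgoing edges
        sentences.append(TEMPLATES[relation].format(s=node, o=obj))
        node = obj                             # the object becomes the next subject
    return " ".join(sentences)

print(verbalize_walk("alcoholism"))
# -> "alcoholism causes stigma. stigma is used in the context of christianity. christianity is part of religion."
```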
{
"text": "Adapter-Based Training. We follow Houlsby et al. (2019) and adopt the adapter-based architecture for which they report solid performance across the board. We inject bottleneck adapters into BERT's transformer layers. In each transformer layer, we insert two bottleneck adapters: one after the multi-head attention sub-layer and another after the feed-forward sub-layer. Let X \u2208 R T \u00d7H be the sequence of contextualized vectors (of size H) for the input of T tokens in some transformer layer, input to a bottleneck adapter. The bottleneck adapter, consisting of two feed-forward layers and a residual connection, yields the following output:",
"cite_spans": [
{
"start": 34,
"end": 55,
"text": "Houlsby et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Models",
"sec_num": "2"
},
{
"text": "Adapter (X) = X + f (XW d + b d ) W u + b u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Models",
"sec_num": "2"
},
{
"text": "where W d (with bias b d ) and W u (with bias b u ) are adapter's parameters, that is, the weights of the linear down-projection and up-projection sub-layers and f is the non-linear activation function. Matrix W d \u2208 R H\u00d7m compresses vectors in X to the adapter size m < H, and the matrix W u \u2208 R m\u00d7H projects the activated downprojections back to transformer's hidden size H.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Models",
"sec_num": "2"
},
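To make the adapter computation above concrete, here is a minimal PyTorch sketch of the bottleneck adapter Adapter(X) = X + f(X W_d + b_d) W_u + b_u with the sizes used in the paper (H = 768, m = 64, GELU as f). The module and its usage are illustrative assumptions, not the authors' retrograph implementation.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Adapter(X) = X + f(X W_d + b_d) W_u + b_u, as in Houlsby et al. (2019)."""

    def __init__(self, hidden_size=768, adapter_size=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, adapter_size)  # W_d, b_d: H -> m
        self.up = nn.Linear(adapter_size, hidden_size)    # W_u, b_u: m -> H
        self.activation = nn.GELU()                       # non-linearity f

    def forward(self, x):
        # x: (T, H) contextualized vectors; the residual connection keeps the
        # original representation available unchanged alongside the adapter output.
        return x + self.up(self.activation(self.down(x)))

# Trainable parameters per adapter: 2*H*m + H + m = 2*768*64 + 768 + 64 = 99,136 (~0.1M).
adapter = BottleneckAdapter()
out = adapter(torch.randn(128, 768))  # a sequence of T = 128 token vectors
```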
{
"text": "The ratio H/m determines how many times fewer parameters we optimize with adapter-based training compared to standard fine-tuning of all transformer's parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Knowledge Injection Models",
"sec_num": "2"
},
{
"text": "We first briefly describe the downstream tasks and training details, and then proceed with the discussion of results obtained with our adapter models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3"
},
{
"text": "Downstream Tasks. We evaluate BERT and our two adapter-based models, CN-ADAPT and OM-ADAPT, with injected knowledge from ConceptNet and OMCS, respectively, on the tasks from the GLUE benchmark (Wang et al., 2018) :",
"cite_spans": [
{
"start": 193,
"end": 212,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup.",
"sec_num": "3.1"
},
{
"text": "CoLA ( MNLI (Williams et al., 2018) : Ternary natural language inference (NLI) classification of sentence pairs. Two test sets are given: a matched version (MNLI-m) in which the test domains match the domains from training data, and a mismatched version (MNLI-mm) with different test domains;",
"cite_spans": [
{
"start": 12,
"end": 35,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup.",
"sec_num": "3.1"
},
{
"text": "QNLI: A binary classification version of the Stanford Q&A dataset (Rajpurkar et al., 2016) ; RTE (Bentivogli et al., 2009) : Another NLI dataset, ternary entailment classification for sentence pairs;",
"cite_spans": [
{
"start": 66,
"end": 90,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF25"
},
{
"start": 97,
"end": 122,
"text": "(Bentivogli et al., 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup.",
"sec_num": "3.1"
},
{
"text": "Diag (Wang et al., 2018) : A manually curated NLI dataset, with examples labeled with specific types of knowledge needed for entailment decisions.",
"cite_spans": [
{
"start": 5,
"end": 24,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup.",
"sec_num": "3.1"
},
{
"text": "Training Details. We inject our adapters into a BERT Base model (12 transformer layers with 12 attention heads each; H = 768) pretrained on lowercased corpora. Following (Houlsby et al., 2019) , we set the size of all adapters to m = 64 and use GELU (Hendrycks and Gimpel, 2016) as the adapter activation f . We train the adapter parameters with the Adam algorithm (Kingma and Ba, 2015) (initial learning rate set to 1e \u22124 , with 10000 warm-up steps and the weight decay factor of 0.01). In downstream fine-tuning, we train in batches of size 16 and limit the input sequences to T = 128 wordpiece tokens. For each task, we find the optimal hyperparameter configuration from the following grid: learning rate l \u2208 {2 \u2022 10 \u22125 , 3 \u2022 10 \u22125 }, epochs in n \u2208 {3, 4}.",
"cite_spans": [
{
"start": 170,
"end": 192,
"text": "(Houlsby et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 250,
"end": 278,
"text": "(Hendrycks and Gimpel, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup.",
"sec_num": "3.1"
},
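To illustrate what training only the adapter parameters means in practice, the sketch below sets up the optimizer with the hyperparameters listed above (Adam, initial learning rate 1e-4, weight decay 0.01) while freezing all pretrained transformer weights. The model wrapper and the parameter-naming convention ("adapter" in the parameter name) are assumptions for illustration, not the authors' exact training code, and the warm-up schedule is omitted.

```python
import torch

def adapter_optimizer(model):
    """Freeze pretrained BERT weights and optimize only the injected adapter parameters."""
    adapter_params = []
    for name, param in model.named_parameters():
        if "adapter" in name:            # assumed naming convention for injected modules
            param.requires_grad = True
            adapter_params.append(param)
        else:
            param.requires_grad = False  # original transformer parameters stay fixed
    # Adam with the settings reported above (the 10,000 warm-up steps would be handled by a separate scheduler).
    return torch.optim.Adam(adapter_params, lr=1e-4, weight_decay=0.01)
```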
{
"text": "GLUE Results. Table 1 reveals the performance of CN-ADAPT and OM-ADAPT in comparison with BERT Base on GLUE evaluation tasks. We show the results for two snapshots of OM-ADAPT, after 25K and 100K update steps, and for two snapshots of CN-ADAPT, after 50K and 100K steps of adapter training. Overall, none of our adapterbased models with injected external knowledge from ConceptNet or OMCS yields significant improvements over BERT Base on GLUE. However, we observe substantial improvements (of around 3 points) on RTE and on the Diagnostics NLI dataset (Diag), which encompasses inference instances that require a specific type of knowledge.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "3.2"
},
{
"text": "Since our adapter models draw specifically on the conceptual knowledge encoded in ConceptNet and OMCS, we expect the positive impact of injected external knowledge -assuming effective injection -to be most observable on test instances that target the same types of conceptual knowledge. To investigate this further, we measure the model performance across different categories of the Diagnostic NLI dataset. This allows us to tease apart inference instances which truly test the efficacy of our knowledge injection methods. We show the results obtained on different categories of the Diagnostic NLI dataset in Table 2 . The improvements of our adapter-based models over BERT Base on these phenomenon-specific subsections of the Diagnostics NLI dataset are generally much more pronounced: e.g., OM-ADAPT (25K) yields a 7% improvement on inference that requires factual or common sense knowledge (KNO), whereas CN-ADAPT (100K) yields a 6% boost for inference that depends on lexico-semantic knowledge (LS). These results suggest that (1) ConceptNet and OMCS do contain the specific types of knowledge required for these inference categories and that (2) we managed to inject that knowledge into BERT by training adapters on these resources.",
"cite_spans": [],
"ref_spans": [
{
"start": 610,
"end": 617,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "3.2"
},
{
"text": "Fine-Grained Knowledge Type Analysis. In our final analysis, we \"zoom in\" our models' performances on three fine-grained categories of the Diagnostics NLI dataset -inference instances that require Common Sense Knowledge (CS), World Knowledge (World), and knowledge about Named Entities (NE), respectively. The results for these fine-grained categories are given in In contrast, many of the CS inference instances require complex, high-level reasoning, understanding metaphorical and idiomatic meaning, and making far-reaching connections. We display NLI Dignostics examples from the World Knowledge and Common Sense categories in Table 4 . In such cases, explicit conceptual links often do not suffice for a correct inference and much of the required knowledge is not explicitly encoded in the external resources. Consider, e.g., the following CS NLI instance: [premise: My jokes fully reveal my character ; hypothesis: If everyone believed my jokes, they'd know exactly who I was ; entailment]. While ConceptNet and OMCS may associate character with personality or personality with identity, the knowledge that the phrase who I was may refer to identity is beyond the explicit knowledge present in these resources. This sheds light on the results in Table 3 : when the knowledge required to tackle the inference problem at hand is available in the external resource, our adapter-based knowledge-injected models significantly outperform the baseline transformer; otherwise, the benefits of knowledge injection are neg-",
"cite_spans": [],
"ref_spans": [
{
"start": 630,
"end": 637,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1251,
"end": 1258,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "3.2"
},
{
"text": "Table 4 examples (columns: Premise, Hypothesis, ConceptNet relation). World Knowledge. Premise: The sides came to an agreement after their meeting in Stockholm. Hypothesis: The sides came to an agreement after their meeting in Sweden. ConceptNet: stockholm [partOf] sweden. Premise: Musk decided to offer up his personal Tesla roadster. Hypothesis: Musk decided to offer up his personal car. ConceptNet: roadster [isA] car. Premise: The Sydney area has been inhabited by indigenous Australians for at least 30,000 years. Hypothesis: The Sydney area has been inhabited by Aboriginal people for at least 30,000 years. ConceptNet: indigenous [synonymOf] aboriginal. Common Sense. Premise: My jokes fully reveal my character. Hypothesis: If everyone believed my jokes, they'd know exactly who I was. Premise: The systems thus produced are incremental: dialogues are processed word-by-word, shown previously to be essential in supporting natural, spontaneous dialogue. Hypothesis: The systems thus produced support the capability to interrupt an interlocutor mid-sentence. Premise: He deceitfully proclaimed: \"This is all I ever really wanted.\" Hypothesis: He was satisfied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "3.2"
},
{
"text": "The promising results on the World Knowledge and Named Entities portions of the Diagnostics dataset suggest that our method does successfully inject external information into the pretrained transformer, and that the presence of the knowledge required for the task in the external resources is an obvious prerequisite.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Analysis",
"sec_num": "3.2"
},
{
"text": "We presented two simple strategies for injecting external knowledge from ConceptNet and OMCS corpus, respectively, into BERT via bottleneck adapters. Additional adapter parameters store the external knowledge and allow for the preservation of the rich distributional knowledge acquired in BERT's pretraining in the original transformer parameters. We demonstrated the effectiveness of these models in language understanding settings that require precisely the type of knowledge that one finds in ConceptNet and OMCS, in which our adapter-based models outperform BERT by up to 20 performance points. Our findings stress the importance of having detailed analyses that com-pare (a) the types of knowledge found in external resources being injected against (b) the types of knowledge that a concrete downstream reasoning tasks requires. We hope this work motivates further research effort in the direction of fine-grained knowledge typing, both of explicit knowledge in external resources and the implicit knowledge stored in pretrained transformers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "Our results in \u00a73.2 scrutinize this assumption.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Anne Lauscher and Goran Glava\u0161 are supported by the Eliteprogramm of the Baden-W\u00fcrttemberg Stiftung (AGREE grant). Leonardo F. R. Ribeiro has been supported by the German Research Foundation as part of the Research Training Group AIPHES under the grant No. GRK 1994/1. This work has been supported by the German Research Foundation within the project \"Open Argument Mining\" (GU 798/25-1), associated with the Priority Program \"Robust Argumentation Machines (RATIO)\" (SPP-1999). The work of Olga Majewska was conducted under the research lab of Wluper Ltd. (UK/ 10195181).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dbpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "The semantic web",
"volume": "",
"issue": "",
"pages": "722--735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The semantic web, pages 722-735. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The fifth pascal recognizing textual entailment challenge",
"authors": [
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing tex- tual entailment challenge. In TAC.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "1--14",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2001"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, I\u00f1igo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Quora question pairs",
"authors": [
{
"first": "Zihan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hongbo",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiaoji",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Leqi",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zihan Chen, Hongbo Zhang, Xiaoji Zhang, and Leqi Zhao. 2018. Quora question pairs.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An empirical investigation of catastrophic forgeting in gradientbased neural networks",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ian",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Aaron Courville Da",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian J Goodfellow, Mehdi Mirza, Aaron Courville Da Xiao, and Yoshua Bengio. 2014. An empirical investigation of catastrophic forgeting in gradient- based neural networks. In In Proceedings of Inter- national Conference on Learning Representations (ICLR. Citeseer.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Gaussian error linear units (gelus)",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian er- ror linear units (gelus).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Parameter-efficient transfer learning for nlp",
"authors": [
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Giurgiu",
"suffix": ""
},
{
"first": "Stanislaw",
"middle": [],
"last": "Jastrzebski",
"suffix": ""
},
{
"first": "Bruna",
"middle": [],
"last": "Morrone",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "De Laroussilhe",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Gesmundo",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Attariyan",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2790--2799",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Overcoming catastrophic forgetting in neural networks",
"authors": [
{
"first": "James",
"middle": [],
"last": "Kirkpatrick",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Pascanu",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Rabinowitz",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Veness",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Desjardins",
"suffix": ""
},
{
"first": "Andrei",
"middle": [
"A"
],
"last": "Rusu",
"suffix": ""
},
{
"first": "Kieran",
"middle": [],
"last": "Milan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Quan",
"suffix": ""
},
{
"first": "Tiago",
"middle": [],
"last": "Ramalho",
"suffix": ""
},
{
"first": "Agnieszka",
"middle": [],
"last": "Grabska-Barwinska",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the national academy of sciences",
"volume": "114",
"issue": "",
"pages": "3521--3526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Ag- nieszka Grabska-Barwinska, et al. 2017. Over- coming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521-3526.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Informing unsupervised pretraining with external linguistic knowledge",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Lauscher",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Edoardo",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.02339"
]
},
"num": null,
"urls": [],
"raw_text": "Anne Lauscher, Ivan Vuli\u0107, Edoardo Maria Ponti, Anna Korhonen, and Goran Glava\u0161. 2019. Inform- ing unsupervised pretraining with external linguistic knowledge. arXiv preprint arXiv:1909.02339.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Conceptnet-a practical commonsense reasoning tool-kit",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Push",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2004,
"venue": "BT technology journal",
"volume": "22",
"issue": "4",
"pages": "211--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Liu and Push Singh. 2004. Conceptnet-a practi- cal commonsense reasoning tool-kit. BT technology journal, 22(4):211-226.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Qi Ju, Haotang Deng, and Ping Wang. 2019a. K-bert: Enabling language representation with knowledge graph",
"authors": [
{
"first": "Weijie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiruo",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.07606"
]
},
"num": null,
"urls": [],
"raw_text": "Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2019a. K-bert: Enabling language representation with knowledge graph. arXiv preprint arXiv:1909.07606.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "RoBERTa: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Wordnet: a lexical database for english",
"authors": [
{
"first": "A",
"middle": [],
"last": "George",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Babelnet: Building a very large multilingual semantic network",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Simone",
"middle": [
"Paolo"
],
"last": "Ponzetto",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "216--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2010. Ba- belnet: Building a very large multilingual semantic network. In Proceedings of the 48th annual meet- ing of the association for computational linguistics, pages 216-225. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Integrating distributional lexical contrast into word embeddings for antonymsynonym distinction",
"authors": [
{
"first": "Sabine",
"middle": [],
"last": "Kim Anh Nguyen",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Schulte Im Walde",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "454--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Anh Nguyen, Sabine Schulte im Walde, and Ngoc Thang Vu. 2016. Integrating distributional lexical contrast into word embeddings for antonym- synonym distinction. In Proceedings of ACL, pages 454-459.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Deepwalk: Online learning of social representations",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14",
"volume": "",
"issue": "",
"pages": "701--710",
"other_ids": {
"DOI": [
"10.1145/2623330.2623732"
]
},
"num": null,
"urls": [],
"raw_text": "Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social represen- tations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, page 701-710, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "E",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of NAACL-HLT, pages 2227-2237.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Knowledge enhanced contextual word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Logan",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Vidur",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "43--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 43-54.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "OpenAI Technical Report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training. OpenAI Tech- nical Report.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Efficient parametrization of multidomain deep neural networks",
"authors": [
{
"first": "Hakan",
"middle": [],
"last": "Sylvestre-Alvise Rebuffi",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Bilen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vedaldi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2018. Efficient parametrization of multi- domain deep neural networks. In CVPR.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Rdf2vec: Rdf graph embeddings for data mining",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Ristoski",
"suffix": ""
},
{
"first": "Heiko",
"middle": [],
"last": "Paulheim",
"suffix": ""
}
],
"year": 2016,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "498--514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Ristoski and Heiko Paulheim. 2016. Rdf2vec: Rdf graph embeddings for data mining. In Inter- national Semantic Web Conference, pages 498-514. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "2020. A primer in bertology: What we know about how bert works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.12327"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works. arXiv preprint arXiv:2002.12327.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Open mind common sense: Knowledge acquisition from the general public",
"authors": [
{
"first": "Push",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Erik",
"middle": [
"T"
],
"last": "Mueller",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Travell",
"middle": [],
"last": "Perkins",
"suffix": ""
},
{
"first": "Wan",
"middle": [
"Li"
],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "OTM Confederated International Conferences\" On the Move to Meaningful Internet Systems",
"volume": "",
"issue": "",
"pages": "1223--1237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Push Singh, Thomas Lin, Erik T Mueller, Grace Lim, Travell Perkins, and Wan Li Zhu. 2002. Open mind common sense: Knowledge acquisition from the general public. In OTM Confederated International Conferences\" On the Move to Meaningful Internet Systems\", pages 1223-1237. Springer.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631-1642.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Chin",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-First AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In Thirty-First AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Yago: a core of semantic knowledge",
"authors": [
{
"first": "M",
"middle": [],
"last": "Fabian",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "697--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697-706. ACM.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Wikidata: a free collaborative knowledgebase",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "Communications of the ACM",
"volume": "57",
"issue": "10",
"pages": "78--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: a free collaborative knowledgebase. Commu- nications of the ACM, 57(10):78-85.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Superglue: A stickier benchmark for general-purpose language understanding systems",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yada",
"middle": [],
"last": "Pruksachatkun",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3261--3275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language un- derstanding systems. In Advances in Neural Infor- mation Processing Systems, pages 3261-3275.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Blacbox NLP Workshop",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the Blacbox NLP Workshop, pages 353- 355.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Neural network acceptability judgments",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Warstadt",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Samuel R",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.12471"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1112--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1906.08237"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- ing for language understanding. arXiv preprint arXiv:1906.08237.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Improving lexical embeddings with semantic knowledge",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "545--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu and Mark Dredze. 2014. Improving lexical em- beddings with semantic knowledge. In Proceedings of ACL, pages 545-550.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1441--1451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1441-1451.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Warstadt et al., 2018): Binary sentence classification, predicting grammatical acceptability of sentences from linguistic publications; SST-2 (Socher et al., 2013): Binary sentence classification, predicting binary sentiment (positive or negative) for movie review sentences; MRPC (Dolan and Brockett, 2005): Binary sentence-pair classification, recognizing sentences which are are mutual paraphrases; STS-B (Cer et al., 2017): Sentence-pair regression task, predicting the degree of semantic similarity for a given pair of sentences; QQP (Chen et al., 2018): Binary classification task, recognizing question paraphrases;",
"type_str": "figure",
"uris": null
},
"TABREF1": {
"content": "<table><tr><td>Model</td><td>LS KNO LOG PAS All</td></tr><tr><td>BERT Base</td><td>38.5 20.2 26.7 39.6 34.2</td></tr><tr><td>OM-ADAPT (25K)</td><td>39.1 27.1 26.1 39.5 35.7</td></tr><tr><td colspan=\"2\">OM-ADAPT (100K) 37.5 21.2 27.4 41.0 34.8</td></tr><tr><td>CN-ADAPT (50K)</td><td>40.2 24.3 30.1 42.7 37.0</td></tr><tr><td colspan=\"2\">CN-ADAPT (100K) 44.2 25.2 30.4 41.9 37.8</td></tr></table>",
"text": "Results on test portions of GLUE benchmark tasks. Numbers in brackets next to adapter-based models (25K, 50K, 100K) indicate the number of update steps of adapter training on the synthetic ConceptNet corpus (for CN-ADAPT) or on the original OMCS corpus (for OM-ADAPT). Bold: the best score in each column.",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table><tr><td colspan=\"4\">: Breakdown of Diagnostics NLI performance</td></tr><tr><td colspan=\"4\">(Matthews correlation), according to information type</td></tr><tr><td colspan=\"4\">needed for inference (coarse-grained categories): Lexi-</td></tr><tr><td colspan=\"4\">cal Semantics (LS), Knowledge (KNO), Logic (LOG),</td></tr><tr><td colspan=\"3\">and Predicate Argument Structure (PAS).</td><td/></tr><tr><td>Model</td><td>CS</td><td>World</td><td>NE</td></tr><tr><td>BERT Base</td><td>29.0</td><td>10.3</td><td>15.1</td></tr><tr><td>OM-ADAPT (25K)</td><td>28.5</td><td>25.3</td><td>31.4</td></tr><tr><td>OM-ADAPT (100K)</td><td>24.5</td><td>17.3</td><td>22.3</td></tr><tr><td>CN-ADAPT (50K)</td><td>25.6</td><td>21.1</td><td>26.0</td></tr><tr><td>CN-ADAPT (100K)</td><td>24.4</td><td>25.6</td><td>36.5</td></tr></table>",
"text": "",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>: Results (Matthews correlation) on Common</td></tr><tr><td>Sense (CS), World Knowledge (World), and Named En-</td></tr><tr><td>tities (NE) categories of the Diagnostic NLI dataset.</td></tr></table>",
"text": "",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>. These</td></tr><tr><td>results show an interesting pattern: our adapter-</td></tr><tr><td>based knowledge-injection models massively out-</td></tr><tr><td>perform BERT Base (up to 15 and 21 MCC points,</td></tr><tr><td>respectively) for NLI instances labeled as requir-</td></tr><tr><td>ing World Knowledge or knowledge about Named</td></tr><tr><td>Entities. In contrast, we see drops in performance</td></tr><tr><td>on instances labeled as requiring common sense</td></tr></table>",
"text": "",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"text": "Premise-hypothesis examples from the diagnostic NLI dataset tagged for commonsense and world knowledge, and relevant ConceptNet relations, where available.",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}