{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:21:45.902654Z"
},
"title": "Jointly Identifying and Fixing Inconsistent Readings from Information Extraction Systems",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Padia",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Francis",
"middle": [],
"last": "Ferraro",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Information extraction systems analyze text to produce entities and beliefs, but their output often has errors. In this paper we analyze the reading consistency of the extracted facts with respect to the text from which they were derived and show how to detect and correct errors. We consider both the scenario when the provenance text is automatically found by an IE system and when it is curated by humans. We contrast consistency with credibility; define and explore consistency and repair tasks; and demonstrate a simple, yet effective and generalizable, model. We analyze these tasks and evaluate this approach on three datasets. Against a strong baseline model, we consistently improve both consistency and repair across three datasets using a simple MLP model with attention and lexical features.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Information extraction systems analyze text to produce entities and beliefs, but their output often has errors. In this paper we analyze the reading consistency of the extracted facts with respect to the text from which they were derived and show how to detect and correct errors. We consider both the scenario when the provenance text is automatically found by an IE system and when it is curated by humans. We contrast consistency with credibility; define and explore consistency and repair tasks; and demonstrate a simple, yet effective and generalizable, model. We analyze these tasks and evaluate this approach on three datasets. Against a strong baseline model, we consistently improve both consistency and repair across three datasets using a simple MLP model with attention and lexical features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Information Extraction (IE) systems read text to extract entities, and relations and create beliefs represented in a knowledge graph. Current systems though are far from perfect: e.g., in the 2017 Text Analysis Conference (TAC) Knowledge Base Population task, participants created knowledge graphs with relations like cause of death and city of headquarters from news corpora (Dang, 2017) . When manually evaluated, no system had achieved an F1 score above 0.3 (Rajput, 2017) .",
"cite_spans": [
{
"start": 376,
"end": 388,
"text": "(Dang, 2017)",
"ref_id": null
},
{
"start": 461,
"end": 475,
"text": "(Rajput, 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One reason for such low scores is inconsistency between the text and the extracted beliefs. We consider a belief to be consistent if the text from which it was extracted linguistically supports it (regardless of any logical or real-world factual truth). We show the difference between consistent and inconsistent readings, along with a potential correction, in Fig. 1 . In Fig. 1a , the system considered Harry Reid was charged with an assault, which is not consistent with the provenance sentence. In Fig. 1b the system is consistent in constructing its belief.",
"cite_spans": [],
"ref_spans": [
{
"start": 361,
"end": 367,
"text": "Fig. 1",
"ref_id": null
},
{
"start": 373,
"end": 380,
"text": "Fig. 1a",
"ref_id": null
},
{
"start": 502,
"end": 509,
"text": "Fig. 1b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Belief learned by IE system: per:charges(Harry Reid, assault) Provenance identified by IE system:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nevada's Harry Reid switches longtime stance to support assault weapon ban Analysis output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Is reading consistent: Inconsistent Suggested relation: no repair (a) An inconsistent reading with no correction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Belief learned by IE system: per:cause_of_death(Edward Hardman, Typhoid fever) Provenance identified by IE system:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Western Australian government agreed to offer the Government Geologist post to Hardman shortly before news of his death reached them . Early in April , he contracted typhoid fever , and died a few days later in a Dublin hospital on 6 April Analysis output:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Is reading consistent: Consistent Suggested relation: per:cause_of_death (b) A consistent reading not requiring a correction. Notice the relation is unchanged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Examples of beliefs extracted from real IE systems on the TAC 2015 English news corpus, demonstrating the consistency and repair tasks. Multiple sentences can contribute to a belief (1b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We study two problems: (i) whether an extracted belief is consistent with its text (called consistency), and (ii) correcting it if not (called repair). We believe we are the first to study these problems jointly. We model these problems jointly, arguing that addressing both of these is important and can benefit one another. Our use of consistency here refers to a language-based sense that text supports the belief even if its contradicts world knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We are concerned with methods that can be standalone-that is, reliant on neither a precise schema (Ojha and Talukdar, 2017) nor an ensemble of IE systems, e.g., Yu et al. (2014) ; Viswanathan et al. (2015) . Previous work on determining the consistency of an IE extraction was not standalone. We want a standalone approach because the results from non-standalone approaches cannot be applied when only the beliefs and associated provenance text is available without the IE ensemble systems and schema. (For this study we consider English beliefs and provenance sentences.) Parallel to the broad IE domain, schema-free and standalone systems have been developed to verify the credibility of news claims (Popat et al., 2018; Riedel et al., 2017a; Rashkin et al., 2017 ), but we are not aware of a study of their performance on IE system tasks. We incorporate these credibility systems into our study in order to determine their applicability for our tasks. We make the following contributions.",
"cite_spans": [
{
"start": 98,
"end": 123,
"text": "(Ojha and Talukdar, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 161,
"end": 177,
"text": "Yu et al. (2014)",
"ref_id": "BIBREF27"
},
{
"start": 180,
"end": 205,
"text": "Viswanathan et al. (2015)",
"ref_id": "BIBREF25"
},
{
"start": 702,
"end": 722,
"text": "(Popat et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 723,
"end": 744,
"text": "Riedel et al., 2017a;",
"ref_id": "BIBREF19"
},
{
"start": 745,
"end": 765,
"text": "Rashkin et al., 2017",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A study of real IE inconsistencies. We catalog and examine the understudied aspect of languagebased consistency ( \u00a73).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A novel framework. To our knowledge we are the first to study and propose a framework for joint consistency and repair ( \u00a74).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Analysis of techniques. We show the effectiveness of straightforward techniques compared to more complicated approaches ( \u00a75).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Study of different provenance settings. We consider and contrast cases where provenance sentences are retrieved by an IE system (as in TAC) vs. where they are curated by humans (as in Zhang et al. (2017, TACRED) ).",
"cite_spans": [
{
"start": 184,
"end": 211,
"text": "Zhang et al. (2017, TACRED)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We say the belief was consistently read if the text lexically supports the belief. While this can be viewed as a lexical entailment, it is not a logical, causal, or broader inferential/knowledge entailment. For example the belief <Barack Obama,per:president_of,Kenya> is consistent with a provenance sentence \"Barack Obama, president of Kenya, visited the U.S. for talks\" even though the sentence falsely claims that Obama is president of Kenya. . The belief is considered repaired if the relation extracted by the IE system was not supported by the text, but when replaced by another relation that is supported by the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency and Repair",
"sec_num": "2.1"
},
{
"text": "We use three datasets: TAC 2015, TAC 2017, and a novel dataset we call TACRED-KG. All datasets use actual output from real IE systems. Each dataset is split into train/dev/test splits: in Table 2 (in the appendix) we show the size of each split, in terms of the number of provenance-backed beliefs.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.2"
},
{
"text": "TAC 2015 and 2017. These include the output of 70+ IE systems, from the TAC 2015 and TAC 2017 shared tasks, with belief triples supported by up to four provenance sentences. Each belief was evaluated by an LDC expert (Ellis, 2015a) . We used these LDC judgments as the consistency labels for our experiments. For TAC 2015, 27% of the 34k beliefs are judged consistent; for TAC 2017, 36% of the 57k beliefs are judged consistent.",
"cite_spans": [
{
"start": 217,
"end": 231,
"text": "(Ellis, 2015a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.2"
},
{
"text": "These TAC datasets do not, however, contain information on possible corrections when the belief is inconsistent. To overcome this limitation, we used negative sampling on the consistent beliefs with their provenance to create an inconsistent pair. We first selected an entity and then identified a set of relations that apply to the entity. We randomly chose one of the relations with uniform probability and shuffled it with another relation, keeping the provenance the same. For example, given two consistent beliefs Barack_Obama,president_of,US, and Barack_Obama,school_attended,Harvard, we swap president_of with school_attended, keeping the provenance unchanged. This yields inconsistent beliefs associated with corresponding provenance and the correct labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.2"
},
{
"text": "TACRED-KG. The TACRED-KG dataset is a novel adaptation from the existing TACRED (Zhang et al., 2017) relation extraction dataset. TA-CRED is focused on providing data for typical relation extraction systems. As such, it contains 4-tuples (subject, object, provenance sentence, correct relation), where relation extraction systems are expected to predict that relation for the given subject-object pair and the sentence. We turn this relation extraction dataset into a KG-focused dataset. We then used a relation extraction positionaware attention RNN model (Zhang et al., 2017) system on the TACRED data to produce 5-tuples (subject, object, provenance sentence, correct relation, predicted relation). From these we created a provenance-backed KG dataset, TACRED-KG, as (subject, predicted relation, object, provenance sentence). In TACRED-KG, we treat the gold standard relation as the repair label. We consider beliefs consistent when the predicted and gold standard relations are the same. Observational Comparison. We note some qualitative observations about these datasets, though traceable back to how each dataset was constructed. First, TAC 2015 and TAC 2017 contain more provenance examples per relation than TACRED-KG. Second, because the provenance was provided by varied IE systems in TAC 2015/2017, the provenance may be the result of noisy extractions and matching: the provenance for TAC 2015/2017 is often noisier than TACRED-KG (e.g., portions of sentences vs. full sentences).",
"cite_spans": [
{
"start": 80,
"end": 100,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 557,
"end": 577,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.2"
},
{
"text": "3 What Errors Do IE Systems Make?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.2"
},
{
"text": "We begin with an analysis of errors in the beliefs from actual IE systems. This analysis is enlightening, as each system used different approaches and types of resources to extract potential facts. We sampled 600 beliefs and their provenance text each from the training portions of three different knowledge graph datasets: TAC 2015, TAC 2017, and TACRED-KG. As described in \u00a72.2, they all contain provenance-backed beliefs that were extracted from actual IE systems (but ones which are generally not available for subsequent downstream examination). All of the beliefs are represented as a relation between two arguments. The authors manually assessed these according to available and published guidelines (Ellis, 2015a,b; Dang, 2017) to understand the kinds of errors made by the IE systems. We identified four types of errors: the subject (first argument) not present in the provenance text; the object (second argument) not present in the provenance; an insufficiently supported relation between two present arguments; and relations that run afoul of formatting requirements, e.g., misformed dates. We show examples of these in Table 1. Our analysis, summarized in Fig. 2 , found that the most frequent error type is an incorrect relation, followed by missing subject, missing object and (at a trace level) formatting errors. Though it varied based on dataset, approximately two-thirds of the sampled belief-provenance pairs had errors. The prevalence of incorrect relations motivates the importance of the relation repair task. It should be noted that while TAC 2015 and 2017 have a number of instances of missing subjects and objects, this is not the case for TACRED-KG. This illustrates a fundamental difference in selecting provenance information manually vs. automatically, and one that we observe to be experimentally important ( \u00a75.3), between TAC 2015/2017 and TACRED-KG.",
"cite_spans": [
{
"start": 707,
"end": 723,
"text": "(Ellis, 2015a,b;",
"ref_id": null
},
{
"start": 724,
"end": 735,
"text": "Dang, 2017)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1132,
"end": 1140,
"text": "Table 1.",
"ref_id": "TABREF1"
},
{
"start": 1169,
"end": 1175,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2.2"
},
{
"text": "Our approach computes both the consistency of a belief b i and a \"repaired\" belief with respect to a given set of provenance sentences. We represent b i as a triple \u27e8subject i , predicate i , object i \u27e9 and the set of provenance sentences as S i,1 , S i,2 , ...S i,n . The system outputs two discrete predictions: (1) a binary one indicating whether the belief is consistent with the sentences, and (2) a categorical one sug- Figure 3 : Given a belief and a set of n provenance sentences, our framework determines its consistency and suggests a repair when if is deemed inconsistent. Our approach has three main modules: representation (4.1), combination (4.2), and feature learning and classification (4.3).",
"cite_spans": [],
"ref_spans": [
{
"start": 426,
"end": 434,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "gesting a repair. Fig. 3 illustrates our approach for representing and combining the beliefs and provenance sentences to jointly learn the two tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 24,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "Our approach has three main steps: embedding a belief and its provenance sentences in a vector space ( \u00a74.1), combining/aggregating these representations ( \u00a74.2), and using the result for additional feature learning and classification ( \u00a74.3). We describe our loss objective in \u00a74.4. As we show, our framework can be thought of as generalizing high performing credibility models, such as DeClarE (Popat et al., 2018) or LSTM-text (Rashkin et al., 2017).",
"cite_spans": [
{
"start": 396,
"end": 416,
"text": "(Popat et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "We process and tokenize a belief's arguments and relation. For example, the belief \u27e8Barack_Obama, per : president_of, United_States\u27e9 yields a subject span (\"Barack Obama\"), a relation span (\"president of\"), and an object span (\"United States\"). We input processed text through an embedding function f belief to get a single embedding b for the belief. Here, f belief could be average of pretrained word embeddings, or final hidden state obtained from a sequence model (LSTM or Bi-LSTM) or the embedding from a transformer model (e.g., BERT (Devlin et al., 2019) ). As we discuss in \u00a75.2, we experiment with all of these.",
"cite_spans": [
{
"start": 540,
"end": 561,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Belief & Provenance Representation",
"sec_num": "4.1"
},
{
"text": "We represent the provenance sentences at two granularities. The first is by representing each sentence separately. We get a representation s i for each provenance sentence via an embedding function f evidence that embeds and combines them into a single vector. We define f evidence similarly to f belief .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief & Provenance Representation",
"sec_num": "4.1"
},
{
"text": "The second level considers all sentences at the same time. We refer to this as blob-level processing (rather than paragraph-or document-level) since the provenance sentences may come from different documents and we cannot assume any syntactic continuity between sentences. We obtain a representation of the blob from f blob . In principle any method of distilling potentially disjoint text could be used here: we found TF-IDF to be effective, especially as multiple sentences of provenance selectively extracted from different sources could result in lengthy, but non-narratively coherent text (which can be problematic for transformer models).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief & Provenance Representation",
"sec_num": "4.1"
},
{
"text": "Given the belief and provenance representations, we compute their similarity \u03b1 i as the cosine of the angle between their embedded representations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief and Provenance Combination",
"sec_num": "4.2"
},
{
"text": "\u03b1 i = b T i s i ||b i ||\u2022||s i || .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief and Provenance Combination",
"sec_num": "4.2"
},
{
"text": "The intuition is that sentences that are more consistent with the belief will score higher than those which are less. Scoring is important, as each IE system may give multiple provenance sentences (e.g., TAC allowed four). The sentences can be correct and support the belief, or be poorly selected and unsupportive. Higher scores suggest the provenance is related to the belief and helps differentiate supportive from unsupportive provenance. We use the computed similarity scores to combine the provenance representations and take a weighted average as our final input, capturing the semantics of the belief and provenance, as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief and Provenance Combination",
"sec_num": "4.2"
},
{
"text": "x = 1 n i \u03b1 i \u2022 s i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief and Provenance Combination",
"sec_num": "4.2"
},
{
"text": "We pass the created representation x as the input to the feature learning module.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief and Provenance Combination",
"sec_num": "4.2"
},
{
"text": "Though our computation of \u03b1 i and x operate at the sentence-level, our approach can also be applied to individual word representations. For this word-level attention, we replace each sentence representation s i with a word representation w ij in our computation of \u03b1 i and x. While we experimented with this word-level attention we found the model had trouble learning, frequently classifying beliefs nearly all as consistent, or inconsistent with \"no repair.\" We note that a similarly effective word-level attention was provided in DeClarE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief and Provenance Combination",
"sec_num": "4.2"
},
{
"text": "We selected a similarity-based, rather than position-based, attention. Applying position-based attention, as Zhang et al. (2017) did on the TA-CRED dataset, assumes that provenance sentences contain an explicit mention of the subject and object. In our setting that explicitly is not the case (recall the prevalence of missing arguments in our datasets, c.f. Fig. 2 ). There is also an assumption that there is exactly one provenance sentence as opposed to TAC, where an IE system can select up to four provenance sentences without explicitly mentioning either the subject or object.",
"cite_spans": [],
"ref_spans": [
{
"start": 359,
"end": 365,
"text": "Fig. 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Belief and Provenance Combination",
"sec_num": "4.2"
},
{
"text": "Prior to classification we may learn a more targeted representation z by, e.g., passing the combined representation x into a multi-layer perception. If we do not, then the consistency and repair classifiers operate directly on z = x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Learning and Classification",
"sec_num": "4.3"
},
{
"text": "We noticed through development set experiments that while adding additional layers initially helped, using more than three layers marginally decreased performance. For a k-layer MLP we obtained the projections h (j) , for 1 \u2264 j \u2264 k, as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Learning and Classification",
"sec_num": "4.3"
},
{
"text": "h (j) = g W (j) h (j\u22121) + b (j) . h (0) = x indi-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Learning and Classification",
"sec_num": "4.3"
},
{
"text": "cates the input, W (j) and b (j) are each layer's learned weights and biases (respectively), and g is the activation function. Through dev set experimentation we set g to be ReLU (Glorot et al., 2011) . We found the MLP gave better performance ( \u00a75) and that it was parametrically and computationally efficient. We note that the effectiveness of an MLP was also noted by the two top systems from the Fake News Challenge (Hanselowski et al., 2018; Riedel et al., 2017b) for the verification task. On dev, we evaluated from one to five hidden layers and found the performance to be consistent after three layers, with the mean close the scores in Tables 3 and 4 and a maximum standard deviation across all the dataset and evaluation metrics to be less then one F1 point.",
"cite_spans": [
{
"start": 29,
"end": 32,
"text": "(j)",
"ref_id": null
},
{
"start": 179,
"end": 200,
"text": "(Glorot et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 420,
"end": 446,
"text": "(Hanselowski et al., 2018;",
"ref_id": "BIBREF8"
},
{
"start": 447,
"end": 468,
"text": "Riedel et al., 2017b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Learning and Classification",
"sec_num": "4.3"
},
{
"text": "In addition to the learned features learned h (k) , we experiment with a lexically-based skip connection, where the input from the previous layer skips a few layers and is connected to a deeper one. We found this to be effective when making use of \"blob\" level features, computed via f blob . We further found computing f blob as the TF-IDF vector of all provenance text to be especially effective ( \u00a75.5). When using this connection, we compute",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Learning and Classification",
"sec_num": "4.3"
},
{
"text": "z = h (k) , f blob (blob) . If this connection is not used, z = h (k) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Learning and Classification",
"sec_num": "4.3"
},
{
"text": "Classification. We use the final representation z as input to the consistency ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Learning and Classification",
"sec_num": "4.3"
},
{
"text": "We train the parameters using back propagation of both losses, L consistency and L repair , jointly:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Optimization",
"sec_num": "4.4"
},
{
"text": "L = L consistency (y c ,\u0177 c ) + L repair (y r ,\u0177 r ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Optimization",
"sec_num": "4.4"
},
{
"text": "Each subloss is a cross-entropy loss between the true (y c , y r ) and predicted (\u0177 c ,\u0177 r ) responses, weighted inversely proportional to the prevalence of the correct label. The tasks are not independent. In our formulation they share the same provenance and belief representations so learning both tasks jointly helps in learning these shared parameters. 1 While in this paper we present a joint loss objective, we note that we separately experimented with alternative, non-joint approaches to Eq. (1). However, in development we found they performed worse than the joint approach. First we evaluated pipelined approaches, e.g., where the repair classifier also considered the output of the credibility model, but found its performance to be inferior to the joint approach. Second, we also tried using the repair output as input to the credibility classifier, and found that it resulted in high recall with poor precision, with inconsistent instances being classified as consistent. The shared abstract representation of belief and provenance used in our formulation presented above allows fine tuning for both subtasks. We also experimented on dev with other types of weighting, such as a uniform weighting. However, the inversely proportional weighting scheme we describe in the main paper is what performed best on dev experiments.",
"cite_spans": [
{
"start": 358,
"end": 359,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Optimization",
"sec_num": "4.4"
},
{
"text": "A Generalizing Framework. We note that we can represent DeClarE by defining the belief encoder f belief as averaging word embeddings, a provenance encoder f evidence to be a Bi-LSTM, combining these representations with word level attention, and passing them to a two layer MLP without lexical skip connections. To achieve this specialization, we can optimize either L consistency or L repair . Representing LSTM-text is similar. This shows that our framework encompasses prior work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Optimization",
"sec_num": "4.4"
},
{
"text": "We centered our study around four questions, answered throughout \u00a75.3. (1) As our approach subsumes credibility models, can those credibility models also be used for the consistency and/or repair tasks ( \u00a75.3.1)? (2) What features and representations are important for the consistency and repair tasks ( \u00a75.3.2)? (3) How important is it to model the realized (sequential) order of words within the provenance sentences for our tasks ( \u00a75.3.3)? (4) What are the differences between relation repair and extraction ( \u00a75.3.4)? Table 2 provides statistics on the train/dev/test splits. On dev, we tuned hyper-parameters over all the models and datasets, using learning rates from {10 \u22121 ,...,10 \u22125 } by powers of 10, dropout (Srivastava et al., 2014) from {0.0, 0.2}, and L2 regularizing penalty values from {0.0, 0.1..., 0.0001} (powers of 10). We ran each model until convergence or for 20 epochs (whichever came first) with a batch size of 64.",
"cite_spans": [],
"ref_spans": [
{
"start": 523,
"end": 530,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We evaluated the effect of each of the four major components mentioned below. We used Glove (Pennington et al., 2014) as pre-trained word embeddings, except for BERT models, where we used the uncased base model (Devlin et al., 2019) . Representations (Rep.): We evaluated three ways to represent beliefs and provenance text (compute f belief and f evidence ): Bag-of-Words (BoW) embedding which is the average of Glove embeddings, the final output from the LSTM and Bi-LSTM models, and the BERT representation output. While an average of embeddings may seem simple, this approach has empirically performed well on other tasks compared to more complicated models (Iyyer et al., 2015) .",
"cite_spans": [
{
"start": 92,
"end": 117,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 211,
"end": 232,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 662,
"end": 682,
"text": "(Iyyer et al., 2015)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Components",
"sec_num": "5.2"
},
{
"text": "Combining belief & provenance (Comb.): When beliefs and provenance are used, we considered similarity as sentence-level attention (\"Yes (S)\") as well as word-level attention (\"Yes (W)\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Components",
"sec_num": "5.2"
},
{
"text": "Feature Learning (Feat.): In our primary experiments to do further feature learning we used a three layer multi-layer perceptron (\"MLP\") to do further feature learning. We indicate no further feature learning with a value of \"None.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Components",
"sec_num": "5.2"
},
{
"text": "\"Blob\" Sparse Connection (\"Sparse\"): If used, we set f blob to compute either a TF-IDF or binarylexical vector based on the blob (concatenation of all sentences for a belief). This computed representation skips the feature learning component and is provided directly to the classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Components",
"sec_num": "5.2"
},
{
"text": "The overall test results across our three datasets are shown in Table 3 for the consistency task and Table 4 for the repair task. Each of the selected models was, prior to evaluation on the test set, chosen due to its performance on development data. The results are averaged across three runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 64,
"end": 109,
"text": "Table 3 for the consistency task and Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "We first examine and compare our proposed framework against two different strong performing credibility models. These external methods are our baselines and we indicate them in Tables 3 and 4 by \"\u2663\" (Popat et al., 2018) and \"\u2660\" (Rashkin et al., 2017) . We find they both perform poorly compared to other models, indicating that while both tasks learn similar functions the credibility models cannot be used \"as-is\" for consistency. This highlights the fact that the consistency task is sufficiently different from the existing credibility task.",
"cite_spans": [
{
"start": 199,
"end": 219,
"text": "(Popat et al., 2018)",
"ref_id": "BIBREF15"
},
{
"start": 228,
"end": 250,
"text": "(Rashkin et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 177,
"end": 191,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Can Credibility Models be Used?",
"sec_num": "5.3.1"
},
{
"text": "Moreover, in examining whether credibility models transfer to the repair task, word level attention with a Bi-LSTM sentence encoder, as in DeClarE Table 5 : Consistency and repair performance ablation study, averaged over three runs. \"Comb.\" is belief and provenance combination, and \"Skip\" is the use of skip connection. All use an MLP for feature learning. For space, we only consider TAC 2017 in these experiments. (Popat et al., 2018, \u2663) , performs poorly in the repair task too (with one exception on TACRED-KG). These results highlight differences in the credibility vs. consistency tasks, and the applicability of existing credibility models to both consistency and repair, suggesting that a dedicated framework and study such as ours is needed.",
"cite_spans": [
{
"start": 418,
"end": 441,
"text": "(Popat et al., 2018, \u2663)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Can Credibility Models be Used?",
"sec_num": "5.3.1"
},
{
"text": "Consistency: Both sentence attention and a TF-IDF sparse connection improve the overall F1 of our framework's embedding-based models. We noticed that precision and recall vary across the datasets due to their different characteristics. This can be seen with the two methods that rely only on the lexically-based sparse connections (the first two rows of Table 3) : while performance was strong on TACRED-KG consistency, it was quite poor on TAC 2015 and 2017. These latter two datasets have more provenance sentences per belief, and make",
"cite_spans": [],
"ref_spans": [
{
"start": 354,
"end": 362,
"text": "Table 3)",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "What Representations are Effective?",
"sec_num": "5.3.2"
},
{
"text": "fewer assumptions about what must be contained in the provenance. Together, this results in greater lexical variety, which suggests that while non-neural lexical-based consistency approaches can be effective in settings with limited provenance, stronger approaches are needed for greater and more diverse provenance. Learning refined embeddings (rows 5 and 6) suggests that these pre-trained models are helpful in the task. BERT benefits from the less noisy provenance in TACRED-KG. However, similar or slightly better performance is achieved when simple word embeddings are used, especially for TAC 2015/2017, highlighting the difficulty of the consistency task with noisier provenance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What Representations are Effective?",
"sec_num": "5.3.2"
},
{
"text": "Repair: Perhaps surprisingly, an embedding model with a TF-IDF sparse connection yielded good performance. The sparse-based lexical features are most influential, as evident from when just TF-IDF or binary lexical features are used. Looking across the three datasets, we notice that a TF-IDF only model provides a surprisingly strong baseline, outperforming the existing credibility models in almost all cases. Using BoW embedding with sentence attention, MLP feature learning, and a TF-IDF sparse connection, we can surpass a sparseonly TF-IDF approach. The BERT-based representation, fine-tuned or not, performed nearly equally to a BoW embedding on the repair task, indicating both the effectiveness of its pre-trained model and highlighting the difficulty of this repair task. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What Representations are Effective?",
"sec_num": "5.3.2"
},
{
"text": "As indicated by Zhang et al. (2017) , the sentences in TACRED and TAC are long. Consistency and repair models must be able to handle that. Note that BoW representation methods do not consider word order, while LSTM, Bi-LSTM and BERT embeddings do. From Tables 3 and 4, we see that TF-IDF sparse features and a sentence level combination of the belief and provenance give the best performance on both tasks when using a BoW representation, as compared to an LSTM, Bi-LSTM with word attention, and BERT. This indicates that for consistency and repair, unordered lexical features can be sufficient to get better performance.",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "Zhang et al. (2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "How Helpful Is Sequential Modeling?",
"sec_num": "5.3.3"
},
{
"text": "We further examine this in Table 5 , where due to space we focus on TAC 2017. Notice that while sequence-based encodings can improve some aspects (e.g., precision and F1 for consistency), there are not across-the-board improvements. We experimented with replacing the BoW embedding with a sentence-level Bi-LSTM representation. A Bi-LSTM representation with just attention and TF-IDF sparse features gives better consistency precision and F1 compared to BoW embedding approaches. However, the Bi-LSTM results in overall lower performance for repair. While the differences are not very large, they indicate that simple methods can outperform, or perform competitively with, sequential and autoencoding methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "How Helpful Is Sequential Modeling?",
"sec_num": "5.3.3"
},
{
"text": "While the repair task can be viewed as relation re-extraction, we examine the implications of this. Tables. 3 and 4 show a large performance drop for TACRED-KG vs. TAC 2015/2017. First, TA-CRED was created from a TAC dataset and modified and augmented by crowd-sourced workers. When the belief was found with abstract or generalized provenance, workers were shown a set of sentences containing the subject-object pairs and asked to pick the representative sentence which was most specific. Second, each sentence is guaranteed to include the subject and object mentions, which is not always true for TAC 2015 and 2017, where a significant number of TAC provenance sentences were missing one or both the subject and object mentions. This highlights some of the differences in the core assumptions made in the construction of a relation extraction dataset. Fig. 4 demonstrates our framework's performance on some examples from TAC 2015. The first example describes the case where the belief was consistent with the provenance information and there was no recommendation of an alternate relation. Depending on the provenance the fix may not be appropriate, as in the second example of per:title vs. per:religion where we believe an indicative word like \"Islamic\" influenced the repair prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 115,
"text": "Tables. 3 and 4",
"ref_id": "TABREF5"
},
{
"start": 854,
"end": 860,
"text": "Fig. 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Relation Repair vs. Re-Extraction",
"sec_num": "5.3.4"
},
{
"text": "Our results show the strength of attention with lexical features. We further examine the impact of lexical features, using the first four rows of Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 153,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.5"
},
{
"text": "Lexical Impact on Consistency. From the first row of Table 5 , we see BoW embedding for both the belief and provenance results in low precision and recall. While adding attention does not help, using TF-IDF sparse features drastically improves performance. Meanwhile, removing sentencebased attention only has a small impact on performance. All together this indicates the provenance found by the IE system is more lexically systematic.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.5"
},
{
"text": "Lexical Impact on Repair. A similar trend is seen for the repair task: our combined representation with TF-IDF is better than relying only on embeddings. Combining belief and provenance sentences gets slightly better micro overall compared to macro. This affects the MRR score too. However, the best performance is achieved when all components are combined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "5.5"
},
{
"text": "There has been research on determining the consistency of beliefs using either schemas or ensembles, but none that are language-based, do not require access to IE system details, or attempt to repair inconsistent facts. Our work addresses all these.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Studies",
"sec_num": "6"
},
{
"text": "Schema and Ensemble Based approaches: Previous work by Ojha and Talukdar (2017) and Pujara et al. (2013) determined the consistency of the extracted belief using a schema as the side information and coupling constraints to satisfy the schema's axioms. Rather than applying schemas, Yu et al. (2014) proposed an unsupervised method applying linguistic features to filter credible vs. non-credible belief. However, it required access to multiple IE systems with different configuration settings that extracted information from the same text corpus. Viswanathan et al. (2015) used a supervised approach to build a classifier from the confidence scores produced by multiple IE systems for the same belief. These are not standalone systems, as they assume the availability of multiple IE systems.",
"cite_spans": [
{
"start": 55,
"end": 79,
"text": "Ojha and Talukdar (2017)",
"ref_id": "BIBREF12"
},
{
"start": 84,
"end": 104,
"text": "Pujara et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 282,
"end": 298,
"text": "Yu et al. (2014)",
"ref_id": "BIBREF27"
},
{
"start": 547,
"end": 572,
"text": "Viswanathan et al. (2015)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Studies",
"sec_num": "6"
},
{
"text": "Language based approaches: The FEVER (Thorne et al., 2018) fact-checking study proposes a framework for credibility task and performs provenance-based classification without attempting to repair errors. This task has inspired a number of efforts (Yin and Roth, 2018, i.a.,) , including Ma et al. (2019) who tackle a problem similar to our consistency. Guo et al. (2022) outlines additional language-based approaches for consistency prediction (they term it \"verdict prediction\"). However, a crucial difference is that we aim to operate on KG tuple outputs as the belief (not sentences).",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 246,
"end": 273,
"text": "(Yin and Roth, 2018, i.a.,)",
"ref_id": null
},
{
"start": 286,
"end": 302,
"text": "Ma et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 352,
"end": 369,
"text": "Guo et al. (2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Studies",
"sec_num": "6"
},
{
"text": "Overall, our study differs from previous ones in two important ways. (1) We address the problem of determining consistency and potential corrections without access to an underlying semantic schema.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Studies",
"sec_num": "6"
},
{
"text": "(2) Our standalone approach treats the underlying IE systems as blackboxes and requires no access to the original IE systems or detailed system output containing confidence scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Studies",
"sec_num": "6"
},
{
"text": "We propose a task of refining the beliefs produced by a blackbox IE system that provides no access to or knowledge of its internal workings. First we analyze the types of errors made. Then we propose two subtasks: determining the consistency of an extracted belief and its provenance text, and suggesting a repair to fix the belief. We present a modular framework that can use a variety of representation, and learning techniques, and subsumes prior work. This framework provides effective techniques for the consistency and repair tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "See \u00a75 for discussion of alternative losses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would also like to thank the anonymous reviewers for their comments, questions, and suggestions. This material is based in part upon work supported by the National Science Foundation under Grant Nos. IIS-1940931, IIS-2024878, and DGE-2114892. Some experiments were conducted on the UMBC HPCF, supported by the National Science Foundation under Grant No. CNS-1920079.This material is also based on research that is in part supported by the Army Research Laboratory, Grant No. W911NF2120076, and by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003. The U.S.Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either express or implied, of the Air Force Research Laboratory (AFRL), DARPA, or the U.S. Government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Proceedings of the 10th Text Analysis Conference. NIST",
"authors": [],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoa Trang Dang, editor. 2017. Proceedings of the 10th Text Analysis Conference. NIST.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "TAC KBP 2015 assessment guidelines",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Ellis",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Ellis. 2015a. TAC KBP 2015 assessment guidelines. Technical report, Linguistic Data Consortium.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "TAC KBP 2015 slot descriptions",
"authors": [
{
"first": "Joe",
"middle": [],
"last": "Ellis",
"suffix": ""
}
],
"year": 2015,
"venue": "Linguistic Data Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joe Ellis. 2015b. TAC KBP 2015 slot descriptions. Technical report, Linguistic Data Consortium.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Knowlife: a knowledge graph for health and life sciences",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Ernst",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Siu",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2014,
"venue": "30th International Conference on Data Engineering",
"volume": "",
"issue": "",
"pages": "1254--1257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Ernst, Cynthia Meng, Amy Siu, and Gerhard Weikum. 2014. Knowlife: a knowledge graph for health and life sciences. In 30th International Confer- ence on Data Engineering, pages 1254-1257. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "HLTCOE participation in TAC KBP 2017: Cold start TEDL and low-resource EDL",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Finin",
"suffix": ""
},
{
"first": "Dawn",
"middle": [],
"last": "Lawrie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Mayfield",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Mc-Namee",
"suffix": ""
},
{
"first": "Cash",
"middle": [],
"last": "Costello",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Text Analysis Conference (TAC2017). NIST",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Finin, Dawn Lawrie, James Mayfield, Paul Mc- Namee, and Cash Costello. 2017. HLTCOE partic- ipation in TAC KBP 2017: Cold start TEDL and low-resource EDL. In Proceedings of the Text Analy- sis Conference (TAC2017). NIST.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Deep sparse rectifier neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 14th International Conference on Artificial Intelligence and Statistics",
"volume": "",
"issue": "",
"pages": "315--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, pages 315-323.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Michael Schlichtkrull, and Andreas Vlachos. 2022. A Survey on Automated Fact-Checking. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "10",
"issue": "",
"pages": "178--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhijiang Guo, Michael Schlichtkrull, and Andreas Vla- chos. 2022. A Survey on Automated Fact-Checking. Transactions of the Association for Computational Linguistics, 10:178-206.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A retrospective analysis of the fake news challenge stance-detection task",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "Pvs",
"middle": [],
"last": "Avinesh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Caspelherr",
"suffix": ""
},
{
"first": "Debanjan",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1859--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A retrospective analysis of the fake news challenge stance-detection task. In Proceedings of the 27th International Con- ference on Computational Linguistics, pages 1859- 1874. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Deep unordered composition rivals syntactic methods for text classification",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Varun",
"middle": [],
"last": "Manjunatha",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1681--1691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered com- position rivals syntactic methods for text classifica- tion. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), vol- ume 1, pages 1681-1691.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentence-level evidence embedding for claim verification with hierarchical attention networks",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Ma, Wei Gao, Shafiq Joty, and Kam-Fai Wong. 2019. Sentence-level evidence embedding for claim verification with hierarchical attention networks. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neverending learning",
"authors": [
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hruschka",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Betteridge",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Dalvi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kisiel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Lao",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mazaitis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nakashole",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Platanios",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Samadi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Settles",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wijaya",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Saparov",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Greaves",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 29th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, J. Bet- teridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, J. Krishnamurthy, N. Lao, K. Mazaitis, T. Mohamed, N. Nakashole, E. Platanios, A. Ritter, M. Samadi, B. Settles, R. Wang, D. Wijaya, A. Gupta, X. Chen, A. Saparov, M. Greaves, and J. Welling. 2015. Never- ending learning. In Proceedings of the 29th AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "KGEval: Accuracy estimation of automatically constructed knowledge graphs",
"authors": [
{
"first": "Prakhar",
"middle": [],
"last": "Ojha",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2017,
"venue": "Conf. on Empirical Methods in Natural Language Processing. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prakhar Ojha and Partha Talukdar. 2017. KGEval: Accuracy estimation of automatically constructed knowledge graphs. In Conf. on Empirical Methods in Natural Language Processing. ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Joint Models to Refine Knowledge Graphs",
"authors": [
{
"first": "Ankur",
"middle": [],
"last": "Padia",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur Padia. 2019. Joint Models to Refine Knowledge Graphs. Ph.D. thesis, University of Maryland, Balti- more County.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Conf. on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Conf. on Empirical Methods in Natu- ral Language Processing, pages 1532-1543. ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "DeClarE: Debunking fake news and false claims using evidence-aware deep learning",
"authors": [
{
"first": "Kashyap",
"middle": [],
"last": "Popat",
"suffix": ""
},
{
"first": "Subhabrata",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "22--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, and Gerhard Weikum. 2018. DeClarE: Debunking fake news and false claims using evidence-aware deep learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 22-32. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Knowledge graph identification",
"authors": [
{
"first": "Jay",
"middle": [],
"last": "Pujara",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Miao",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2013,
"venue": "Int. Semantic Web Conf",
"volume": "",
"issue": "",
"pages": "542--557",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jay Pujara, Hui Miao, Lise Getoor, and William Co- hen. 2013. Knowledge graph identification. In Int. Semantic Web Conf., pages 542-557. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Overview of the cold start knowlege base construction and slot filling tracks. Slides from the",
"authors": [
{
"first": "Shahzad",
"middle": [],
"last": "Rajput",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shahzad Rajput. 2017. Overview of the cold start knowl- ege base construction and slot filling tracks. Slides from the U.S. National Institute of Standards and Technology.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Truth of varying shades: Analyzing language in fake news and political fact-checking",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Rashkin",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Jin",
"middle": [
"Yea"
],
"last": "Jang",
"suffix": ""
},
{
"first": "Svitlana",
"middle": [],
"last": "Volkova",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2931--2937",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1317"
]
},
"num": null,
"urls": [],
"raw_text": "Hannah Rashkin, Eunsol Choi, Jin Yea Jang, Svitlana Volkova, and Yejin Choi. 2017. Truth of varying shades: Analyzing language in fake news and po- litical fact-checking. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2931-2937. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A simple but tough-to-beat baseline for the fake news challenge stance detection task",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Georgios",
"middle": [
"P"
],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1707.03264"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Riedel, Isabelle Augenstein, Georgios P Sp- ithourakis, and Sebastian Riedel. 2017a. A sim- ple but tough-to-beat baseline for the fake news challenge stance detection task. arXiv preprint arXiv:1707.03264.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A simple but tough-to-beat baseline for the Fake News Challenge stance detection task",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Georgios",
"middle": [
"P"
],
"last": "Spithourakis",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Riedel, Isabelle Augenstein, Georgios P. Sp- ithourakis, and Sebastian Riedel. 2017b. A simple but tough-to-beat baseline for the Fake News Chal- lenge stance detection task. CoRR, abs/1707.03264.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Yago: a core of semantic knowledge",
"authors": [
{
"first": "Fabian",
"middle": [
"M"
],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "697--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697-706. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A simple distant supervision approach for the TAC-KBP slot filling task",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Angel",
"middle": [
"X"
],
"last": "Chang",
"suffix": ""
},
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, David McClosky, Julie Tibshirani, John Bauer, Angel X Chang, Valentin I Spitkovsky, and Christopher D Manning. 2010. A simple dis- tant supervision approach for the TAC-KBP slot filling task. https://nlp.stanford.edu/pubs/kbp2010- slotfilling.pdf.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Christos Christodoulopoulos, and Arpit Mittal",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2018,
"venue": "FEVER: a large-scale dataset for fact extraction and verification",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.05355"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Stacked ensembles of information extractors for knowledgebase population",
"authors": [
{
"first": "Vidhoon",
"middle": [],
"last": "Viswanathan",
"suffix": ""
},
{
"first": "Nazneen",
"middle": [
"Fatema"
],
"last": "Rajani",
"suffix": ""
},
{
"first": "Yinon",
"middle": [],
"last": "Bentor",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vidhoon Viswanathan, Nazneen Fatema Rajani, Yinon Bentor, and Raymond Mooney. 2015. Stacked en- sembles of information extractors for knowledge- base population. In ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "TwoWingOS: A two-wing optimization strategy for evidential claim verification",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Dan Roth. 2018. TwoWingOS: A two-wing optimization strategy for evidential claim verification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The wisdom of minority: Unsupervised slot filling validation based on multi-dimensional truth-finding",
"authors": [
{
"first": "Dian",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Hongzhao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Cassidy",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Chi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Zhi",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Clare",
"middle": [],
"last": "Voss",
"suffix": ""
},
{
"first": "Malik",
"middle": [],
"last": "Magdon-Ismail",
"suffix": ""
}
],
"year": 2014,
"venue": "25th Int. Conf. on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1567--1578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dian Yu, Hongzhao Huang, Taylor Cassidy, Heng Ji, Chi Wang, Shi Zhi, Jiawei Han, Clare Voss, and Malik Magdon-Ismail. 2014. The wisdom of mi- nority: Unsupervised slot filling validation based on multi-dimensional truth-finding. In 25th Int. Conf. on Computational Linguistics, pages 1567-1578.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Position-aware attention and supervised data improve slot filling",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "35--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 35-45.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Error categorization of 600 beliefs extracted by IE systems on three datasets. Multiple categories can apply as beliefs can have incorrect relations and incomplete provenance.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "(\u0177 c = sigmoid (W c z + b c )) and repair classifiers (\u0177 r = softmax (W r z + b r )). The parameters W c and W r have sizes 1 \u00d7 (d tf-idf + d hidden ) and d relations \u00d7 (d tf-idf + d hidden ), respectively. Here d tf-idf , d hidden , and d relations are the dimension of the TF-IDF vector, hidden vector and number of relations considered by the IE systems.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Marty Walsh; org:city_of_headquarters; Neighborhood House Charter School Summary: (\u2713, fixed) Human(C): No; Predicted(C): No; Human(R): org:founded_by; Predicted(R): org:founded_by Provenance: Walsh was a founding board member of Dorchester's Neighborhood House Charter School, and makes clear that he would support lifting the cap on charters in the city, something that hardly wins him the favor of the Boston Teachers Union. Belief: Alan M. Dershowitz; per:title; professor Summary: (\u2717, incorrect_fixed) Human(C): Yes; Predicted(C): No; Human(R): per:title; Predicted(R): per:religion Provenance: Harvard Law professor Alan Dershowitz said Sunday that the Obama administration was naive and had possibly made a \"cataclysmic error of gigantic proportions\" in its deal to ease sanctions on Iran in exchange for an opening up of the Islamic Republic s nuclear program.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Examples of our model's predictions on the TAC 2015 datasets. Human: gold standard label, Predicted: our model's label, C: Consistency, R: Repair, Human(C): Human Consistency label, and Predicted(C): Predicted consistency label. Similarly for repair. Summary indicates overall prediction analysis of example. (\u2713, fixed) means consistency correctly predicted and incorrect belief was fixed.",
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Examples for each of the four identified error categories from the TAC 2015 dataset."
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Dataset statistics, in the number of provenancebacked beliefs, for the train/dev/test splits per dataset."
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>f belief</td><td>f evidence</td><td colspan=\"2\">Comb. Feat. Sparse</td><td>P</td><td>TACRED-KG R F1</td><td>P</td><td>TAC-17 R</td><td>F1</td><td>P</td><td>TAC-15 R</td><td>F1</td></tr><tr><td>None</td><td>None</td><td>No</td><td>None Binary</td><td colspan=\"4\">63.96 83.46 72.42 19.65 5.29</td><td>8.34</td><td colspan=\"2\">28.08 0.81</td><td>1.58</td></tr><tr><td colspan=\"4\">None None TF-None None No \u2660 LSTM No MLP No</td><td colspan=\"2\">42.59</td><td/><td/><td/><td/><td/></tr></table>",
"text": "IDF 63.95 83.24 72.33 57.58 30.66 14.05 22.68 15.08 18.12 66.66 51.98 52.05 30.76 27.78 17.01 9.21 11.95 BoW \u2663 Bi-LSTM Yes (W) MLP No 42.59 66.66 51.98 37.31 52.44 43.54 31.17 36.55 33.65 BERT BERT Yes (S) MLP TF-IDF 66.42 76.26 69.99 48.10 88.56 62.34 51.70 59.69 55.40 BoW BoW Yes (S) MLP TF-IDF 65.99 64.14 65.05 48.09 98.03 63.17 50.83 65.22 57.13"
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>f belief</td><td>f evidence</td><td colspan=\"2\">Comb. Feat. Sparse</td><td colspan=\"9\">TACRED-KG Macro Micro MRR Macro Micro MRR Macro Micro MRR TAC-2017 TAC-2015</td></tr><tr><td>None</td><td>None</td><td>No</td><td>None Binary</td><td>2.16</td><td>41.65</td><td>0.83</td><td>44.86</td><td>53.10</td><td>0.83</td><td>22.78</td><td>16.50</td><td>0.19</td></tr><tr><td>None</td><td>None</td><td>No</td><td>None TF-IDF</td><td>14.50</td><td>43.48</td><td>0.83</td><td>75.49</td><td>76.80</td><td>0.76</td><td>76.35</td><td>77.57</td><td>0.76</td></tr><tr><td colspan=\"4\">None BoW \u2663 Bi-LSTM Yes (W) MLP No MLP \u2660 LSTM BERT BERT Yes (S) MLP TF-IDF No No</td><td>1.87 1.24 4.10</td><td>78.56 52.39 7.72</td><td>0.82 0.8 0.28</td><td>3.05 1.04 72.17</td><td>33.04 32.02 81.85</td><td>0.53 0.43 0.89</td><td>1.46 1.46 54.91</td><td>61.30 61.30 58.61</td><td>0.68 0.66 0.69</td></tr><tr><td>BoW</td><td>BoW</td><td colspan=\"2\">Yes (S) MLP TF-IDF</td><td>7.22</td><td>64.43</td><td>0.74</td><td>76.39</td><td>85.33</td><td>0.91</td><td>65.76</td><td>78.02</td><td>0.86</td></tr></table>",
"text": "Consistency performance (average of 3 runs) from our models (see \u00a75.2 for a detailed explanation of the columns). We indicate existing credibility models with \u2663(Popat et al., 2018) and\u2660 (Rashkin et al., 2017). BoW refers to bag of GLoVE embeddings."
},
"TABREF6": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"3\">fbelief and Comb. Sparse fevidence</td><td>P</td><td>Consistency R</td><td>F1</td><td colspan=\"2\">Repair Macro Micro MRR</td></tr><tr><td>BoW</td><td>No</td><td>No</td><td colspan=\"3\">12.01 33.33 17.65</td><td>0.92</td><td>22.08</td><td>0.38</td></tr><tr><td>BoW</td><td>Yes (S)</td><td>No</td><td colspan=\"3\">12.01 33.33 17.65</td><td>0.89</td><td>21.16</td><td>0.34</td></tr><tr><td>BoW</td><td>No</td><td colspan=\"4\">TF-IDF 47.98 90.75 62.77</td><td>75.71</td><td>85.24</td><td>0.90</td></tr><tr><td colspan=\"6\">BoW Bi-LSTM Yes (S) TF-IDF Yes (S) TF-IDF 48.09 92.03 63.17 59 87.71 70.53</td><td>76.39 75.76</td><td>85.33 83.86</td><td>0.91 0.89</td></tr><tr><td>BERT</td><td colspan=\"5\">Yes (S) TF-IDF 48.11 91.47 63.06</td><td>76.30</td><td>85.25</td><td>0.91</td></tr></table>",
"text": "Repair Performance (averaged over 3 runs) of models with abbreviations as inTable 3."
}
}
}
}