{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:42:36.503280Z"
},
"title": "Simple Compounded-Label Training for Fact Extraction and Verification",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UNC Chapel Hill",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Lisa",
"middle": [],
"last": "Bauer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UNC Chapel Hill",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "UNC Chapel Hill",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic fact checking is an important task motivated by the need for detecting and preventing the spread of misinformation across the web. The recently released FEVER challenge provides a benchmark task that assesses systems' capability for both the retrieval of required evidence and the identification of authentic claims. Previous approaches share a similar pipeline training paradigm that decomposes the task into three subtasks, with each component built and trained separately. Although achieving acceptable scores, these methods induce difficulty for practical application development due to unnecessary complexity and expensive computation. In this paper, we explore the potential of simplifying the system design and reducing training computation by proposing a joint training setup in which a single sequence matching model is trained with compounded labels that give supervision for both sentence selection and claim verification subtasks, eliminating the duplicate computation that occurs when models are designed and trained separately. Empirical results on FEVER indicate that our method: (1) outperforms the typical multi-task learning approach, and (2) gets comparable results to top performing systems with a much simpler training setup and less training computation (in terms of the amount of data consumed and the number of model parameters), facilitating future works on the automatic fact checking task and its practical usage.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic fact checking is an important task motivated by the need for detecting and preventing the spread of misinformation across the web. The recently released FEVER challenge provides a benchmark task that assesses systems' capability for both the retrieval of required evidence and the identification of authentic claims. Previous approaches share a similar pipeline training paradigm that decomposes the task into three subtasks, with each component built and trained separately. Although achieving acceptable scores, these methods induce difficulty for practical application development due to unnecessary complexity and expensive computation. In this paper, we explore the potential of simplifying the system design and reducing training computation by proposing a joint training setup in which a single sequence matching model is trained with compounded labels that give supervision for both sentence selection and claim verification subtasks, eliminating the duplicate computation that occurs when models are designed and trained separately. Empirical results on FEVER indicate that our method: (1) outperforms the typical multi-task learning approach, and (2) gets comparable results to top performing systems with a much simpler training setup and less training computation (in terms of the amount of data consumed and the number of model parameters), facilitating future works on the automatic fact checking task and its practical usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The increasing concern with the spread of misinformation has motivated research regarding automatic fact checking datasets and systems (Pomerleau and Rao, 2017; Hanselowski et al., 2018a; Bast et al., 2017; P\u00e9rez-Rosas et al., 2018; Zhou et al., 2019; Vlachos and Riedel, 2014; Wang, 2017; Our code will be publicly available on our webpage. 2019a,b). The Fact Extraction and VERification (FEVER) dataset (Thorne et al., 2018a) is the most recent large-scale dataset that enables the development of data-driven neural approaches to the automatic fact checking task. Additionally, the FEVER Shared Task (Thorne et al., 2018b) introduced a benchmark, the first of this kind, that is capable of evaluating both evidence retrieval and claim verification.",
"cite_spans": [
{
"start": 135,
"end": 160,
"text": "(Pomerleau and Rao, 2017;",
"ref_id": "BIBREF20"
},
{
"start": 161,
"end": 187,
"text": "Hanselowski et al., 2018a;",
"ref_id": "BIBREF9"
},
{
"start": 188,
"end": 206,
"text": "Bast et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 207,
"end": 232,
"text": "P\u00e9rez-Rosas et al., 2018;",
"ref_id": "BIBREF19"
},
{
"start": 233,
"end": 251,
"text": "Zhou et al., 2019;",
"ref_id": "BIBREF33"
},
{
"start": 252,
"end": 277,
"text": "Vlachos and Riedel, 2014;",
"ref_id": "BIBREF27"
},
{
"start": 278,
"end": 289,
"text": "Wang, 2017;",
"ref_id": "BIBREF28"
},
{
"start": 405,
"end": 427,
"text": "(Thorne et al., 2018a)",
"ref_id": "BIBREF25"
},
{
"start": 602,
"end": 624,
"text": "(Thorne et al., 2018b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several top-ranked approaches on FEVER (Nie et al., 2019a; Yoneda et al., 2018; Hanselowski et al., 2018b) decompose the task into 3 subtasks: document retrieval, sentence selection, and claim verification, and follow a similar pipeline training setup where sub-components are developed and trained sequentially. Although achieving higher scores on benchmarks, pipeline training is timeconsuming and imposes difficulty for fast application development since downstream training relies on data provided by a fully-converged upstream component. The impossibility of parallelization also causes data-inefficiency as training the same input sentence for both sentence selection and claim verification requires twice the computation, whereas humans can learn the task of sentence selection and claim verification jointly.",
"cite_spans": [
{
"start": 39,
"end": 58,
"text": "(Nie et al., 2019a;",
"ref_id": "BIBREF16"
},
{
"start": 59,
"end": 79,
"text": "Yoneda et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 80,
"end": 106,
"text": "Hanselowski et al., 2018b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we simplify the training procedure and increase training efficiency for sentence selection and claim verification by merging redundant components and computation that exist when training the two tasks separately. We propose a joint training setup in which sentence selection and claim verification are tackled by a single neural sequence matching model. This model is trained with a compounded label space in which for a given claim, an input sentence that is labeled as \"NON-SELECT\" for sentence selection module training will also be labeled as \"NOTENOUGHINFO\" for claim verification module training. Similarly, input evidence that is labeled as \"SUPPORTS\" or \"REFUTES\" for claim verification module training will also be labeled as \"SELECT\" for sentence selection module training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To validate our new setup, we compare with the previous pipeline setup and a multi-task learning setup which trains the two tasks alternately. Fig. 1 illustrates differences among these three setups.",
"cite_spans": [],
"ref_spans": [
{
"start": 143,
"end": 149,
"text": "Fig. 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Results indicate that: our method (1) outperforms the multi-task learning setup, and (2) yields comparable results with a top performing pipelinetrained system while consuming less than half the number of data points, reducing the parameter size by one-third, and converging to a functional state much faster than the pipeline-trained system. We argue that the aforementioned design simplification and training acceleration are valuable especially during time-sensitive application development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many of the top performing FEVER 1.0 systems, all achieving greater than 60% FEVER score on the respective leaderboard (Nie et al., 2019a; Yoneda et al., 2018; Hanselowski et al., 2018b) , share the same pipeline training schema in which document retrieval, sentence selection, and claim verification are all trained separately.",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "(Nie et al., 2019a;",
"ref_id": "BIBREF16"
},
{
"start": 139,
"end": 159,
"text": "Yoneda et al., 2018;",
"ref_id": "BIBREF31"
},
{
"start": 160,
"end": 186,
"text": "Hanselowski et al., 2018b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous FEVER Systems",
"sec_num": "2.1"
},
{
"text": "While Nie et al. (2019a) proposed formalizing sentence selection and claim verification as a similar problem, sentence selection and claim verification are still trained separately on the task, which contrasts with our setup. Additionally, Yin and Roth (2018) proposed a hierarchical neural model to tackle both sentence selection and claim verification at the same time, but did not induce computational savings as in our setup.",
"cite_spans": [
{
"start": 6,
"end": 24,
"text": "Nie et al. (2019a)",
"ref_id": "BIBREF16"
},
{
"start": 240,
"end": 259,
"text": "Yin and Roth (2018)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous FEVER Systems",
"sec_num": "2.1"
},
{
"text": "Neural networks have been successfully applied to information retrieval tasks in Natural Language Processing (Huang et al., 2013; Guo et al., 2016; Mitra et al., 2017; Dehghani et al., 2017; Qi et al., 2019; Nie et al., 2019b ) with a focus on relevant retrieval. Information retrieval is generally a relevance-matching task whereas claim verification is a more semantics-intensive task. We consider using a single semantics-focused model to conduct both sentence retrieval and claim verification.",
"cite_spans": [
{
"start": 109,
"end": 129,
"text": "(Huang et al., 2013;",
"ref_id": "BIBREF12"
},
{
"start": 130,
"end": 147,
"text": "Guo et al., 2016;",
"ref_id": "BIBREF8"
},
{
"start": 148,
"end": 167,
"text": "Mitra et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 168,
"end": 190,
"text": "Dehghani et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 191,
"end": 207,
"text": "Qi et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 208,
"end": 225,
"text": "Nie et al., 2019b",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Retrieval",
"sec_num": "2.2"
},
{
"text": "Natural Language Inference (NLI) requires a system to classify the logical relationship between two sentences in which one is the premise and one is the hypothesis. This classifier decides whether the relationship is entailment, contradiction, or neutral. Several large-scale datasets have been created for this purpose, including the Stanford Natural Language Inference Corpus (Bowman et al., 2015) and the Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) . This task can be formalized as a semantic sequence matching task, which bears resemblance to both the sentence retrieval and claim verification tasks.",
"cite_spans": [
{
"start": 378,
"end": 399,
"text": "(Bowman et al., 2015)",
"ref_id": "BIBREF3"
},
{
"start": 454,
"end": 477,
"text": "(Williams et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Natural Language Inference",
"sec_num": "2.3"
},
{
"text": "Multi-task learning (MTL) (Caruana, 1997) has been successfully used to merge Natural Language Processing tasks (Luong et al., 2016; Hashimoto et al., 2017; Dong et al., 2015) for improved performance. Parameter sharing, in particular sharing of certain structures such as label spaces, has been used widely in several NLP tasks for this purpose (Liu et al., 2017; S\u00f8gaard and Goldberg, 2016) . Zhao et al. 2018used a multi-task learning setup for FEVER that shared certain layers between sentence selection and claim verification modules. Augenstein et al. (2018) used shared label spaces in MTL for sequence classification. Following this work, Augenstein et al. (2019) used shared label spaces for automatic fact checking. However, the labels involved in this work were limited to claim verification labels only, and did not incorporate sentence selection as we do in this paper.",
"cite_spans": [
{
"start": 26,
"end": 41,
"text": "(Caruana, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 112,
"end": 132,
"text": "(Luong et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 133,
"end": 156,
"text": "Hashimoto et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 157,
"end": 175,
"text": "Dong et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 346,
"end": 364,
"text": "(Liu et al., 2017;",
"ref_id": "BIBREF13"
},
{
"start": 365,
"end": 392,
"text": "S\u00f8gaard and Goldberg, 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Task Learning",
"sec_num": "2.4"
},
{
"text": "In addition to the FEVER shared task, other recent work in fake news detection has focused on several aspects of data collection and statement verification. Shu et al. (2019b) looked into the role of social context in fake news detection. Additionally, Shu et al. (2019a) also explored creating explainable fake news detection.",
"cite_spans": [
{
"start": 157,
"end": 175,
"text": "Shu et al. (2019b)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fake News Detection",
"sec_num": "2.5"
},
{
"text": "Sentence selection and claim verification can be easily structured as the same sequence matching problem in which the input is a pair of textual sequences and the output is a semantic relationship label for the pair. Nie et al. (2019a) Figure 1 : Different training setups. In the pipeline setup, sentence selection and claim verification models are trained separately. In the multi-task setup, the two tasks are treated separately, but use a single model. In the compounded-label training setup, the training is simplified to a single task by mixing the data of the two tasks and allowing controlled supervision between the two tasks. S, R, NEI, SL, and NSL represent \"SUPPORTS\", \"REFUTES\", \"NOTENOUGHINFO\", \"SELECT\", and \"NON-SELECT\", respectively.",
"cite_spans": [
{
"start": 217,
"end": 235,
"text": "Nie et al. (2019a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 236,
"end": 244,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sequence Matching Model",
"sec_num": "3.1"
},
{
"text": "the same architecture, the neural semantic matching network (NSMN), on the two tasks and showed it was effective on both. Thus, we use the same NSMN model with a modified output layer in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Matching Model",
"sec_num": "3.1"
},
{
"text": "For convenience, we give a description similar to the original paper (Nie et al., 2019a) about the model below.",
"cite_spans": [
{
"start": 69,
"end": 88,
"text": "(Nie et al., 2019a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "Encoding Layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "U = BiLSTM e (U) \u2208 R d 1 \u00d7n (1) H = BiLSTM e (H) \u2208 R d 1 \u00d7m (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "where U \u2208 R d 0 \u00d7n and H \u2208 R d 0 \u00d7m are the two input sequences, d 0 and d 1 are input and output dimensions, and n and m are lengths of the two sequences. Alignment Layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "A =\u016a H \u2208 R n\u00d7m (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "where an element in A [i,j] indicates the alignment score between i-th token in U and j-th token in H. Aligned sequences are computed as:",
"cite_spans": [
{
"start": 22,
"end": 27,
"text": "[i,j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "U =H \u2022 Softmax col (A ) \u2208 R d 1 \u00d7n (4) H =\u016a \u2022 Softmax col (A) \u2208 R d 1 \u00d7m (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "where Softmax col is column-wise softmax,\u0168 is the aligned representation fromH to\u016a and vice versa forH. The aligned and encoded representations are combined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "F = f ([\u016a,\u0168,\u016a \u2212\u0168,\u016a \u2022\u0168]) \u2208 R d 2 \u00d7n (6) G = f ([H,H,H \u2212H,H \u2022H]) \u2208 R d 2 \u00d7m (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "where f is one fully-connected layer with a rectifier as an activation function and \u2022 denotes elementwise multiplication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
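{
"text": "To make the alignment and combination steps concrete, here is a minimal PyTorch sketch of Equations (3)-(7); it is an illustrative sketch, not the released implementation, and it assumes batch-first tensors of shape (batch, length, d_1) and a user-supplied linear layer f (e.g., nn.Linear(4*d_1, d_2)), with torch.nn.functional imported as F_ to avoid clashing with the matrix F:\n\nimport torch\nimport torch.nn.functional as F_\n\ndef align_and_combine(U_bar, H_bar, f):\n    # U_bar: (batch, n, d1); H_bar: (batch, m, d1); f: a fully-connected layer\n    A = torch.bmm(U_bar, H_bar.transpose(1, 2))                       # (batch, n, m), Eq. (3)\n    U_tilde = torch.bmm(F_.softmax(A, dim=2), H_bar)                  # H_bar aligned to U_bar, Eq. (4)\n    H_tilde = torch.bmm(F_.softmax(A, dim=1).transpose(1, 2), U_bar)  # U_bar aligned to H_bar, Eq. (5)\n    F_mat = torch.relu(f(torch.cat([U_bar, U_tilde, U_bar - U_tilde, U_bar * U_tilde], dim=-1)))  # Eq. (6)\n    G_mat = torch.relu(f(torch.cat([H_bar, H_tilde, H_bar - H_tilde, H_bar * H_tilde], dim=-1)))  # Eq. (7)\n    return F_mat, G_mat",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},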
{
"text": "Matching Layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "R = BiLSTM m ([F, U * ]) \u2208 R d 3 \u00d7n (8) S = BiLSTM m ([G, H * ]) \u2208 R d 3 \u00d7m (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "where U * and H * are sub-channels of the input U and H without GloVe, provided to the matching layer via a shortcut connection. Output Layer:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r = Maxpool row (R) \u2208 R d 3 (10) s = Maxpool row (S) \u2208 R d 3 (11) h(r, s, |r \u2212 s|, r \u2022 s) = m",
"eq_num": "(12)"
}
],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
{
"text": "where function h denotes two fully-connected layers with a rectifier being applied on the output of the first layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},
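{
"text": "The matching-layer pooling and the head h of Equations (10)-(12) can be sketched as follows; this is a minimal illustration under the same batch-first assumption, with the hidden size of h left as a free hyper-parameter (an assumption, since the paper does not fix it here):\n\nimport torch\nimport torch.nn as nn\n\nclass OutputHead(nn.Module):\n    def __init__(self, d3, hidden, n_out):\n        super().__init__()\n        # h: two fully-connected layers with a rectifier after the first, Eq. (12)\n        self.h = nn.Sequential(nn.Linear(4 * d3, hidden), nn.ReLU(), nn.Linear(hidden, n_out))\n\n    def forward(self, R, S):\n        # R: (batch, n, d3), S: (batch, m, d3) from the matching BiLSTM\n        r = R.max(dim=1).values   # row-wise max-pool over tokens, Eq. (10)\n        s = S.max(dim=1).values   # Eq. (11)\n        return self.h(torch.cat([r, s, (r - s).abs(), r * s], dim=-1))  # m, Eq. (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Semantic Matching Network (NSMN)",
"sec_num": "3.2"
},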
{
"text": "We propose the following compounded-label output layer for simpler, more efficient training. Given the input pair x i , the NSMN model is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Output Layer",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m = NSMN(x i )",
"eq_num": "(13)"
}
],
"section": "Compounded-Label Output Layer",
"sec_num": "3.3"
},
{
"text": "where m \u2208 R 4 is the output vector of NSMN in which the first three elements correspond to claim verification and the last element to sentence selection. Then, the probabilities are calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Output Layer",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y cv = softmax(m [0:3] )",
"eq_num": "(14)"
}
],
"section": "Compounded-Label Output Layer",
"sec_num": "3.3"
},
{
"text": "y ss = sigmoid(m 3 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Output Layer",
"sec_num": "3.3"
},
{
"text": "where m [0:3] denotes the first three elements of m and y cv \u2208 R 3 denotes the probability of predicting the relation between the input and claim as \"SUPPORTS\", \"REFUTES\", or \"NOTENOUGHINFO\", while m 3 denotes the fourth element of m and y ss \u2208 R indicates the probability of choosing the input as evidence for the claim. This allows us to transfer the model's outputs to predictions in a compact way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Output Layer",
"sec_num": "3.3"
},
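{
"text": "As a concrete illustration of Equations (13)-(15), the following minimal sketch converts the 4-dimensional NSMN output m into the verification probabilities y_cv and the selection probability y_ss, and also stacks them into the 5-dimensional vector used by the training objective in Sec. 3.4; tensor shapes are assumed to be batch-first:\n\nimport torch\n\ndef compounded_outputs(m):\n    # m: (batch, 4); first three logits for SUPPORTS / REFUTES / NOTENOUGHINFO, last one for selection\n    y_cv = torch.softmax(m[:, 0:3], dim=-1)             # Eq. (14)\n    y_ss = torch.sigmoid(m[:, 3:4])                      # Eq. (15)\n    y_hat = torch.cat([y_cv, y_ss, 1.0 - y_ss], dim=-1)  # (batch, 5), the stacked prediction of Eq. (16)\n    return y_cv, y_ss, y_hat",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Output Layer",
"sec_num": "3.3"
},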
{
"text": "In order to simplify the training procedure and increase data efficiency, we introduce compoundedlabel training. Consider the model output vector:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Training",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i = \uf8ee \uf8f0 y cv y ss 1 \u2212 y ss \uf8f9 \uf8fb",
"eq_num": "(16)"
}
],
"section": "Compounded-Label Training",
"sec_num": "3.4"
},
{
"text": "where\u0177 i \u2208 R 5 is the concatenation of y cv and [y ss , 1 \u2212 y ss ] . To optimize the model, we use the entropy objective function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Training",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J = \u2212y i \u2022 log(\u0177 i )",
"eq_num": "(17)"
}
],
"section": "Compounded-Label Training",
"sec_num": "3.4"
},
{
"text": "In a typical classification setup, the ground truth label embedding y i is a one-hot column vector chosen from an identity matrix, where the dimension equals the total number of categories. However, our compounded-label embedding is structured as the matrix with some supervision provided in the zero-area of one-hot embeddings shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Training",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 \u03bb 2 \u03bb 1 \u03bb 1 0 1 0 0 0 0 0 1 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb",
"eq_num": "(18)"
}
],
"section": "Compounded-Label Training",
"sec_num": "3.4"
},
{
"text": "The first 3 columns are label embeddings for \"SUPPORTS\", \"REFUTES\", and \"NOTENOUGHINFO\" in verification and the last 2 columns are the label embeddings for \"SELECT\" and \"NON-SELECT\" in sentence selection, resp. Thus, for a given claim, \"SUPPORTS\" and \"REFUTES\" evidence will also give supervision as positive examples to sentence selection weighted by \u03bb 1 and \"NON-SELECT\" sentences will also give supervision as \"NOTENOUGHINFO\" evidence to claim verification weighted by \u03bb 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Training",
"sec_num": "3.4"
},
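{
"text": "A minimal sketch of compounded-label training: the target for each example is the column of the matrix in Equation (18) corresponding to its compounded label, and the objective of Equation (17) is applied to the stacked prediction from Sec. 3.3. The label ordering, the lookup helper, and the batch averaging below are illustrative assumptions rather than code from the paper; λ_1 = 1 and λ_2 = 0.5 follow the appendix:\n\nimport torch\n\nLABELS = ['SUPPORTS', 'REFUTES', 'NOTENOUGHINFO', 'SELECT', 'NON-SELECT']\n\ndef label_matrix(lambda1=1.0, lambda2=0.5):\n    # Columns are label embeddings in the order above; rows follow [y_cv; y_ss; 1 - y_ss], Eq. (18).\n    return torch.tensor([\n        [1.0, 0.0, 0.0, 0.0, 0.0],\n        [0.0, 1.0, 0.0, 0.0, 0.0],\n        [0.0, 0.0, 1.0, 0.0, lambda2],\n        [lambda1, lambda1, 0.0, 1.0, 0.0],\n        [0.0, 0.0, 0.0, 0.0, 1.0],\n    ])\n\ndef compounded_loss(y_hat, labels, Y):\n    # y_hat: (batch, 5) stacked predictions; labels: list of label strings; Y: the 5x5 matrix above\n    idx = torch.tensor([LABELS.index(l) for l in labels])\n    y_true = Y[:, idx].t()                                 # (batch, 5) target columns\n    return -(y_true * torch.log(y_hat + 1e-12)).sum(dim=-1).mean()  # Eq. (17), averaged over the batch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label Training",
"sec_num": "3.4"
},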
{
"text": "We focused on comparing the following five NSMN 1 training setups for sentence selection and claim verification. We obtain upstream document retrieval data using the method in Nie et al. (2019a) . Training details are in the appendix.",
"cite_spans": [
{
"start": 176,
"end": 194,
"text": "Nie et al. (2019a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Mix. in Same Batch Supv. for Other Task Table 1 : Properties of different training setups. \"Pip.\", \"Mtl.\", \"Mix.\", \"Cmp.\" stand for pipeline, multi-task learning, direct mixing, and compounded-label training setup, respectively. 'Supv.'=Supervision. Table 2 : Final performance, evidence recall, model size, and data consumption (until convergence) for all 5 setups. We measure data consumption as the amount of data the model used for parameter updating, e.g., 10K updates w/ batch size 32 consumes 320K data. 'D.M.'=direct mixing, 'C.L.'=compounded-label, 'MTL.'=multi-task learning, 'Rdc-Pip.'=pipeline w/ reduced size, 'Pip.'=pipeline (Nie et al., 2019a) .",
"cite_spans": [
{
"start": 645,
"end": 664,
"text": "(Nie et al., 2019a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 35,
"end": 53,
"text": "Task Table 1",
"ref_id": null
},
{
"start": 256,
"end": 263,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Shared Parameters",
"sec_num": null
},
{
"text": "Pipeline: We train separate sentence selection and claim verification models as in Nie et al. (2019a) .",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "Nie et al. (2019a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Parameters",
"sec_num": null
},
{
"text": "Multi-task Learning: We follow the neural multitask learning setup called alternate training (Dong et al., 2015; Luong et al., 2016; Hashimoto et al., 2017) , where each batch contains examples from a single task only. We build a single NSMN model for both selection and verification and alternatively optimize the two tasks.",
"cite_spans": [
{
"start": 93,
"end": 112,
"text": "(Dong et al., 2015;",
"ref_id": "BIBREF6"
},
{
"start": 113,
"end": 132,
"text": "Luong et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 133,
"end": 156,
"text": "Hashimoto et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Parameters",
"sec_num": null
},
{
"text": "Direct Mixing: We simply blend the input examples of the two tasks into the same batch, providing additional simplicity over our multi-task learning setup in which batches need to be task-exclusive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Parameters",
"sec_num": null
},
{
"text": "Compounded-Label Training: We also blend the inputs of the two tasks, but counter to direct mixing, we use the compounded-label embedding described in Sec. 3 for optimization and downsample the input examples to reduce training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Parameters",
"sec_num": null
},
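{
"text": "A rough sketch of how the blended training set could be rebuilt each epoch under this setup; the pair format, field names, and helper below are illustrative assumptions, while the downsampling probability p follows the appendix:\n\nimport random\n\ndef build_epoch_data(select_pairs, nonselect_pairs, verification_pairs, p=0.025):\n    # select_pairs / nonselect_pairs: (claim, sentence) pairs labeled SELECT / NON-SELECT;\n    # verification_pairs: (claim, evidence) pairs labeled SUPPORTS, REFUTES, or NOTENOUGHINFO.\n    kept_negatives = [ex for ex in nonselect_pairs if random.random() < p]  # downsample NON-SELECT examples\n    data = select_pairs + kept_negatives + verification_pairs\n    random.shuffle(data)  # one mixed stream of examples for the single joint model\n    return data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Parameters",
"sec_num": null
},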
{
"text": "Reduced Pipeline: This is the same pipeline setup as described above, except that we reduce the model sizes for both sentence selection and verification such that the total model size is equal to all other setups that use only a single joint model. This experiment gives a fair comparison between each of the setups by canceling out the parametersize variance. Table 1 shows a comparison of the first four different setups. 5 Results and Analysis FEVER Score Performance: We observe from Table 2 that compounded-label training outperforms 2 both the multitask learning and direct mixing setups. We speculate that the performance gap is due to the fact that in the multi-task and direct mixing training setups, the same model is trained by separated and different supervisions of two tasks, resulting in oscillation and making it difficult to reach a better global minimum. However, in the compounded-label setup, training the model on one task always gives a subtly-controlled supervision on the other task. This not only applies natural regularization on the targeted task itself, but also pushes the model towards a better state for both tasks. Next, we also show that the compounded-label setup achieves a higher FEVER score than the reduced-pipeline setup (3rd row in Table 2 ), indicating its ability to model the two tasks jointly in a more compact and parameter-efficient way. Although the full pipeline setup gives a slightly higher FEVER score, the compounded-label setup has the advantage of reducing parameter size by one-third, requiring less than half the training computation, and improving the training efficiency (elaborated on in the following subsection). Finally, we also compare recall scores, since this is most related to the FEVER score, as validated by Nie et al. (2019a) . Efficiency: In Fig. 2 , we show the training effi-2 In Table 2 , the improvements of compounded-label over the first three entries are significant with p < 10 \u22125 while the improvement of full pipeline over compounded-label is significant with p < 0.05. Stat. significance was computed on bootstrap test with 100K iterations (Noreen, 1989; Efron and Tibshirani, 1994 ciency of different approaches by tracking performance with the number of data points consumed. 3 Parameter update settings are equal across all experiments and thus show an accurate depiction of the speedup independent of batch size, etc. For fair comparison, there is no FEVER score for the first 22 \u00d7 320K data points in the pipeline setup since these data points are consumed in the separate upstream sentence selection training. The compounded-label training setup exhibits a more stable training curve than the other setups during initial training, and reaches a 60%+ FEVER score after seeing only 1,280K data points. This indicates that the compounded-label setting allows the model to quickly reach a stable and functional state. This is valuable for online learning on streaming data, where the model is trained with real-time human feedback. On the contrary, the performance of the multi-task learning and direct mixing setups fluctuates at a low level during initial training stages, which shows that optimization oscillation makes training difficult in these setups. Blind Test Results: In Table 3 we compare the two setups on the blind test set. Compounded Label achieved 61.65% FEVER score and 66.21% label score (LA) while the pipeline setup got 62.69% and 66.20% for FEVER score and LA, respectively. 
Since the upper bound is dependent on document retrieval quality, we report the upper bound of these scores as 92.42% following Nie et al. (2019a) . Our method was able to yield results comparable to the pipeline model on FEVER score and even higher results on label score, with simpler design, faster convergence and only two-thirds the number of parameters.",
"cite_spans": [
{
"start": 1777,
"end": 1795,
"text": "Nie et al. (2019a)",
"ref_id": "BIBREF16"
},
{
"start": 2122,
"end": 2136,
"text": "(Noreen, 1989;",
"ref_id": "BIBREF18"
},
{
"start": 2137,
"end": 2163,
"text": "Efron and Tibshirani, 1994",
"ref_id": "BIBREF7"
},
{
"start": 2260,
"end": 2261,
"text": "3",
"ref_id": null
},
{
"start": 3609,
"end": 3627,
"text": "Nie et al. (2019a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 361,
"end": 368,
"text": "Table 1",
"ref_id": null
},
{
"start": 488,
"end": 495,
"text": "Table 2",
"ref_id": null
},
{
"start": 1272,
"end": 1279,
"text": "Table 2",
"ref_id": null
},
{
"start": 1813,
"end": 1819,
"text": "Fig. 2",
"ref_id": "FIGREF0"
},
{
"start": 1853,
"end": 1860,
"text": "Table 2",
"ref_id": null
},
{
"start": 3266,
"end": 3273,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Shared Parameters",
"sec_num": null
},
{
"text": "We present a simple compounded-label setup for jointly training sentence selection and claim verification. This setup provides higher training efficiency and lower parameter size while still achieving comparable results to the pipeline approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We remove the external WordNet features from NSMN for simplicity and speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We measure the training efficiency based on the size of data consumed until convergence rather than training time or the full training size because it gives a fair measurement about how fast the model can reach a fully-functional state independent of computational resources and platforms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We observed a failure of convergence when we choose batch size as 32 in multi-task learning settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the reviewers for their helpful comments. This work was supported by DARPA MCS Grant N66001-19-2-4031, and awards from Google and Facebook. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The dimension of final NSMN output vector can be customized depending on the downstream task. In the pipeline setting, multi-task learning setting, and direct mixing setting, m = m + , m \u2212 for sentence selection, where m + \u2208 R is a scalar value indicating the score for selecting the current sentence as evidence and m \u2212 gives the score for discarding it. For claim verification, m = m s , m r , m n , where the elements of the vector denote the score for predicting the three labels, namely SUPPORTS, REFUTES, and NEI, respectively. However, in the compounded-label setting, m \u2208 R 4 and the model is optimized with a compact label embedding described in the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 NSMN Output Layer Modifications",
"sec_num": null
},
{
"text": "This section includes the training details for sentence selection and verification. We use the pageview method in Nie et al. (2019a) to obtain the same upstream document retrieval data for all of our four setups.Pipeline: In the pipeline and the reduced-sizepipeline setup, we use exactly the same training setup as in Nie et al. (2019a) for sentence selection and claim verification.Multi-task Learning: In this setup, we choose batch as 64 and use Adam optimizer with default initial parameters. The mixing ratio for sentence selection and claim verification is set to 1 thus the two tasks are both trained alternately every two batches. As in Nie et al. (2019a), we downsample the training data for the sentence selection task at the beginning of each epoch.",
"cite_spans": [
{
"start": 114,
"end": 132,
"text": "Nie et al. (2019a)",
"ref_id": "BIBREF16"
},
{
"start": 319,
"end": 337,
"text": "Nie et al. (2019a)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 Training Details",
"sec_num": null
},
{
"text": "We use a batch size of 64 and Adam optimizer with default settings. As our two subtasks contain different amounts of training data, we use the data size ratio as the task mixing ratio within each batch. We guarantee that each label is present at least once in each mini-batch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Mixing:",
"sec_num": null
},
{
"text": "We use a batch size of 32 and Adam optimizer with default settings. We downsample the negative examples for sentence selection with the probability of p (this is done at the beginning of every epoch) and randomly mix and shuffle the training data for both sentence selection and claim verification into one input set and train the single model with compounded-label as described in the paper. p is set to be 0.1 at the first epoch and 0.025 otherwise. \u03bb 1 and \u03bb 2 are set to be 1 and 0.5 respectively.Hyper-parameter Selection: In the experiments for multi-task learning, data mixing and compounded-label settings, the batch size is chosen from either 64 or 32 by optimizing final FEVER Score. 4 In multi-task learning, the mixing ratio of sentence selection to claim verification is tuned from {1, 2}. For the compounded-label setting, \u03bb 1 and \u03bb 2 are tuned from {1, 0.9} and {0.45, 0.5} respectively based on the intuition that supporting and refuting sentences can be also treated as positive evidence examples with high confidence while partially relevant sentences that cannot verify the claim can be treated as weakly related evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Compounded-Label:",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Multifc: A real-world multi-domain dataset for evidencebased fact checking of claims",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Christina",
"middle": [],
"last": "Lioma",
"suffix": ""
},
{
"first": "Dongsheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lucas",
"middle": [
"Chaves"
],
"last": "Lima",
"suffix": ""
},
{
"first": "Casper",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hansen",
"suffix": ""
},
{
"first": "Jakob",
"middle": [
"Grue"
],
"last": "Simonsen",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Chris- tian Hansen, and Jakob Grue Simonsen. 2019. Mul- tifc: A real-world multi-domain dataset for evidence- based fact checking of claims. In EMNLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multi-task learning of pairwise sequence classification tasks over disparate label spaces",
"authors": [
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Isabelle Augenstein, Sebastian Ruder, and Anders S\u00f8gaard. 2018. Multi-task learning of pairwise sequence classification tasks over disparate label spaces. In NAACL-HLT.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Overview of the triple scoring task at the wsdm cup 2017. WSDM Cup",
"authors": [
{
"first": "Hannah",
"middle": [],
"last": "Bast",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Buchhold",
"suffix": ""
},
{
"first": "Elmar",
"middle": [],
"last": "Haussmann",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hannah Bast, Bj\u00f6rn Buchhold, and Elmar Haussmann. 2017. Overview of the triple scoring task at the wsdm cup 2017. WSDM Cup.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multitask learning. Machine learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "28",
"issue": "",
"pages": "41--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Neural ranking models with weak supervision",
"authors": [
{
"first": "Mostafa",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Hamed",
"middle": [],
"last": "Zamani",
"suffix": ""
},
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Jaap",
"middle": [],
"last": "Kamps",
"suffix": ""
},
{
"first": "W Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2017,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W Bruce Croft. 2017. Neural rank- ing models with weak supervision. In SIGIR.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multi-task learning for multiple language translation",
"authors": [
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for mul- tiple language translation. In ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "An introduction to the bootstrap",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Efron and Robert J Tibshirani. 1994. An intro- duction to the bootstrap. CRC press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A deep relevance matching model for ad-hoc retrieval",
"authors": [
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yixing",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Qingyao",
"middle": [],
"last": "Ai",
"suffix": ""
},
{
"first": "W Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2016,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In CIKM.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A retrospective analysis of the fake news challenge stance-detection task",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "P",
"middle": [
"V S"
],
"last": "Avinesh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Caspelherr",
"suffix": ""
},
{
"first": "*",
"middle": [],
"last": "Debanjan",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Hanselowski, Avinesh P.V.S., Benjamin Schiller, Felix Caspelherr, Debanjan * Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018a. A retrospective analysis of the fake news challenge stance-detection task. In COLING.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-sentence textual entailment for claim verification",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zile",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Daniil",
"middle": [],
"last": "Sorokin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018b. Multi-sentence textual en- tailment for claim verification. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A joint many-task model: Growing a neural network for multiple nlp tasks",
"authors": [
{
"first": "Kazuma",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazuma Hashimoto, Yoshimasa Tsuruoka, Richard Socher, et al. 2017. A joint many-task model: Grow- ing a neural network for multiple nlp tasks. In EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning deep structured semantic models for web search using clickthrough data",
"authors": [
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Acero",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2013,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adversarial multi-task learning for text classification",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial multi-task learning for text classifica- tion. ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multi-task sequence to sequence learning",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task se- quence to sequence learning. ICLR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to match using local and distributed representations of text for web search",
"authors": [
{
"first": "Bhaskar",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Diaz",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Craswell",
"suffix": ""
}
],
"year": 2017,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In WWW.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Combining fact extraction and verification with neural semantic matching networks",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Haonan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Haonan Chen, and Mohit Bansal. 2019a. Combining fact extraction and verification with neu- ral semantic matching networks. AAAI.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Revealing the importance of semantic retrieval for machine reading at scale",
"authors": [
{
"first": "Yixin",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Songhe",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yixin Nie, Songhe Wang, and Mohit Bansal. 2019b. Revealing the importance of semantic retrieval for machine reading at scale. In 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Computer-intensive methods for testing hypotheses",
"authors": [
{
"first": "",
"middle": [],
"last": "Eric W Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric W Noreen. 1989. Computer-intensive methods for testing hypotheses. Wiley New York.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic detection of fake news",
"authors": [
{
"first": "Ver\u00f3nica",
"middle": [],
"last": "P\u00e9rez-Rosas",
"suffix": ""
},
{
"first": "Bennett",
"middle": [],
"last": "Kleinberg",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Lefevre",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2018,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ver\u00f3nica P\u00e9rez-Rosas, Bennett Kleinberg, Alexandra Lefevre, and Rada Mihalcea. 2018. Automatic de- tection of fake news. In COLING.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Fake news challenge",
"authors": [
{
"first": "Rao",
"middle": [],
"last": "Pomerleau",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pomerleau and Rao. 2017. Fake news challenge.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Answering complex open-domain questions through iterative query generation",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Xiaowen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Mehr",
"suffix": ""
},
{
"first": "Zijian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Xiaowen Lin, Leo Mehr, Zijian Wang, and Christopher D Manning. 2019. Answering complex open-domain questions through iterative query gen- eration. EMNLP.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Beyond news contents: The role of social context for fake news detection",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Shu",
"suffix": ""
},
{
"first": "Suhang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Shu, Suhang Wang, and Huan Liu. 2019b. Beyond news contents: The role of social context for fake news detection. ACM.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Deep multitask learning with low level tasks supervised at lower layers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard and Yoav Goldberg. 2016. Deep multi- task learning with low level tasks supervised at lower layers. In ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "FEVER: a large-scale dataset for fact extraction and verification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018a. FEVER: a large-scale dataset for fact extraction and verification. In NAACL-HLT.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Christos Christodoulopoulos, and Arpit Mittal. 2018b. The fact extraction and verification (fever) shared task",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Oana",
"middle": [],
"last": "Cocarascu",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.10971"
]
},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2018b. The fact extraction and verification (fever) shared task. arXiv preprint arXiv:1811.10971.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Fact checking: Task definition and dataset construction",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL LACSS Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In ACL LACSS Workshop.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "liar, liar pants on fire\": A new benchmark dataset for fake news detection",
"authors": [
{
"first": "William",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Yang Wang. 2017. \"liar, liar pants on fire\": A new benchmark dataset for fake news detection. In ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A broad-coverage challenge corpus for sentence understanding through inference",
"authors": [
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Nangia",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In NAACL- HLT.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Twowingos: A twowing optimization strategy for evidential claim verification",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin and Dan Roth. 2018. Twowingos: A two- wing optimization strategy for evidential claim veri- fication. In EMNLP.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Ucl machine reading group: Four factor framework for fact finding (hexaf)",
"authors": [
{
"first": "Takuma",
"middle": [],
"last": "Yoneda",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuma Yoneda, Jeff Mitchell, Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Ucl machine reading group: Four factor framework for fact find- ing (hexaf). In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "An end-to-end multi-task learning model for fact checking",
"authors": [
{
"first": "Shuai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuai Zhao, Bo Cheng, Hao Yang, et al. 2018. An end-to-end multi-task learning model for fact check- ing. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Gear: Graph-based evidence aggregating and reasoning for fact verification",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Changcheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. Gear: Graph-based evidence aggregating and rea- soning for fact verification. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Model performance for different setups with respect to number of sequence pairs consumed. We only show performance until the consumption of the first 30\u00d7320K data points."
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Performance of systems on blind test results.",
"html": null
}
}
}
}