|
{ |
|
"paper_id": "K17-1034", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:07:32.638887Z" |
|
}, |
|
"title": "Zero-Shot Relation Extraction via Reading Comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Washington", |
|
"location": { |
|
"settlement": "Seattle", |
|
"region": "WA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Minjoon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Washington", |
|
"location": { |
|
"settlement": "Seattle", |
|
"region": "WA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Eunsol", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Washington", |
|
"location": { |
|
"settlement": "Seattle", |
|
"region": "WA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Washington", |
|
"location": { |
|
"settlement": "Seattle", |
|
"region": "WA" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relationextraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.", |
|
"pdf_parse": { |
|
"paper_id": "K17-1034", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relationextraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Relation extraction systems populate knowledge bases with facts from an unstructured text corpus. When the type of facts (relations) are predefined, one can use crowdsourcing (Liu et al., 2016) or distant supervision (Hoffmann et al., 2011) to collect examples and train an extraction model for each relation type. However, these approaches are incapable of extracting relations that were not specified in advance and observed during training. In this paper, we propose an alternative approach for relation extraction, which can potentially extract facts of new types that were neither specified nor observed a priori. Figure 1: Common knowledge-base relations defined by natural-language question templates.", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 193, |
|
"text": "(Liu et al., 2016)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 217, |
|
"end": 240, |
|
"text": "(Hoffmann et al., 2011)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We show that it is possible to reduce relation extraction to the problem of answering simple reading comprehension questions. We map each relation type R(x, y) to at least one parametrized natural-language question q x whose answer is y. For example, the relation educated at(x, y) can be mapped to \"Where did x study?\" and \"Which university did x graduate from?\". Given a particular entity x (\"Turing\") and a text that mentions x (\"Turing obtained his PhD from Princeton\"), a non-null answer to any of these questions (\"Princeton\") asserts the fact and also fills the slot y. Figure 1 illustrates a few more examples.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 577, |
|
"end": 585, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This reduction enables new ways of framing the learning problem. In particular, it allows us to perform zero-shot learning: define new relations \"on the fly\", after the model has already been trained. More specifically, the zero-shot scenario assumes access to labeled data for N relation types. This data is used to train a reading comprehension model through our reduction. However, at test time, we are asked about a previously unseen relation type R N +1 . Rather than providing labeled data for the new relation, we simply list questions that define the relation's slot values. Assuming we learned a good reading comprehension model, the correct values should be extracted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our zero-shot setup includes innovations both in data and models. We use distant supervision for a relatively large number of relations (120) from Wikidata (Vrande\u010di\u0107, 2012) , which are easily gathered in practice via the WikiReading dataset (Hewlett et al., 2016) . We introduce a crowdsourcing approach for gathering and verifying the questions for each relation. This process produced about 10 questions per relation on average, yielding a dataset of over 30,000,000 questionsentence-answer examples in total. Because questions are paired with relation types, not instances, this overall procedure has very modest costs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 173, |
|
"text": "(Vrande\u010di\u0107, 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 264, |
|
"text": "(Hewlett et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The key modeling challenge is that most existing reading-comprehension problem formulations assume the answer to the question is always present in the given text. However, for relation extraction, this premise does not hold, and the model needs to reliably determine when a question is not answerable. We show that a recent state-of-the-art neural approach for reading comprehension (Seo et al., 2016) can be directly extended to model answerability and trained on our new dataset. This modeling approach is another advantage of our reduction: as machine reading models improve with time, so should our ability to extract relations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 383, |
|
"end": 401, |
|
"text": "(Seo et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Experiments demonstrate that our approach generalizes to new paraphrases of questions from the training set, while incurring only a minor loss in performance (4% relative F1 reduction). Furthermore, translating relation extraction to the realm of reading comprehension allows us to extract a significant portion of previously unseen relations, from virtually zero to an F1 of 41%. Our analysis suggests that our model is able to generalize to these cases by learning typing information that occurs across many relations (e.g. the answer to \"Where\" is a location), as well as detecting relation paraphrases to a certain extent. We also find that there are many feasible cases that our model does not quite master, providing an interesting challenge for future work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We are interested in a particularly harsh zero-shot learning scenario: given labeled examples for N relation types during training, extract relations of a new type R N +1 at test time. The only information we have about R N +1 are parametrized questions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "This setting differs from prior art in relation extraction. Bronstein et al. (2015) explore a similar zero-shot setting for event-trigger identification, in which R N +1 is specified by a set of trigger words at test time. They generalize by measuring the similarity between potential triggers and the given seed set using unsupervised methods. We focus instead on slot filling, where questions are more suitable descriptions than trigger words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 83, |
|
"text": "Bronstein et al. (2015)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Open information extraction (open IE) (Banko et al., 2007 ) is a schemaless approach for extracting facts from text. While open IE systems need no relation-specific training data, they often treat different phrasings as different relations. In this work, we hope to extract a canonical slot value independent of how the original text is phrased.", |
|
"cite_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 57, |
|
"text": "(Banko et al., 2007", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Universal schema (Riedel et al., 2013) represents open IE extractions and knowledge-base facts in a single matrix, whose rows are entity pairs and columns are relations. The redundant schema (each knowledge-base relation may overlap with multiple natural-language relations) enables knowledge-base population via matrix completion techniques. Verga et al. (2017) predict facts for entity pairs that were not observed in the original matrix; this is equivalent to extracting seen relation types with unseen entities (see Section 6.1). Rockt\u00e4schel et al. (2015) and Demeester et al. (2016) use inference rules to predict hidden knowledge-base relations from observed naturallanguage relations. This setting is akin to generalizing across different manifestations of the same relation (see Section 6.2) since a natural-language description of each target relation appears in the training data. Moreover, the information about the unseen relations is a set of explicit inference rules, as opposed to implicit natural-language questions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 38, |
|
"text": "(Riedel et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 362, |
|
"text": "Verga et al. (2017)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 559, |
|
"text": "Rockt\u00e4schel et al. (2015)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 587, |
|
"text": "Demeester et al. (2016)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our zero-shot scenario, in which no manifestation of the test relation is observed during training, is substantially more challenging (see Section 6.3). In universal-schema terminology, we add a new empty column (the target knowledgebase relation), plus a few new columns with a single entry each (reflecting the textual relations in the sentence). These columns share no entities with existing columns, making the rest of the matrix irrelevant. To fill the empty column from the others, we match their descriptions. Toutanova et al. (2015) proposed a similar approach that decomposes natural-language relations and computes their similarity in a universal schema setting; however, they did not extend their method to knowledge-base relations, nor did they attempt to recover out-of-schema relations as we do.", |
|
"cite_spans": [ |
|
{ |
|
"start": 517, |
|
"end": 540, |
|
"text": "Toutanova et al. (2015)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We consider the slot-filling challenge in relation extraction, in which we are given a knowledgebase relation R, an entity e, and a sentence s. For example, consider the relation occupation, the entity \"Steve Jobs\", and the sentence \"Steve Jobs was an American businessman, inventor, and industrial designer\". Our goal is to find a set of text spans A in s for which R(e, a) holds for each a \u2208 A. In our example, A = {businessman, inventor, industrial designer}. The empty set is also a valid answer (A = \u2205) when s does not contain any phrase that satisfies R(e, ?). We observe that given a natural-language question q that expresses R(e, ?) (e.g. \"What did Steve Jobs do for a living?\"), solving the reading comprehension problem of answering q from s is equivalent to solving the slot-filling challenge.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The challenge now becomes one of querification: translating R(e, ?) into q. Rather than querify R(e, ?) for every entity e, we propose a method of querifying the relation R. We treat e as a variable x, querify the parametrized query R(x, ?) (e.g. occupation(x, ?)) as a question template q x (\"What did x do for a living?\"), and then instantiate this template with the relevant entities, creating a tailored natural-language question for each entity e (\"What did Steve Jobs do for a living?\"). This process, schema querification, is by an order of magnitude more efficient than querifying individual instances because annotating a relation type automatically annotates all of its instances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Applying schema querification to N relations from a pre-existing relation-extraction dataset converts it into a reading-comprehension dataset. We then use this dataset to train a readingcomprehension model, which given a sentence s and a question q returns a set of text spans A within s that answer q (to the best of its ability).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the zero-shot scenario, we are given a new relation R N +1 (x, y) at test-time, which was neither specified nor observed beforehand. For example, the deciphered(x, y) relation, as in \"Turing and colleagues came up with a method for efficiently deciphering the Enigma\", is too domainspecific to exist in common knowledge-bases. We then querify R N +1 (x, y) into q x (\"Which code did x break?\") or q y (\"Who cracked y?\"), and run our reading-comprehension model for each sentence in the document(s) of interest, while instantiating the question template with different entities that might participate in this relation. 1 Each time the model returns a non-null answer a for a given question q e , it extracts the relation R N +1 (e, a).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Ultimately, all we need to do for a new relation is define our information need in the form of a question. 2 Our approach provides a naturallanguage API for application developers who are interested in incorporating a relation-extraction component in their programs; no linguistic knowledge or pre-defined schema is needed. To implement our approach, we require two components: training data and a reading-comprehension model. In Section 4, we construct a large relationextraction dataset and querify it using an efficient crowdsourcing procedure. We then adapt an existing state-of-the-art reading-comprehension model to suit our problem formulation (Section 5).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "To collect reading-comprehension examples as in Figure 2 , we first gather labeled examples for the task of relation-slot filling. Slot-filling examples are similar to reading-comprehension examples, but contain a knowledge-base query R(e, ?) instead of a natural-language question; e.g. spouse(Angela Merkel, ?) instead of \"Who is Angela Merkel married to?\". We collect many slot-filling examples via distant supervision, and then convert their queries into natural language.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 48, |
|
"end": 56, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We use the WikiReading dataset (Hewlett et al., 2016) to collect labeled slot-filling examples. WikiReading was collected by aligning each Wikidata (Vrande\u010di\u0107, 2012) relation R(e, a) with the corresponding Wikipedia article D for the entity e, under the reasonable assumption that the relation can be derived from the article's text. Each instance in this dataset contains a relation R, an entity e, a document D, and an answer a. We used distant supervision to select the specific sentences in which each R(e, a) manifests. Specifically, we took the first sentence s in D to contain both e and a. We then grouped instances by R, e, and s to merge all the answers for R(e, ?) given s into one answer set A. Figure 2 : Examples from our reading-comprehension dataset. Each instance contains a relation R, a question q, a sentence s, and an answer set A. The question explicitly mentions an entity e, which also appears in s. For brevity, answers are underlined instead of being displayed in a separate column.", |
|
"cite_spans": [ |
|
{ |
|
"start": 31, |
|
"end": 53, |
|
"text": "(Hewlett et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 165, |
|
"text": "(Vrande\u010di\u0107, 2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 707, |
|
"end": 715, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
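The distant-supervision step described above reduces to a simple selection-and-grouping pass over WikiReading instances. The following is a minimal sketch under stated assumptions (pre-split sentences, exact string containment for matching e and a); the function name and tuple layout are illustrative, not the authors' released code.

```python
from collections import defaultdict

# Sketch of the distant-supervision step described above (illustrative, not the authors' code).
# Each WikiReading instance provides a relation R, an entity e, a document D, and an answer a.
# We keep the first sentence of D that contains both e and a, then group by (R, e, sentence)
# so that all answers for R(e, ?) found in the same sentence form one answer set A.
# Assumes the document has already been split into sentence strings.

def build_slot_filling_examples(instances):
    """instances: iterable of (relation, entity, sentences, answer) tuples."""
    grouped = defaultdict(set)  # (relation, entity, sentence) -> answer set A
    for relation, entity, sentences, answer in instances:
        for sentence in sentences:
            if entity in sentence and answer in sentence:
                grouped[(relation, entity, sentence)].add(answer)
                break  # only the first matching sentence is used
    return grouped
```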
|
{ |
|
"text": "(1) The wine is produced in the X region of France.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) X, the capital of Mexico, is the most populous city in North America.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) X is an unincorporated and organized territory of the United States.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(4) The X mountain range stretches across the United States and Canada. Schema Querification Crowdsourcing querification at the schema level is not straightforward, because the task has to encourage workers to (a) figure out the relation's semantics (b) be lexicallycreative when asking questions. We therefore apply a combination of crowdsourcing tactics over two Mechanical Turk annotation phases: collection and verification. For each relation R, we present the annotator with 4 example sentences, where the entity e in each sentence s is masked by the variable x. In addition, we underline the extractable answers a \u2208 A that appear in s (see Figure 3 ). The annotator must then come up with a question about x whose answer, given each sentence s, is the underlined span within that sentence. For example, \"In which country is x?\" captures the exact set of answers for each sentence in Figure 3 . Asking a more general question, such as \"Where is x?\" might return false positives (\"North America\" in sentence 2).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 646, |
|
"end": 654, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 889, |
|
"end": 897, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each worker produced 3 different question templates for each example set. For each relation, we sampled 3 different example sets, and hired 3 different annotators for each set. We ran one instance of this annotation phase where the workers were also given, in addition to the example set, the name of the relation (e.g. country), and another instance where it was hidden. Out of a potential 54 question templates, 40 were unique on average.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the verification phase, we measure the question templates' quality by sampling additional sentences and instantiating each question template with the example entity e. Annotators are then asked to answer the question from the sentence s, or mark it as unanswerable; if the annotators' an-swers match A, the question template is valid. We discarded the templates that were not answered correctly in the majority of the examples (6/10). 3 Overall, we applied schema querification to 178 relations that had at least 100 examples each (accounting for 99.77% of the data), costing roughly $1,250. After the verification phase, we were left with 1,192 high-quality question templates spanning 120 relations. 4 We then join these templates with our slot-filling dataset along relations, instantiating each template q x with its matching entities. This process yields a reading-comprehension dataset of over 30,000,000 examples, where each instance contains the original relation R (unobserved by the machine), a question q, a sentence s, and the set of answers A (see Figure 2 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 438, |
|
"end": 439, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 705, |
|
"end": 706, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1064, |
|
"end": 1072, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Negative Examples To support relation extraction, our dataset deviates from recent reading comprehension formulations (Hermann et al., 2015; Rajpurkar et al., 2016) , and introduces negative examples -question-sentence pairs that have no answers (A = \u2205). Following the methodology of InfoboxQA (Morales et al., 2016) , we generate negative examples by matching (for the same entity e) a question q that pertains to one relation with a sentence s that expresses another relation. We also assert that the sentence does not contain the answer to q. For instance, we match \"Who is Angela Merkel married to?\" with a sentence about her occupation: \"Angela Merkel is a German politician who is currently the Chancellor of Germany\". This process generated over 2 million negative examples. While this is a relatively naive method of generating negative examples, our analysis shows that about a third of negative examples contain good distractors (see Section 7).", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 140, |
|
"text": "(Hermann et al., 2015;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 164, |
|
"text": "Rajpurkar et al., 2016)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 316, |
|
"text": "(Morales et al., 2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
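The negative-example heuristic described above can be sketched as follows. This is an illustrative sketch, not the authors' pipeline: the dictionary keys and the exact substring test for "the sentence does not contain the answer" are assumptions.

```python
# Illustrative sketch of the negative-example heuristic (field names are assumptions).
# For the same entity e, a question belonging to one relation is paired with a sentence
# that expresses a different relation, and the pair is kept only if none of the question's
# known answers occur in that sentence, yielding an unanswerable example (A = empty set).

def generate_negatives(examples):
    """examples: iterable of dicts with keys 'entity', 'relation', 'question',
    'sentence', and 'answers' (a set of gold answer spans)."""
    by_entity = {}
    for ex in examples:
        by_entity.setdefault(ex["entity"], []).append(ex)

    negatives = []
    for group in by_entity.values():
        for q_ex in group:      # supplies the question
            for s_ex in group:  # supplies the sentence
                if q_ex["relation"] == s_ex["relation"]:
                    continue
                if any(ans in s_ex["sentence"] for ans in q_ex["answers"]):
                    continue    # the sentence accidentally answers the question; skip
                negatives.append({
                    "question": q_ex["question"],
                    "sentence": s_ex["sentence"],
                    "answers": set(),  # empty set marks an unanswerable pair
                })
    return negatives
```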
|
{ |
|
"text": "Discussion Some recent QA datasets were collected by expressing knowledge-base assertions in natural language. The Simple QA dataset (Bordes et al., 2015) was created by annotating questions about individual Freebase facts (e.g. educated at(T uring, P rinceton)), collecting roughly 100,000 natural-language questions to support QA against a knowledge graph. Morales et al. (2016) used a similar process to collect questions from Wikipedia infoboxes, yielding the 15,000-example InfoboxQA dataset. For the task of identifying predicate-argument structures, QA-SRL (He et al., 2015) was proposed as an open schema for semantic roles, in which the relation between an argument and a predicate is expressed as a natural-language question containing the predicate (\"Where was someone educated?\") whose answer is the argument (\"Princeton\"). The authors collected about 19,000 question-answer pairs from 3,200 sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 154, |
|
"text": "(Bordes et al., 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 359, |
|
"end": 380, |
|
"text": "Morales et al. (2016)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 581, |
|
"text": "(He et al., 2015)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In these efforts, the costs scale linearly in the number of instances, requiring significant investments for large datasets. In contrast, schema querification can generate an enormous amount of data for a fraction of the cost by labeling at the relation level; as evidence, we were able to generate a dataset 300 times larger than Simple QA. To the best of our knowledge, this is the first robust method for collecting a question-answering dataset by crowd-annotating at the schema level.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Slot-Filling Data", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given a sentence s and a question q, our algorithm either returns an answer span 5 a within s, or indicates that there is no answer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The task of obtaining answer spans to naturallanguage questions has been recently studied on the SQuAD dataset (Rajpurkar et al., 2016; Xiong et al., 2016; Lee et al., 2016; Wang et al., 2016) . In SQuAD, every question is answerable from the text, which is why these models assume that there exists a correct answer span. Therefore, we modify an existing model in a way that allows it to decide whether an answer exists. We first give a high-level description of the original model, and then describe our modification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 111, |
|
"end": 135, |
|
"text": "(Rajpurkar et al., 2016;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 136, |
|
"end": 155, |
|
"text": "Xiong et al., 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 156, |
|
"end": 173, |
|
"text": "Lee et al., 2016;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 174, |
|
"end": 192, |
|
"text": "Wang et al., 2016)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We start from the BiDAF model (Seo et al., 2016) , whose input is two sequences of words: a sentence s and a question q. The model predicts the start and end positions y start , y end of the answer span in s. BiDAF uses recurrent neural networks to encode contextual information within s and q alongside an attention mechanism to align parts of q with s and vice-versa.", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 48, |
|
"text": "(Seo et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The outputs of the BiDAF model are the confidence scores of y start and y end , for each potential start and end. We denote these scores as z start , z end \u2208 R N , where N is the number of words in the sentence s. In other words, z start i indicates how likely the answer is to start at position i of the sentence (the higher the more likely); similarly, z end i indicates how likely the answer is to end at that index. Assuming the answer exists, we can transform these confidence scores into pseudo-probability distributions p start , p end via softmax. The probability of each i-to-j-span of the context can therefore be defined by:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "P (a = s i...j ) = p start i p end j (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "where p i indicates the i-th element of the vector p i , i.e. the probability of the answer starting at i. Seo et al. (2016) obtain the span with the highest probability during post-processing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 107, |
|
"end": 124, |
|
"text": "Seo et al. (2016)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "To allow the model to signal that there is no answer, we concatenate a trainable bias b to the end of both confidences score vectors z start , z end . The new score vectorsz start ,z end \u2208 R N +1 are defined asz start = [z start ; b] and similarly for z end , where [; ] indicates row-wise concatenation. Hence, the last elements ofz start andz end indicate the model's confidence that the answer has no start or end, respectively. We apply softmax to these augmented vectors to obtain pseudo-probability distributions,p start ,p end . This means that the probability the model assigns to a null answer is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P (a = \u2205) =p start N +1p end N +1 .", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "If P (a = \u2205) is higher than the probability of the best span, arg max i,j\u2264N P (a = s i...j ), then the model deems that the question cannot be answered from the sentence. Conceptually, adding the bias enables the model to be sensitive to the absolute values of the raw confidence scores z start , z end . We are essentially setting and learning a threshold b that decides whether the model is sufficiently confident of the best candidate answer span. While this threshold provides us with a dynamic per-example decision of whether the instance is answerable, we can also set a global confidence threshold p min ; if the best answer's confidence is below that threshold, we infer that there is no answer. In Section 6.3 we use this global threshold to get a broader picture of the model's performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Model", |
|
"sec_num": "5" |
|
}, |
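The following is a minimal NumPy sketch of the no-answer mechanism described in this section: the raw start/end confidence scores are augmented with a learned bias b, softmax yields the augmented distributions, and the best span's probability is compared against the null probability from Equation (2). The scalar b and the brute-force span search are illustrative simplifications, not the trained BiDAF implementation.

```python
import numpy as np

# Minimal sketch of the no-answer extension described above (not the released implementation).
# z_start, z_end are the raw BiDAF confidence scores over the N sentence positions, and b is
# the learned no-answer bias; in the real model b is trained jointly with the network.

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_span_or_null(z_start, z_end, b):
    """Return (i, j) for the most probable answer span, or None when the
    null probability of Equation (2) exceeds the best span's probability."""
    p_start = softmax(np.append(z_start, b))  # augmented to length N + 1
    p_end = softmax(np.append(z_end, b))

    n = len(z_start)
    best_prob, best_span = -1.0, None
    for i in range(n):                # brute-force search over spans with i <= j
        for j in range(i, n):
            prob = p_start[i] * p_end[j]
            if prob > best_prob:
                best_prob, best_span = prob, (i, j)

    p_null = p_start[n] * p_end[n]    # probability assigned to the null answer
    return None if p_null > best_prob else best_span
```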
|
{ |
|
"text": "To understand how well our method can generalize to unseen data, we design experiments for unseen entities (Section 6.1), unseen question templates (Section 6.2), and unseen relations (Section 6.3).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Evaluation Metrics Each instance is evaluated by comparing the tokens in the labeled answer set with those of the predicted span. 6 Precision is the true positive count divided by the number of times the system returned a non-null answer. Recall is the true positive count divided by the number of instances that have an answer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
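As a concrete reading of the precision/recall definitions above, here is a small bookkeeping sketch. It uses exact membership in the gold answer set for simplicity; the paper's actual comparison is at the token level (ignoring case, articles, and punctuation, per footnote 6), which would replace the membership test.

```python
# Bookkeeping sketch for the precision/recall definitions above (illustrative simplification:
# exact membership in the gold set instead of the paper's token-overlap comparison).

def precision_recall(predictions, gold_answer_sets):
    """predictions: list of predicted spans, with None meaning the system returned no answer.
    gold_answer_sets: list of sets of gold spans; an empty set means no answer exists."""
    true_pos = sum(1 for pred, gold in zip(predictions, gold_answer_sets)
                   if pred is not None and pred in gold)
    n_returned = sum(1 for pred in predictions if pred is not None)
    n_answerable = sum(1 for gold in gold_answer_sets if gold)

    precision = true_pos / n_returned if n_returned else 0.0
    recall = true_pos / n_answerable if n_answerable else 0.0
    return precision, recall
```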
|
{ |
|
"text": "Hyperparameters In our experiments, we initialized word embeddings with GloVe (Pennington et al., 2014), and did not fine-tune them. The typical training set was an order of 1 million examples, for which 3 epochs were enough for convergence. All training sets had a ratio of 1:1 positive and negative examples, which was chosen to match the test sets' ratio.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Comparison Systems We experiment with several variants of our model. In KB Relation, we feed our model a relation indicator (e.g. R 17 ) instead of a question. We expect this variant to generalize reasonably well to unseen entities, but fail on unseen relations. The second variant (NL Relation) uses the relation's name (as a naturallanguage expression) instead of a question (e.g. educated at as \"educated at\"). We also consider a weakened version of our querification approach (Single Template) where, during training, only one question template per relation is observed. The full variant of our model, Multiple Templates, is trained on a more diverse set of questions. We expect this variant to have significantly better paraphrasing abilities than Single Template.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We also evaluate how asking about the same relation in multiple ways improves performance (Question Ensemble). We create an ensemble by sampling 3 questions per test instance and predicting the answer for each. We then choose the answer with the highest sum of confidence scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
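A small sketch of the Question Ensemble procedure described above: sample a few questions for the test instance, collect an answer and a confidence score from each, and return the candidate with the highest summed confidence. The reader(question, sentence) interface returning an (answer, confidence) pair is an assumption for illustration, not the authors' API.

```python
import random
import re
from collections import defaultdict

# Sketch of the Question Ensemble described above. The reader(question, sentence) callable,
# assumed to return an (answer_or_None, confidence) pair, is a stand-in for the trained model.

def ensemble_answer(templates, entity, sentence, reader, k=3):
    """Sample k question templates, instantiate them with the entity, and return the
    candidate answer with the highest sum of confidence scores."""
    scores = defaultdict(float)
    for template in random.sample(templates, min(k, len(templates))):
        question = re.sub(r"\bx\b", entity, template)  # fill the template variable x
        answer, confidence = reader(question, sentence)
        scores[answer] += confidence
    return max(scores, key=scores.get)  # may be None if null predictions dominate
```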
|
{ |
|
"text": "In addition to our model, we compare three other systems. The first is a random baseline that chooses a named entity in the sentence that does not appear in the question (Random NE). We also reimplement the RNN Labeler that was shown to have good results on the extractive portion of WikiReading (Hewlett et al., 2016) . Lastly, we retrain an off-the-shelf relation extraction system (Miwa and Bansal, 2016) , which has shown promising results on a number of benchmarks. This system (and many like it) represents relations as indicators, and cannot extract unseen relations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 296, |
|
"end": 318, |
|
"text": "(Hewlett et al., 2016)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 384, |
|
"end": 407, |
|
"text": "(Miwa and Bansal, 2016)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We show that our reading-comprehension approach works well in a typical relation-extraction setting by testing it on unseen entities and texts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unseen Entities", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Setup We partitioned our dataset along entities in the question, and randomly clustered each entity into one of three groups: train, dev, or test. For instance, Alan Turing examples appear only in training, while Steve Jobs examples are exclusive to test. We then sampled 1,000,000 examples for train, 1,000 for dev, and 10,000 for test. This partition also ensures that the sentences at test time are different from those in train, since the sentences are gathered from each entity's Wikipedia article.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unseen Entities", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Results Table 1 shows that our model generalizes well to new entities and texts, with little variance in performance between KB Relation, NL Relation, Multiple Templates, and Question Ensemble. Single Template performs significantly worse than these variants; we conjecture that simpler relation descriptions (KB Relation & NL Relation) allow for easier parameter tying across different examples, whereas learning from multiple questions allows the model to acquire important paraphrases. All variants of our model outperform off-the-shelf relation extraction systems (RNN Labeler and Miwa & Bansal) in this setting, demonstrating that reducing relation extraction to reading comprehension is indeed a viable approach for our Wikipedia slot-filling task. An analysis of 50 examples that Multiple Templates mispredicted shows that 36% of errors can be attributed to annotation errors (chiefly missing entries in Wikidata), and an additional 42% result from inaccurate span selection (e.g. \"8 February 1985 (e.g. \"8 February \" instead of \"1985 , for which our model is fully penalized. In total, only 18% of our sample were pure system errors, suggesting that our model is very close to the performance ceiling of this setting (slightly above 90% F1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 982, |
|
"end": 1004, |
|
"text": "(e.g. \"8 February 1985", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1005, |
|
"end": 1041, |
|
"text": "(e.g. \"8 February \" instead of \"1985", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 15, |
|
"text": "Table 1", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Unseen Entities", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "We test our method's ability to generalize to new descriptions of the same relation, by holding out a question template for each relation during training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unseen Question Templates", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Setup We created 10 folds of train/dev/test samples of the data, in which one question template for each relation was held out for the test set, and another for the development set. For instance, \"What did x do for a living?\" may appear only in the training set, while \"What is x's job?\" is exclusive to the test set. Each split was stratified by sampling N examples per question template (N = 1000, 10, 50 for train, dev, test, respectively). This process created 10 training sets of 966,000 examples with matching development and test sets of 940 and 4,700 examples each.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unseen Question Templates", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We trained and tested Multiple Templates on each one of the folds, yielding performance on unseen templates. We then replicated the existing test sets and replaced the unseen question templates with templates from the training set, yielding performance on seen templates. Revisiting our example, we convert test-set occurrences of \"What is x's job?\" to \"What did x do for a living?\". Results Table 2 shows that our approach is able to generalize to unseen question templates.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 392, |
|
"end": 399, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Unseen Question Templates", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Our system's performance on unseen questions is nearly as strong as for previously observed templates (losing roughly 3.5 points in F1).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unseen Question Templates", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "We examine a pure zero-shot setting, where testtime relations are unobserved during training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unseen Relations", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Setup We created 10 folds of train/dev/test samples, partitioned along relations: 84 relations for train, 12 dev, and 24 test. For example, when educated at is allocated to test, no educated at examples appear in train. Using stratified sampling of relations, we created 10 training sets of 840,000 examples each with matching dev and test sets of 600 and 12,000 examples per fold.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unseen Relations", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "Results Table 3 shows each system's performance; Figure 4 extends these results for variants of our model by applying a global threshold on the answers' confidence scores to generate precision/recall curves (see Section 5). As expected, representing knowledge-base relations as indicators (KB Relation and Miwa & Bansal) is insufficient in a zero-shot setting; they must be interpreted as natural-language expressions to allow for some generalization. The difference between using a single question template (Single Template) and the relation's name (NL Relation) appears to be minor. However, training on a variety of question templates (Multiple Templates) substantially increases performance. We conjecture that multiple phrasings of the same relation allows our model to learn answer-type paraphrases that occur across many relations (see Section 7). There is also some advantage to having multiple questions at test time (Question Ensemble).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 8, |
|
"end": 15, |
|
"text": "Table 3", |
|
"ref_id": "TABREF6" |
|
}, |
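To make the reduction concrete, here is a minimal sketch of schema querification at extraction time: a question template for a relation is instantiated with a concrete entity, and the resulting question is posed to the reading-comprehension model for each sentence of interest. The TEMPLATES dictionary, the reader interface, and the regex-based instantiation are illustrative assumptions, not the authors' released code.

```python
import re

# Illustrative sketch of schema querification at extraction time (not the authors' released code).
# A relation is described by one or more question templates over the variable x; instantiating
# a template with an entity yields the natural-language question fed to the reading model.

TEMPLATES = {
    "occupation": ["What did x do for a living?", "What is x's job?"],
    "educated_at": ["Where did x study?", "Which university did x graduate from?"],
}

def instantiate(template, entity):
    """Replace the standalone template variable x with a concrete entity mention."""
    return re.sub(r"\bx\b", entity, template)

def extract(relation, entity, sentence, reader):
    """Ask every template of `relation` about `entity` in `sentence`.
    reader(question, sentence) stands in for the trained model and is assumed to
    return an answer span, or None when the question is unanswerable."""
    answers = set()
    for template in TEMPLATES[relation]:
        question = instantiate(template, entity)
        span = reader(question, sentence)
        if span is not None:
            answers.add(span)
    return answers
```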
|
{ |
|
"start": 49, |
|
"end": 57, |
|
"text": "Figure 4", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Unseen Relations", |
|
"sec_num": "6.3" |
|
}, |
|
{ |
|
"text": "To understand how our method extracts unseen relations, we analyzed 100 random examples, of which 60 had answers in the sentence and 40 did not (negative examples).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For negative examples, we checked whether a distractor -an incorrect answer of the correct answer type -appears in the sentence. For example, the question \"Who is John McCain married to?\" does not have an answer in \"John McCain chose Sarah Palin as his running mate\", but \"Sarah Palin\" is of the correct answer type. We noticed that 14 negative examples (35%) contain distractors. When pairing these examples with the results from the unseen relations experiment in Section 6.3, we found that our method answered 2/14 of the distractor examples incorrectly, compared to only 1/26 of the easier examples. It appears that while most of the negative examples are easy, a significant portion of them are not trivial.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For positive examples, we observed that some instances can be solved by matching the relation in the sentence to that in the question, while others rely more on the answer's type. Moreover, we notice that each cue can be further categorized according to the type of information needed to detect it: (1) when part of the question appears verba- tim in the text, (2) when the phrasing in the text deviates from the question in a way that is typical of other relations as well (e.g. syntactic variability), (3) when the phrasing in the text deviates from the question in a way that is unique to this relation (e.g. lexical variability). We name these categories verbatim, global, and specific, respectively. Figure 5 illustrates all the different types of cues we discuss in our analysis. We selected the most important cue for solving each instance. If there were two important cues, each one was counted as half. Table 4 shows their distribution. Type cues appear to be somewhat more dominant than relation cues (58% vs. 42%). Half of the cues are relation-specific, whereas global cues account for one third of the cases and verbatim cues for one sixth. This is an encouraging result, because we can potentially learn to accurately recognize verbatim and global cues from other relations. However, our method was only able to exploit these cues partially.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 705, |
|
"end": 713, |
|
"text": "Figure 5", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 912, |
|
"end": 919, |
|
"text": "Table 4", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We paired these examples with the results from the unseen relations experiment in Section 6.3 to see how well our method performs in each category. Table 5 shows the results for the Multiple Templates setting. On one hand, the model appears agnostic to whether the relation cue is verbatim, global, or specific, and is able to correctly answer these instances with similar accuracy (there is no clear trend due to the small sample size). For examples that rely on typing information, the trend is much clearer; our model is much better at detecting global type cues than specific ones.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 148, |
|
"end": 155, |
|
"text": "Table 5", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Based on these observations, we think that the primary sources of our model's ability to generalize to new relations are: global type detection, which is acquired from training on many different relations, and relation paraphrase detection (of all types), which probably relies on its pre-trained word embeddings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We showed that relation extraction can be reduced to a reading comprehension problem, allowing us to generalize to unseen relations that are defined on-the-fly in natural language. However, the problem of zero-shot relation extraction is far from solved, and poses an interesting challenge to both the information extraction and machine reading communities. As research into machine reading progresses, we may find that more tasks can benefit from a similar approach. To support future work in this avenue, we make our code and data publicly available. 7", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "This can be implemented efficiently by constraining potential entities with existing facts in the knowledge base.For example, any entity x that satisfies occupation(x, cryptographer) or any entity y for which subclass of (y, cipher) holds. We leave the exact implementation details of such a system for future work.2 While we use questions, one can also use sentences with slots (clozes) to capture an almost identical notion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used this relatively lenient measure because many annotators selected the correct answer, but with a slightly incorrect span; e.g. \"American businessman\" instead of \"businessman\". We therefore used token-overlap F1 as a secondary filter, requiring an average score of at least 0.75. 4 58 relations had zero questions after verification due to noisy distant supervision and little annotator quality control.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "While our problem definition allows for multiple answer spans per question, our algorithm assumes a single span; in practice, less than 5% of our data has multiple answers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We ignore word order, case, punctuation, and articles (\"a\", \"an\", \"the\"). We also ignore \"and\", which often appears when a single span captures multiple correct answers (e.g. \"United States and Canada\").", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The research was supported in part by DARPA under the DEFT program (FA8750-13-2-0019), the ARO (W911NF-16-1-0121), the NSF (IIS-1252835, IIS-1562364), gifts from Google, Tencent, and Nvidia, and an Allen Distinguished Investigator Award. We also thank Mandar Joshi, Victoria Lin, and the UW NLP group for helpful conversations and comments on the work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Open information extraction from the web", |
|
"authors": [ |
|
{ |
|
"first": "Michele", |
|
"middle": [], |
|
"last": "Banko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Cafarella", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Broadhead", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 20th International Joint Conference on Artifical Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2670--2676", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Et- zioni. 2007. Open information extraction from the web. In Proceedings of the 20th Interna- tional Joint Conference on Artifical Intelligence. Morgan Kaufmann Publishers Inc., San Fran- cisco, CA, USA, IJCAI'07, pages 2670-2676.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Large-scale simple question answering with memory networks", |
|
"authors": [ |
|
{ |
|
"first": "Antoine", |
|
"middle": [], |
|
"last": "Bordes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicolas", |
|
"middle": [], |
|
"last": "Usunier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sumit", |
|
"middle": [], |
|
"last": "Chopra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Weston", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1506.02075" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075 .", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Seed-based event trigger labeling: How far can event descriptions get us?", |
|
"authors": [ |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Bronstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ido", |
|
"middle": [], |
|
"last": "Dagan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qi", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ji", |
|
"middle": [], |
|
"last": "Heng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anette", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "372--376", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ofer Bronstein, Ido Dagan, Qi Li, Heng Ji, and Anette Frank. 2015. Seed-based event trigger labeling: How far can event descriptions get us? In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers). Association for Computational Linguistics, Beijing, China, pages 372-376. http://www.aclweb.org/anthology/P15-", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Lifted rule injection for relation embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Demeester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rockt\u00e4schel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1389--1399", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Demeester, Tim Rockt\u00e4schel, and Sebas- tian Riedel. 2016. Lifted rule injection for re- lation embeddings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computa- tional Linguistics, Austin, Texas, pages 1389-1399. https://aclweb.org/anthology/D16-1146.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Question-answer driven semantic role labeling: Using natural language to annotate natural language", |
|
"authors": [ |
|
{ |
|
"first": "Luheng", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "643--653", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role la- beling: Using natural language to annotate natu- ral language. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Lan- guage Processing. Association for Computational Linguistics, Lisbon, Portugal, pages 643-653. http://aclweb.org/anthology/D15-1076.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Teaching machines to read and comprehend", |
|
"authors": [ |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Moritz Hermann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom\u00e1\u0161", |
|
"middle": [], |
|
"last": "Ko\u010disk\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Grefenstette", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lasse", |
|
"middle": [], |
|
"last": "Espeholt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Will", |
|
"middle": [], |
|
"last": "Kay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mustafa", |
|
"middle": [], |
|
"last": "Suleyman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Phil", |
|
"middle": [], |
|
"last": "Blunsom", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teach- ing machines to read and comprehend. In Ad- vances in Neural Information Processing Systems. http://arxiv.org/abs/1506.03340.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Wikireading: A novel large-scale language understanding task over wikipedia", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Hewlett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Lacoste", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llion", |
|
"middle": [], |
|
"last": "Jones", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Illia", |
|
"middle": [], |
|
"last": "Polosukhin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Fandrianto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jay", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Kelcey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Berthelot", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Conference of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. 2016. Wikireading: A novel large-scale language understanding task over wikipedia. In Proceedings of the Conference of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Knowledgebased weak supervision for information extraction of overlapping relations", |
|
"authors": [ |
|
{ |
|
"first": "Raphael", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Congle", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "541--550", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computa- tional Linguistics: Human Language Technologies- Volume 1. Association for Computational Linguis- tics, pages 541-550.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Learning recurrent span representations for extractive question answering", |
|
"authors": [ |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Kwiatkowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankur", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.01436" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenton Lee, Tom Kwiatkowski, Ankur Parikh, and Di- panjan Das. 2016. Learning recurrent span repre- sentations for extractive question answering. arXiv preprint arXiv:1611.01436 .", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Effective crowd annotation for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Angli", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Soderland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Bragg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiao", |
|
"middle": [], |
|
"last": "Ling", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Weld", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "897--906", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angli Liu, Stephen Soderland, Jonathan Bragg, Christopher H. Lin, Xiao Ling, and Daniel S. Weld. 2016. Effective crowd annotation for relation ex- traction. In Proceedings of the 2016 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics, San Diego, California, pages 897-906. http://www.aclweb.org/anthology/N16-1104.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "End-to-end relation extraction using lstms on sequences and tree structures", |
|
"authors": [ |
|
{ |
|
"first": "Makoto", |
|
"middle": [], |
|
"last": "Miwa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Bansal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1105--1116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Compu- tational Linguistics, Berlin, Germany, pages 1105- 1116. http://www.aclweb.org/anthology/P16-1105.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Learning to answer questions from wikipedia infoboxes", |
|
"authors": [ |
|
{ |
|
"first": "Alvaro", |
|
"middle": [], |
|
"last": "Morales", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Varot", |
|
"middle": [], |
|
"last": "Premtoon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cordelia", |
|
"middle": [], |
|
"last": "Avery", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sue", |
|
"middle": [], |
|
"last": "Felshin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1930--1935", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alvaro Morales, Varot Premtoon, Cordelia Avery, Sue Felshin, and Boris Katz. 2016. Learning to answer questions from wikipedia infoboxes. In Proceed- ings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, pages 1930-1935. https://aclweb.org/anthology/D16-", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computa- tional Linguistics, Doha, Qatar, pages 1532-1543. http://www.aclweb.org/anthology/D14-1162.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Squad: 100,000+ questions for machine comprehension of text", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Rajpurkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Lopyrev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Conference of the Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. 2016. Squad: 100,000+ questions for machine comprehen- sion of text. In Proceedings of the Conference of the Empirical Methods in Natural Language Process- ing.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Relation extraction with matrix factorization and universal schemas", |
|
"authors": [ |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Limin", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Marlin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "74--84", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation ex- traction with matrix factorization and universal schemas. In Proceedings of the 2013 Confer- ence of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies. Association for Computa- tional Linguistics, Atlanta, Georgia, pages 74-84. http://www.aclweb.org/anthology/N13-1008.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Injecting logical background knowledge into embeddings for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rockt\u00e4schel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Riedel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1119--1129", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tim Rockt\u00e4schel, Sameer Singh, and Sebastian Riedel. 2015. Injecting logical background knowledge into embeddings for relation extrac- tion. In Proceedings of the 2015 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies. Association for Computational Linguistics, Denver, Colorado, pages 1119-1129. http://www.aclweb.org/anthology/N15-1118.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Bidirectional attention flow for machine comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Minjoon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aniruddha", |
|
"middle": [], |
|
"last": "Kembhavi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hannaneh", |
|
"middle": [], |
|
"last": "Hajishirzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.01603" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 .", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Representing text for joint embedding of text and knowledge bases", |
|
"authors": [ |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hoifung", |
|
"middle": [], |
|
"last": "Poon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pallavi", |
|
"middle": [], |
|
"last": "Choudhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Gamon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1499--1509", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoi- fung Poon, Pallavi Choudhury, and Michael Ga- mon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing. Association for Compu- tational Linguistics, Lisbon, Portugal, pages 1499- 1509. http://aclweb.org/anthology/D15-1174.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Generalizing to unseen entities and entity pairs with row-less universal schema", |
|
"authors": [ |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Verga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arvind", |
|
"middle": [], |
|
"last": "Neelakantan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mc-Callum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "613--622", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Patrick Verga, Arvind Neelakantan, and Andrew Mc- Callum. 2017. Generalizing to unseen entities and entity pairs with row-less universal schema. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers. Association for Computational Linguistics, Valencia, Spain, pages 613-622. http://www.aclweb.org/anthology/E17- 1058.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Wikidata: A new platform for collaborative data collection", |
|
"authors": [ |
|
{ |
|
"first": "Denny", |
|
"middle": [], |
|
"last": "Vrande\u010di\u0107", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 21st international conference companion on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1063--1064", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Denny Vrande\u010di\u0107. 2012. Wikidata: A new platform for collaborative data collection. In Proceedings of the 21st international conference companion on World Wide Web. ACM, pages 1063-1064.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Multi-perspective context matching for machine comprehension", |
|
"authors": [ |
|
{ |
|
"first": "Zhiguo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haitao", |
|
"middle": [], |
|
"last": "Mi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wael", |
|
"middle": [], |
|
"last": "Hamza", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Radu", |
|
"middle": [], |
|
"last": "Florian", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1612.04211" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context match- ing for machine comprehension. arXiv preprint arXiv:1612.04211 .", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Dynamic coattention networks for question answering", |
|
"authors": [ |
|
{ |
|
"first": "Caiming", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1611.01604" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604 .", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "An example of the annotator's input when querifying the country(x, ?) relation. The annotator is required to ask a question about x whose answer is, for each sentence, the underlined spans.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "Precision/Recall for unseen relations.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"text": "The different types of discriminating cues we observed among positive examples.", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"text": "What is Albert Einstein's alma mater? Albert Einstein was awarded a PhD by the University of Z\u00fcrich, with his dissertation titled...occupationWhat did Steve Jobs do for a living? Steve Jobs was an American businessman, inventor, and industrial designer.spouse Who is Angela Merkel married to? Angela Merkel's second and current husband is quantum chemist and professor Joachim Sauer, who has largely...", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Relation</td><td>Question</td><td>Sentence & Answers</td></tr><tr><td>educated at</td><td/><td/></tr></table>" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"text": "Performance on unseen entities.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>Seen</td><td>86.73%</td><td colspan=\"2\">86.54% 86.63%</td></tr><tr><td>Unseen</td><td>84.37%</td><td colspan=\"2\">81.88% 83.10%</td></tr></table>" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "Performance on seen/unseen questions.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"text": "Performance on unseen relations.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"text": "VerbatimRelationAndr\u00e1sDombai plays for what team? Andr\u00e1s Dombai... ...currently plays as a goalkeeper for FC Tatab\u00e1nya. Type Which airport is most closely associated with Royal Jordanian? Royal Jordanian Airlines... ...from its main base at Queen Alia International Airport... Relation Who was responsible for directing Les petites fugues? Les petites fugues is a 1979 Swiss comedy film directed by Yves Yersin. Type When was The Snow Hawk released? The Snow Hawk is a 1925 film... The F\u00fcrstenberg China Factory was founded... ...by Johann Georg von Langen... Type What voice type does\u00c9tienne Lainez have? Etienne Lainez... ...was a French operatic tenor...", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Global</td><td/></tr><tr><td>Relation</td><td>Who started F\u00fcrstenberg China?</td></tr><tr><td>Specific</td><td/></tr></table>" |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"text": "The distribution of cues by type, based on a sample of 60.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td colspan=\"2\">Relation Type</td></tr><tr><td>Verbatim</td><td>43%</td><td>33%</td></tr><tr><td>Global</td><td>60%</td><td>73%</td></tr><tr><td>Specific</td><td>46%</td><td>18%</td></tr></table>" |
|
}, |
|
"TABREF10": { |
|
"type_str": "table", |
|
"text": "Our method's accuracy on subsets of examples pertaining to different cue types. Results in italics are based on a sample of less than 10.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |