|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:52:36.321663Z" |
|
}, |
|
"title": "Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder", |
|
"authors": [ |
|
{ |
|
"first": "Zheng", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Arizona", |
|
"location": { |
|
"settlement": "Tucson", |
|
"region": "Arizona", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Arizona", |
|
"location": { |
|
"settlement": "Tucson", |
|
"region": "Arizona", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We introduce a method that transforms a rulebased relation extraction (RE) classifier into a neural one such that both interpretability and performance are achieved. Our approach jointly trains a RE classifier with a decoder that generates explanations for these extractions, using as sole supervision a set of rules that match these relations. Our evaluation on the TACRED dataset shows that our neural RE classifier outperforms the rule-based one we started from by 9 F1 points; our decoder generates explanations with a high BLEU score of over 90%; and, the joint learning improves the performance of both the classifier and decoder.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We introduce a method that transforms a rulebased relation extraction (RE) classifier into a neural one such that both interpretability and performance are achieved. Our approach jointly trains a RE classifier with a decoder that generates explanations for these extractions, using as sole supervision a set of rules that match these relations. Our evaluation on the TACRED dataset shows that our neural RE classifier outperforms the rule-based one we started from by 9 F1 points; our decoder generates explanations with a high BLEU score of over 90%; and, the joint learning improves the performance of both the classifier and decoder.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Information extraction (IE) is one of the key challenges in the natural language processing (NLP) field. With the explosion of unstructured information on the Internet, the demand for high-quality tools that convert free text to structured information continues to grow (Chang et al., 2010; Lee et al., 2013; Valenzuela-Escarcega et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 270, |
|
"end": 290, |
|
"text": "(Chang et al., 2010;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 308, |
|
"text": "Lee et al., 2013;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 343, |
|
"text": "Valenzuela-Escarcega et al., 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The past decades have seen a steady transition from rule-based IE systems (Appelt et al., 1993) to methods that rely on machine learning (ML) (see Related Work). While this transition has generally yielded considerable performance improvements, it was not without a cost. For example, in contrast to modern deep learning methods, the predictions of rule-based approaches are easily explainable, as a small number of rules tends to apply to each extraction. Further, in many situations, rule-based methods can be developed by domain experts with minimal training data. For these reasons, rule-based IE methods remain widely used in industry (Chiticariu et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 95, |
|
"text": "(Appelt et al., 1993)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 640, |
|
"end": 665, |
|
"text": "(Chiticariu et al., 2013)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work we demonstrate that this transition from rule-to ML-based IE can be performed such that the benefits of both worlds are preserved. In particular, we start with a rule-based relation ex-traction (RE) system (Angeli et al., 2015) and bootstrap a neural RE approach that is trained jointly with a decoder that learns to generate the rules that best explain each particular extraction. The contributions of our idea are the following:", |
|
"cite_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 240, |
|
"text": "(Angeli et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(1) We introduce a strategy that jointly learns a RE classifier between pairs of entity mentions with a decoder that generates explanations for these extractions in the form of Tokensregex (Chang and or Semregex (Chambers et al., 2007) patterns. The only supervision for our method is a set of input rules (or patterns) in these two frameworks (Angeli et al., 2015) , which we use to generate positive examples for both the classifier and the decoder. We generate negative examples automatically from the sentences that contain positives examples.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 235, |
|
"text": "(Chambers et al., 2007)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 365, |
|
"text": "(Angeli et al., 2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(2) We evaluate our approach on the TACRED dataset (Zhang et al., 2017) and demonstrate that: (a) our neural RE classifier outperforms considerably the rule-based one we started from; (b) our decoder generates explanations with high accuracy, i.e., a BLEU overlap score between the generated rules and the gold, hand-written rules of over 90%; and, (c) joint learning improves the performance of both the classifier and decoder.", |
|
"cite_spans": [ |
|
{ |
|
"start": 51, |
|
"end": 71, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "(3) We demonstrate that our approach generalizes to the situation where a vast amount of labeled training data is combined with a few rules. We combined the TACRED training data with the above rules and showed that when our method is trained on this combined data, the classifier obtains near state-of-art performance at 67.0% F1, while the decoder generates accurate explanations with a BLEU score of 92.4%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Relation extraction using statistical methods is well studied. Methods range from supervised, \"traditional\" approaches (Zelenko et al., 2003; Bunescu and Mooney, 2005) to neural methods. Neural approaches for RE range from methods that rely on simpler representations such as CNNs (Zeng et al., 2014) and RNNs (Zhang and Wang, 2015) to more complicated ones such as augmenting RNNs with different components (Xu et al., 2015; Zhou et al., 2016) , combining RNNs and CNNs (Vu et al., 2016; Wang et al., 2016) , and using mechanisms like attention (Zhang et al., 2017) or GCNs (Zhang et al., 2018) . To solve the lack of annotated data, distant supervision (Mintz et al., 2009; Surdeanu et al., 2012) is commonly used to generate a training dataset from an existing knowledge base. Jat et al. (2018) address the inherent noise in distant supervision with an entity attention method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 119, |
|
"end": 141, |
|
"text": "(Zelenko et al., 2003;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 167, |
|
"text": "Bunescu and Mooney, 2005)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 300, |
|
"text": "(Zeng et al., 2014)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 332, |
|
"text": "(Zhang and Wang, 2015)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 408, |
|
"end": 425, |
|
"text": "(Xu et al., 2015;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 444, |
|
"text": "Zhou et al., 2016)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 488, |
|
"text": "(Vu et al., 2016;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 507, |
|
"text": "Wang et al., 2016)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 546, |
|
"end": 566, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 575, |
|
"end": 595, |
|
"text": "(Zhang et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 655, |
|
"end": 675, |
|
"text": "(Mintz et al., 2009;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 676, |
|
"end": 698, |
|
"text": "Surdeanu et al., 2012)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 797, |
|
"text": "Jat et al. (2018)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Rule-based methods in IE have also been extensively investigated. Riloff (1996) developed a system that learns extraction patterns using only a pre-classified corpus of relevant and irrelevant texts. Lin and Pantel (2001) proposed a unsupervised method for discovering inference rules from text based on the Harris distributional similarity hypothesis (Harris, 1954) . Valenzuela-Esc\u00e1rcega et al. (2016) introduced a rule language that covers both surface text and syntactic dependency graphs. Angeli et al. (2015) further show that converting rule-based models to statistical ones can capture some of the benefits of both, i.e., the precision of patterns and the generalizability of statistical models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 79, |
|
"text": "Riloff (1996)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 221, |
|
"text": "Lin and Pantel (2001)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 366, |
|
"text": "(Harris, 1954)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 369, |
|
"end": 403, |
|
"text": "Valenzuela-Esc\u00e1rcega et al. (2016)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 514, |
|
"text": "Angeli et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Interpretability has gained more attention recently in the ML/NLP community. For example, some efforts convert neural models to more interpretable ones such as decision trees (Craven and Shavlik, 1996; Frosst and Hinton, 2017) . Some others focus on producing a post-hoc explanation of individual model outputs (Ribeiro et al., 2016; Hendricks et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 201, |
|
"text": "(Craven and Shavlik, 1996;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 226, |
|
"text": "Frosst and Hinton, 2017)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 333, |
|
"text": "(Ribeiro et al., 2016;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 357, |
|
"text": "Hendricks et al., 2016)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Inspired by these directions, here we propose an approach that combines the interpretability of rule-based methods with the performance and generalizability of neural approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our approach jointly addresses classification and interpretability through an encoder-decoder architecture, where the decoder uses multi-task learning (MTL) for relation extraction between pairs of named entities (Task 1) and rule generation (Task 2). Figure 1 summarizes our approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We define the RE task as follows. The inputs consist of a sentence W = [w 1 , . . . , w n ], and a pair of entities (called \"subject\" and \"object\") corresponding to two spans in this sentence:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "W s = [w s 1 , . . . , w sn ] and W o = [w o 1 , . . . , w on ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The goal is to predict a relation r \u2208 R (from a predefined set of relation types) that holds between the subject and object or \"no relation\" otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For each sentence, we associate each word w i with a representation x x x i that concatenates three embeddings: x x x i = e e e(w i ) \u2022 e e e(n i ) \u2022 e e e(p i ), where e e e(w i ) is the word embedding of token i, e e e(n i ) is the NER embedding of token i, e e e(p i ) is the POS Tag embedding of token i. We feed these representations into a sentence-level bidirectional LSTM encoder (Hochreiter and Schmidhuber, 1997) :", |
|
"cite_spans": [ |
|
{ |
|
"start": 388, |
|
"end": 422, |
|
"text": "(Hochreiter and Schmidhuber, 1997)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "[h h h1, . . . , h h hn] = LSTM([x x x1, . . . , x x xn])", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
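
{

"text": "A minimal PyTorch sketch of this input layer and biLSTM encoder (Eq. 1); the vocabulary sizes and dimensions are illustrative assumptions, not values fixed by this section:\nimport torch\nimport torch.nn as nn\n\n# Hypothetical sizes; the point is the concatenation x_i = e(w_i) . e(n_i) . e(p_i)\nword_emb = nn.Embedding(50000, 300)  # word embeddings (GloVe-initialized in the paper)\nner_emb = nn.Embedding(20, 30)       # NER label embeddings\npos_emb = nn.Embedding(50, 30)       # POS tag embeddings\nencoder = nn.LSTM(input_size=360, hidden_size=200, num_layers=2, batch_first=True, bidirectional=True)\n\nwords = torch.randint(0, 50000, (1, 12))  # one sentence of 12 tokens\nners = torch.randint(0, 20, (1, 12))\npos_tags = torch.randint(0, 50, (1, 12))\nx = torch.cat([word_emb(words), ner_emb(ners), pos_emb(pos_tags)], dim=-1)  # (1, 12, 360)\nh, _ = encoder(x)  # Eq. 1: contextualized token representations, (1, 12, 400)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task 1: Relation Classifier",

"sec_num": "3.1"

},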
|
{ |
|
"text": "Following (Zhang et al., 2018) , we extract the \"K-1 pruned\" dependency tree that covers the two entities, i.e., the shortest dependency path between two entities enhanced with all tokens that are directly attached to the path, and feed it into a GCN (Kipf and Welling, 2016) layer:", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 30, |
|
"text": "(Zhang et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "h h h (l) i = \u03c3( n j=1\u00c3 ijW W W (l) h h h (l\u22121) j /di + b b b (l) )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where A A A is the corresponding adjacency matrix, \u00c3 \u00c3 A = A A A + I I I with I I I being the n \u00d7 n identity matrix,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "d i = n j=1\u00c3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
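
{

"text": "A minimal sketch of this graph convolution (Eq. 2) in PyTorch; the toy adjacency matrix and dimensions are illustrative assumptions:\nimport torch\n\ndef gcn_layer(h, A, W, b):\n    # h: (n, d) token representations; A: (n, n) adjacency of the pruned dependency tree\n    A_tilde = A + torch.eye(A.size(0))       # self-loops: A~ = A + I\n    d = A_tilde.sum(dim=1, keepdim=True)     # d_i = sum_j A~_ij\n    return torch.relu((A_tilde @ h @ W) / d + b)  # Eq. 2 with sigma = ReLU\n\nn = 8\nh = torch.randn(n, 16)\nA = torch.zeros(n, n)\nA[0, 1] = A[1, 0] = 1.0                      # a toy dependency edge\nW = torch.randn(16, 16)\nb = torch.zeros(16)\nh_next = gcn_layer(h, A, W, b)               # (n, 16)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task 1: Relation Classifier",

"sec_num": "3.1"

},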
|
|
{ |
|
"text": "Lastly, we concatenate the sentence representation, the subject entity representation, and the object entity representation as follows: Figure 1 : Neural architecture of the proposed multitask learning approach. The input is a sequence of words together with NER labels and POS tags. The pair of entities to be classified (\"subject\" in blue and \"object\" in orange) are also provided. We use a concatenation of several representations, including embeddings of words, NER labels, and POS tags. The encoder uses a sentence-level bidirectional LSTM (biLSTM) and graph convolutional networks (GCN). There are pooling layers for the subject, object, and full sentence GCN outputs. The concatenated pooling outputs are fed to the classifier's feedforward layer. The decoder is an LSTM with an attention mechanism.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 144, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 1: Relation Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The rule decoder's goal is to generate the pattern P that extracted the corresponding data point, where P is represented as a sequence of tokens in the corresponding pattern language:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "P = [p 1 , . . . , p n ].", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For example, the pattern (([{kbpentity:true}]+)/was/ /born/ /on/([{slotvalue:true}]+)) (where kbpentity:true marks subject tokens, and slotvalue:true marks object tokens) extracts mentions of the per:date_of_birth relation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
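
{

"text": "A minimal sketch of how such a pattern can be serialized into the token sequence P = [p_1, ..., p_n] that the decoder is trained to generate; this coarse tokenization is an illustrative assumption, not the paper's exact scheme:\nimport re\n\npattern = '(([{kbpentity:true}]+)/was/ /born/ /on/([{slotvalue:true}]+))'\n\n# Coarse pattern-language tokens: /word/ literals, {...} attribute tests, single symbols.\ntokens = re.findall(r'/[^/]*/|\\\\{[^}]*\\\\}|\\\\S', pattern)\nprint(tokens)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task 2: Rule Decoder",

"sec_num": "3.2"

},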
|
{ |
|
"text": "We implemented this decoder using an LSTM with an attention mechanism. To center rule decoding around the subject and object, we first feed the concatenation of subject and object representation from the encoder as the initial state in the decoder. Then, in each timestep t, we generate the attention context vector C C C D t by using the current hidden state of the decoder, h h h D t :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "s s st(j) = h h h E (L) W W W A h h h D t (7) a a at = softmax(s s st) (8) C C C D t = j a a at(j)h h h E j", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "where W W W A is a learned matrix, and h h h E (L) are hidden representations from the encoder's GCN.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
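
{

"text": "A minimal sketch of this attention step (Eqs. 7-9); dimensions are illustrative:\nimport torch\n\nn, d = 10, 16\nW_A = torch.randn(d, d)          # learned matrix W^A\nh_enc = torch.randn(n, d)        # h^{E(L)}: encoder GCN outputs, one row per token\nh_dec_t = torch.randn(d)         # h^D_t: decoder hidden state at timestep t\n\ns_t = h_enc @ W_A @ h_dec_t      # Eq. 7: one score per encoder position, (n,)\na_t = torch.softmax(s_t, dim=0)  # Eq. 8: attention weights\nC_t = a_t @ h_enc                # Eq. 9: context vector, (d,)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task 2: Rule Decoder",

"sec_num": "3.2"

},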
|
{ |
|
"text": "We feed this C C C D t vector to a single feed forward layer that is coupled with a softmax function and Table 1 : Results on the TACRED test partition, including ablation experiments (the \"w/o\" rows). We experimented with two configurations: Rule-only data uses only training examples generated by rules; Rules + TA-CRED training data applies the previous rules to the training dataset from TACRED.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 112, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "use its output to obtain a probability distribution over the pattern vocabulary. We use cross entropy to calculate the losses for both the classifier and decoder. To balance the loss between classifier and decoder, we normalize the decoder loss by the pattern length. Note that for the data points without an existing rule, we only calculate the classifier loss. Formally, the joint loss function is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "loss = lossc + loss d /length(P )", |
|
"eq_num": "(10)" |
|
} |
|
], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
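
{

"text": "A minimal sketch of this joint loss (Eq. 10); the tensor shapes and the has_rule flag are illustrative assumptions:\nimport torch\nimport torch.nn.functional as F\n\ndef joint_loss(rel_logits, rel_gold, rule_logits, rule_gold, has_rule):\n    # rel_logits: (1, num_relations); rule_logits: (pattern_len, vocab); rule_gold: (pattern_len,)\n    loss_c = F.cross_entropy(rel_logits, rel_gold)\n    if not has_rule:                     # no rule for this data point: classifier loss only\n        return loss_c\n    loss_d = F.cross_entropy(rule_logits, rule_gold, reduction='sum')\n    return loss_c + loss_d / rule_gold.numel()  # Eq. 10: decoder loss normalized by length(P)\n\nloss = joint_loss(torch.randn(1, 42), torch.tensor([3]), torch.randn(7, 100), torch.randint(0, 100, (7,)), True)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task 2: Rule Decoder",

"sec_num": "3.2"

},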
|
{ |
|
"text": "4 Experiments Data Preparation: We report results on the TA-CRED dataset (Zhang et al., 2017) . We bootstrap Table 2 : Examples of mistakes in the decoded rules. We highlight in the hand-written rules the tokens that were missed during decoding (false negatives) in green, and in the decoded rules we highlight the spurious tokens (false positives) in red. our models from the patterns in the rule-based system of Angeli et al. (2015) , which uses 4,528 surface patterns (in the Tokensregex language) and 169 patterns over syntactic dependencies (using Semgrex). We experimented with two configurations: rule-only data and rules + TACRED training data. In the former setting, we use solely positive training examples generated by the above rules. We combine these positive examples with negative ones generated automatically by assigning 'no_relation' to all other entity mention pairs in the same sentence where there is a positive example. 1 We generated 3,850 positive and 12,311 negative examples for this configuration. In the latter configuration, we apply the same rules to the entire TACRED training dataset. 2", |
|
"cite_spans": [ |
|
{ |
|
"start": 73, |
|
"end": 93, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 414, |
|
"end": 434, |
|
"text": "Angeli et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 942, |
|
"end": 943, |
|
"text": "1", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 116, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
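
{

"text": "A minimal sketch of this negative-example generation; the INVERSE table is partial and the data structures are illustrative assumptions:\nfrom itertools import permutations\n\nINVERSE = {'per:children': 'per:parents', 'per:parents': 'per:children'}\n\ndef make_negatives(mentions, positives):\n    # mentions: entity mentions in one sentence; positives: {(subj, obj): relation} matched by rules\n    negatives = []\n    for subj, obj in permutations(mentions, 2):\n        if (subj, obj) in positives:\n            continue\n        rel = positives.get((obj, subj))\n        if rel is not None and rel in INVERSE:  # skip inverses of existing positives\n            continue\n        negatives.append((subj, 'no_relation', obj))\n    return negatives\n\nprint(make_negatives(['Obama', 'Malia', 'Hawaii'], {('Obama', 'Malia'): 'per:children'}))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task 2: Rule Decoder",

"sec_num": "3.2"

},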
|
{ |
|
"text": "Baselines: We compare our approach with two baselines: the rule-based system of Zhang et al. (2017) , and the best non-combination method of Zhang et al. (2018) . The latter method uses an LSTM and GCN combination similar to our encoder. 3", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 99, |
|
"text": "Zhang et al. (2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 141, |
|
"end": 160, |
|
"text": "Zhang et al. (2018)", |
|
"ref_id": "BIBREF32" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Implementation Details: We use pre-trained GloVe vectors (Pennington et al., 2014) to initialize 1 During the generation of these negative examples we filtered out pairs corresponding to inverse and symmetric relations. For example, if a sentence contains a relation (Subj, Rel, Obj), we do not generate the negative (Obj, no_relation, Subj) if Rel has an inverse relation, e.g., per:children is the inverse of per:parents.", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 82, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "2 Thus, some training examples in this case will be associated with a rule and some will not. We adjusted the loss function to use only the classification loss when no rule applies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "3 For a fair comparison, we do not compare against ensemble methods, or transformer-based ones. Also, note that this baseline does not use rules at all. our word embeddings. We use the Adagrad optimizer (Duchi et al., 2011) . We apply entity masking to subject and object entities in the sentence, which is replacing the original token with a special <NER>-SUBJ or <NER>-OBJ token where <NER> is the corresponding name entity label provided by TACRED.", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 223, |
|
"text": "(Duchi et al., 2011)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
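
{

"text": "A minimal sketch of this entity masking, with a toy sentence:\ndef mask_entities(tokens, subj_span, obj_span, subj_ner, obj_ner):\n    # Replace subject/object tokens with <NER>-SUBJ / <NER>-OBJ placeholders.\n    masked = list(tokens)\n    for i in range(*subj_span):\n        masked[i] = subj_ner + '-SUBJ'\n    for i in range(*obj_span):\n        masked[i] = obj_ner + '-OBJ'\n    return masked\n\ntokens = ['Barack', 'Obama', 'was', 'born', 'in', 'Hawaii', '.']\nprint(mask_entities(tokens, (0, 2), (5, 6), 'PERSON', 'LOCATION'))\n# ['PERSON-SUBJ', 'PERSON-SUBJ', 'was', 'born', 'in', 'LOCATION-OBJ', '.']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task 2: Rule Decoder",

"sec_num": "3.2"

},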
|
{ |
|
"text": "We used micro precision, recall, and F1 scores to evaluate the RE classifier. We used the BLEU score to measure the quality of generated rules, i.e., how close they are to the corresponding gold rules that extracted the same output. We used the BLEU implementation in NLTK (Loper and Bird, 2002) , which allows us to calculate multi-reference BLEU scores over 1 to 4 grams. 4 We report BLEU scores only over the non 'no_relation' extractions with the corresponding testing data points that are matched by one of the rules in (Zhang et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 273, |
|
"end": 295, |
|
"text": "(Loper and Bird, 2002)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 374, |
|
"end": 375, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 525, |
|
"end": 545, |
|
"text": "(Zhang et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
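
{

"text": "A minimal sketch of this multi-reference BLEU computation with NLTK; the token sequences below are illustrative, not actual rules from the system:\nfrom nltk.translate.bleu_score import sentence_bleu\n\n# Gold rules matching the same extraction (references) and the decoded rule (hypothesis),\n# each represented as a sequence of pattern-language tokens.\nreferences = [['(', '[', '{kbpentity:true}', ']', '+', ')', '/was/', '/born/']]\nhypothesis = ['(', '[', '{kbpentity:true}', ']', '+', ')', '/was/', '/born/']\n\n# Equal weights over 1- to 4-grams.\nscore = sentence_bleu(references, hypothesis, weights=(0.25, 0.25, 0.25, 0.25))\nprint(score)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Task 2: Rule Decoder",

"sec_num": "3.2"

},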
|
{ |
|
"text": "Results and Discussion: Table 1 reports the overall performance of our approach, the baselines, and ablation settings, for the two configurations investigated. We draw the following observations from these results:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 31, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(1) The rule-based method of Zhang et al. (2017) has high precision but suffers from low recall. In contrast, our approach that is bootstrapped from the same information has 13% higher recall and almost 9% higher F1 (absolute). Further, our approach decodes explanatory rules with a high BLEU score of 90%, which indicates that it maintains almost the entire explanatory power of the rule-based method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 48, |
|
"text": "Zhang et al. (2017)", |
|
"ref_id": "BIBREF33" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(2) The ablation experiments indicate that joint training for classification and explainability helps both tasks, in both configurations. This indicates that performance and explainability are interconnected.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "(3) The two configurations analyzed in the table demonstrate that our approach performs well not only when trained solely on rules, but also when rules are combined with a training dataset annotated for RE. This suggests that our direction may be a general strategy to infuse some explainability in a statistical method, when rules are available during training. (4) Table 3 lists the learning curve for our approach in the rule-only data configuration when the amount of rules available varies. 5 This table shows that our approach obtains a higher F1 than the complete rule-based RE classifier even when using only 40% of the rules. 6 (5) Note that the BLEU score provides an incomplete evaluation of rule quality. To understand if the decoded rules explain their corresponding data point, we performed a manual evaluation on 176 decoded rules. We classified them into three categories: (a) the rules correctly explain the prediction (according to the human annotator), (b) they approximately explain the prediction, and (c) they do not explain the prediction. Class (b) contains rules that do not lexically match the input text, but capture the correct semantics, as shown in Table 2. The percentages we measured were: (a) 33.5%, (b) 31.3%, (c) 26.1%. 9% of these rules were skipped in the evaluation because they were false negatives( which are labeled as no relation falsely by our model). These numbers support our hypothesis that, in general, the decoded rules do explain the classifier's prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 496, |
|
"end": 497, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 374, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Further, out of 750 data points associated with rules in the evaluation data, our method incorrectly classifies only 26. Out of these 26, 16 were false negatives, and had no rules decoded. In the other 10 predictions, 7 rules fell in class (b) (see the examples in Table 2 ). The other 3 were incorrect due to ambiguity, i.e., the pattern created is an ambiguous succession of POS tags or syntactic dependencies without any lexicalization. This suggests that, even when our classifier is incorrect, the rules decoded tend to capture the underlying semantics.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 272, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Task 2: Rule Decoder", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We introduced a strategy that jointly bootstraps a relation extraction classifier with a decoder that generates explanations for these extractions, using as sole supervision a set of example patterns that match such relations. Our experiments on the TACRED dataset demonstrated that our approach outperforms the strong rule-based method that provided the training patterns by 9 F1 points, while decoding explanations at over 90% BLEU score. Further, we showed that the joint training of the classification and explanation components performs better than training them separately. All in all, our work suggests that it is possible to marry the interpretability of rule-based methods with the performance of neural approaches.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We use the dependency parse trees, POS and NER sequences as included in the original release of the TACRED dataset, which was generated with Stanford CoreNLP . We use the pretrained 300-dimensional GloVe vectors (Pennington et al., 2014) to initialize word embeddings. We use a 2 layers of bi-LSTM, 2 layers of GCN, and 2 layers of feedforward in our encoder. And 2 layers of LSTM and 1 layer of feedforward in our decoder. Table 4 shows the details of the proposed neural network. We apply the ReLU function for all nonlinearities in the GCN layers and the standard max pooling operations in all pooling layers. For regularization we use dropout with p = 0.5 to all encoder LSTM layers and all but the last GCN layers.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 237, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 424, |
|
"end": 431, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A Experimental Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For training, we use Adagrad (Duchi et al., 2011) an initial learning rate, and from epoch 1 we start to anneal the learning rate by a factor of 0.9 every time the F1 score on the development set does not increase after one epoch. We tuned the initial learning rate between 0.01 and 1; we chose 0.3 as this obtained the best performance on development.", |
|
"cite_spans": [ |
|
{ |
|
"start": 29, |
|
"end": 49, |
|
"text": "(Duchi et al., 2011)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Experimental Details", |
|
"sec_num": null |
|
}, |
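
{

"text": "A minimal sketch of this optimizer and annealing schedule; dev_f1 is a hypothetical helper that evaluates the model on the development set:\nimport torch\n\nmodel = torch.nn.Linear(10, 5)  # stand-in for the full network\noptimizer = torch.optim.Adagrad(model.parameters(), lr=0.3)\n\nbest_f1 = 0.0\nfor epoch in range(100):\n    # ... train one epoch here ...\n    f1 = dev_f1(model)  # hypothetical evaluation helper\n    if f1 <= best_f1:   # no improvement: anneal the learning rate by 0.9\n        for group in optimizer.param_groups:\n            group['lr'] *= 0.9\n    best_f1 = max(best_f1, f1)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A Experimental Details",

"sec_num": null

},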
|
{ |
|
"text": "We trained 100 epochs for all the experiments with a batch size of 50. There were 3,850 positive data points and 12,311 negative data in the rule-only data. For this dataset, it took 1 minute to finish one epoch in average. And for Rules + TACRED training data, it took 4 minutes to finish one epoch in average 7 . All the hyperparameters above were tuned manually. We trained our model on PyTorch 3.8.5 with CUDA version 10.0, using one NVDIA Titan RTX.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Experimental Details", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "You can find the details of TACRED data in this link: https://nlp.stanford.edu/ projects/tacred/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Dataset Introduction", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The rule-base system we use is the combination of Stanford's Tokensregex (Chang and and Semregex (Chambers et al., 2007) . The rules we use are from the system of Angeli et al. (2015) , which contains 4528 Tokensregex patterns and 169 Semgrex patterns.", |
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 120, |
|
"text": "(Chambers et al., 2007)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 183, |
|
"text": "Angeli et al. (2015)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Rules", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We extracted the rules from CoreNLP and mapped each rule to the TACRED dataset. We provided the mapping files in our released dataset. We also generate the dataset with only datapoints matched by rules in TACRED training partition and its mapping file.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C Rules", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "h h hsent = f (h h h(L) ) = f (GCN(h h h (0 )) (3) h h hs = f (h h h (L) s 1 :sn ) (4) h h ho = f (h h h (L) o 1 :on ) (5) h h h f inal = h h hsent \u2022 h h hs \u2022 h h ho (6)where h h h(l) denotes the collective hidden representations at layer l of the GCN, and f : R d\u00d7n \u2192 R d is a max pooling function that maps from n output vectors to the representation vector. The concatenated representation h h h f inal is fed to a feedforward layer with a softmax function to produce a probability distribution over relation types.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
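
{

"text": "A minimal sketch of these pooling and concatenation steps (Eqs. 3-6), with illustrative spans and dimensions:\nimport torch\n\nh = torch.randn(12, 64)                  # h^{(L)}: GCN outputs for a 12-token sentence\nsubj, obj = slice(0, 2), slice(5, 6)     # subject and object token spans\n\nh_sent = h.max(dim=0).values             # Eq. 3: max pooling over the whole sentence\nh_s = h[subj].max(dim=0).values          # Eq. 4: max pooling over subject tokens\nh_o = h[obj].max(dim=0).values           # Eq. 5: max pooling over object tokens\nh_final = torch.cat([h_sent, h_s, h_o])  # Eq. 6: fed to the feedforward + softmax layer",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "",

"sec_num": null

},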
|
{ |
|
"text": "We scored longer n-grams to better capture rule syntax.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For this experiment we sorted the rules in descending order of their match frequency in training, and kept the top n% in each setting.6 The high BLEU score in the 20% configuration is due to the small sample in development for which gold rules exist.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The software is available at this URL: https://github.com/clulab/releases/tree/master/naacl-trustnlp2021-edin.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Bootstrapped self training for knowledge base population. Theory and Applications of Categories", |
|
"authors": [ |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Chaganty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bolton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Melvin Jose Johnson", |
|
"middle": [], |
|
"last": "Premkumar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Panupong", |
|
"middle": [], |
|
"last": "Pasupat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabor Angeli, Victor Zhong, Danqi Chen, A. Cha- ganty, J. Bolton, Melvin Jose Johnson Premkumar, Panupong Pasupat, S. Gupta, and Christopher D. Manning. 2015. Bootstrapped self training for knowledge base population. Theory and Applica- tions of Categories.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Fastus: A finite-state processor for information extraction from real-world text", |
|
"authors": [ |
|
{ |
|
"first": "Jerry", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Douglas E Appelt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hobbs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bear", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "IJCAI", |
|
"volume": "93", |
|
"issue": "", |
|
"pages": "1172--1178", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Douglas E Appelt, Jerry R Hobbs, John Bear, David Is- rael, and Mabry Tyson. 1993. Fastus: A finite-state processor for information extraction from real-world text. In IJCAI, volume 93, pages 1172-1178.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A shortest path dependency kernel for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Razvan", |
|
"middle": [], |
|
"last": "Bunescu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "724--731", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Razvan Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extrac- tion. In Proceedings of Human Language Technol- ogy Conference and Conference on Empirical Meth- ods in Natural Language Processing, pages 724- 731.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Learning alignments and leveraging natural logic", |
|
"authors": [ |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Cer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trond", |
|
"middle": [], |
|
"last": "Grenager", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chloe", |
|
"middle": [], |
|
"last": "Kiddon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Ramage", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "165--170", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nathanael Chambers, Daniel Cer, Trond Grenager, David Hall, Chloe Kiddon, Bill MacCartney, Marie- Catherine de Marneffe, Daniel Ramage, Eric Yeh, and Christopher D. Manning. 2007. Learning align- ments and leveraging natural logic. In Proceedings of the ACL-PASCAL Workshop on Textual Entail- ment and Paraphrasing, pages 165-170, Prague. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Tokensregex: Defining cascaded regular expressions over tokens", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Angel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angel X Chang and Christopher D Manning. 2014. To- kensregex: Defining cascaded regular expressions over tokens. Stanford University Computer Science Technical Reports. CSTR, 2:2014.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Stanford-ubc entity linking at tac-kbp", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Angel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Valentin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Spitkovsky", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eneko", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Agirre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Angel X Chang, Valentin I Spitkovsky, Eric Yeh, Eneko Agirre, and Christopher D Manning. 2010. Stanford-ubc entity linking at tac-kbp.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Rule-based information extraction is dead! long live rule-based information extraction systems! In Proceedings of the 2013 conference on empirical methods in natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Laura", |
|
"middle": [], |
|
"last": "Chiticariu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunyao", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederick", |
|
"middle": [], |
|
"last": "Reiss", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "827--832", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Laura Chiticariu, Yunyao Li, and Frederick Reiss. 2013. Rule-based information extraction is dead! long live rule-based information extraction systems! In Pro- ceedings of the 2013 conference on empirical meth- ods in natural language processing, pages 827-832.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Extracting tree-structured representations of trained networks", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Craven", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jude", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Shavlik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "24--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Craven and Jude W Shavlik. 1996. Extracting tree-structured representations of trained networks. In Advances in neural information processing sys- tems, pages 24-30.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Adaptive subgradient methods for online learning and stochastic optimization", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Duchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Elad", |
|
"middle": [], |
|
"last": "Hazan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoram", |
|
"middle": [], |
|
"last": "Singer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of machine learning research", |
|
"volume": "", |
|
"issue": "7", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(7).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Distilling a neural network into a soft decision tree", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Frosst", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1711.09784" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas Frosst and Geoffrey Hinton. 2017. Distilling a neural network into a soft decision tree. arXiv preprint arXiv:1711.09784.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Distributional structure. Word", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Zellig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Harris", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1954, |
|
"venue": "", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "146--162", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Generating visual explanations", |
|
"authors": [ |
|
{ |
|
"first": "Lisa", |
|
"middle": [ |
|
"Anne" |
|
], |
|
"last": "Hendricks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeynep", |
|
"middle": [], |
|
"last": "Akata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcus", |
|
"middle": [], |
|
"last": "Rohrbach", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Donahue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernt", |
|
"middle": [], |
|
"last": "Schiele", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Darrell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "European Conference on Computer Vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3--19", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. 2016. Generating visual explanations. In European Conference on Computer Vision, pages 3-19. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Long short-term memory", |
|
"authors": [ |
|
{ |
|
"first": "Sepp", |
|
"middle": [], |
|
"last": "Hochreiter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J\u00fcrgen", |
|
"middle": [], |
|
"last": "Schmidhuber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Neural computation", |
|
"volume": "9", |
|
"issue": "8", |
|
"pages": "1735--1780", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Improving distantly supervised relation extraction using word and entity based attention", |
|
"authors": [ |
|
{ |
|
"first": "Sharmistha", |
|
"middle": [], |
|
"last": "Jat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siddhesh", |
|
"middle": [], |
|
"last": "Khandelwal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Partha", |
|
"middle": [], |
|
"last": "Talukdar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1804.06987" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharmistha Jat, Siddhesh Khandelwal, and Partha Talukdar. 2018. Improving distantly supervised rela- tion extraction using word and entity based attention. arXiv preprint arXiv:1804.06987.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semisupervised classification with graph convolutional networks", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Kipf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1609.02907" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas N Kipf and Max Welling. 2016. Semi- supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", |
|
"authors": [ |
|
{ |
|
"first": "Heeyoung", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Angel", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yves", |
|
"middle": [], |
|
"last": "Peirsman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Chambers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Computational Linguistics", |
|
"volume": "39", |
|
"issue": "4", |
|
"pages": "885--916", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/COLI_a_00152" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference reso- lution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885-916. Copyright: Copyright 2020 Elsevier B.V., All rights reserved.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Dirt -discovery of inference rules from text", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Pantel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dekang Lin and P. Pantel. 2001. Dirt -discovery of inference rules from text.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Nltk: The natural language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Compu- tational Linguistics. Philadelphia: Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The Stanford CoreNLP natural language processing toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Finkel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Bethard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Mc-Closky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Association for Computational Linguistics (ACL) System Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "55--60", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Distant supervision for relation extraction without labeled data", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Mintz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bills", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rion", |
|
"middle": [], |
|
"last": "Snow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Jurafsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1003--1011", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher D", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Why should i trust you?: Explaining the predictions of any classifier", |
|
"authors": [ |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Marco Tulio Ribeiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1135--1144", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explain- ing the predictions of any classifier. In Proceed- ings of the 22nd ACM SIGKDD international con- ference on knowledge discovery and data mining, pages 1135-1144. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Automatically generating extraction patterns from untagged text", |
|
"authors": [ |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the national conference on artificial intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1044--1049", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ellen Riloff. 1996. Automatically generating extrac- tion patterns from untagged text. In Proceedings of the national conference on artificial intelligence, pages 1044-1049.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Multi-instance multi-label learning for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julie", |
|
"middle": [], |
|
"last": "Tibshirani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramesh", |
|
"middle": [], |
|
"last": "Nallapati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "455--465", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D. Manning. 2012. Multi-instance multi-label learning for relation extraction. In Pro- ceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning, pages 455- 465, Jeju Island, Korea. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Large-scale automated machine reading discovers new cancer driving mechanisms", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Valenzuela-Escarcega", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ozgun", |
|
"middle": [], |
|
"last": "Babur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gus", |
|
"middle": [], |
|
"last": "Hahn-Powell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dane", |
|
"middle": [], |
|
"last": "Bell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hicks", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Enrique", |
|
"middle": [], |
|
"last": "Noriega-Atala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xia", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Emek", |
|
"middle": [], |
|
"last": "Demir", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clayton", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Morrison", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Database: The Journal of Biological Databases and Curation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/database/bay098" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco A. Valenzuela-Escarcega, Ozgun Babur, Gus Hahn-Powell, Dane Bell, Thomas Hicks, Enrique Noriega-Atala, Xia Wang, Mihai Surdeanu, Emek Demir, and Clayton T. Morrison. 2018. Large-scale automated machine reading discovers new cancer driving mechanisms. Database: The Journal of Bio- logical Databases and Curation.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Odin's runes: A rule language for information extraction", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Valenzuela-Esc\u00e1rcega", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gus", |
|
"middle": [], |
|
"last": "Hahn-Powell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "322--329", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco A. Valenzuela-Esc\u00e1rcega, Gus Hahn-Powell, and Mihai Surdeanu. 2016. Odin's runes: A rule lan- guage for information extraction. In Proceedings of the Tenth International Conference on Language Re- sources and Evaluation (LREC'16), pages 322-329, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Combining recurrent and convolutional neural networks for relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Ngoc", |
|
"middle": [ |
|
"Thang" |
|
], |
|
"last": "Vu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heike", |
|
"middle": [], |
|
"last": "Adel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pankaj", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "534--539", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N16-1065" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ngoc Thang Vu, Heike Adel, Pankaj Gupta, and Hin- rich Sch\u00fctze. 2016. Combining recurrent and con- volutional neural networks for relation classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 534-539, San Diego, California. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Relation classification via multi-level attention CNNs", |
|
"authors": [ |
|
{ |
|
"first": "Linlin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhu", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gerard", |
|
"middle": [], |
|
"last": "De Melo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhiyuan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1298--1307", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-1123" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. 2016. Relation classification via multi-level attention CNNs. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1298- 1307, Berlin, Germany. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Classifying relations via long short term memory networks along shortest dependency paths", |
|
"authors": [ |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lili", |
|
"middle": [], |
|
"last": "Mou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ge", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yunchuan", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hao", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhi", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1785--1794", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D15-1206" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest depen- dency paths. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Process- ing, pages 1785-1794, Lisbon, Portugal. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Kernel methods for relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Zelenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chinatsu", |
|
"middle": [], |
|
"last": "Aone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Richardella", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Journal of machine learning research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "1083--1106", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation ex- traction. Journal of machine learning research, 3(Feb):1083-1106.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Relation classification via convolutional deep neural network", |
|
"authors": [ |
|
{ |
|
"first": "Daojian", |
|
"middle": [], |
|
"last": "Zeng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kang", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siwei", |
|
"middle": [], |
|
"last": "Lai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Guangyou", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2335--2344", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via con- volutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335-2344.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Relation classification via recurrent neural network", |
|
"authors": [ |
|
{ |
|
"first": "Dongxu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1508.01006" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dongxu Zhang and Dong Wang. 2015. Relation classi- fication via recurrent neural network. arXiv preprint arXiv:1508.01006.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Graph convolution over pruned dependency trees improves relation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2205--2215", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1244" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 2205-2215, Brus- sels, Belgium. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Positionaware attention and supervised data improve slot filling", |
|
"authors": [ |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Zhong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "35--45", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor An- geli, and Christopher D. Manning. 2017. Position- aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 35-45.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Attention-based bidirectional long short-term memory networks for relation classification", |
|
"authors": [ |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Tian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhenyu", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bingchen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hongwei", |
|
"middle": [], |
|
"last": "Hao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "207--212", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P16-2034" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers), pages 207-212, Berlin, Germany. Association for Compu- tational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "{kbpentity:true}]+)\"\"\" based \"\"in\"([{slotvalue:true}]+)) (([{kbpentity:true}]+)\"in\"([{slotvalue:true}]+)) (([{kbpentity:true}]+)\" CEO \"([{slotvalue:true}]+)) (([{kbpentity:true}]+)\" president \"([{slotvalue:true}]+))", |
|
"num": null, |
|
"type_str": "figure", |
|
"uris": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"text": "Learning curve of our approach based on amount of rules used, in the rule-only data configuration. These results are on TACRED development.", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "Details of our neural architecture.", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |