{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:59:36.140679Z"
},
"title": "Entity and Evidence Guided Document-Level Relation Extraction",
"authors": [],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Document-level relation extraction is a challenging task, requiring reasoning over multiple sentences to predict a set of relations in a document. In this paper, we propose a novel framework E2GRE (Entity and Evidence Guided Relation Extraction) that jointly extracts relations and the underlying evidence sentences by using large pretrained language model (LM) as input encoder. First, we propose to guide the pretrained LM's attention mechanism to focus on relevant context by using attention probabilities as additional features for evidence prediction. Furthermore, instead of feeding the whole document into pretrained LMs to obtain entity representation, we concatenate document text with head entities to help LMs concentrate on parts of the document that are more related to the head entity. Our E2GRE jointly learns relation extraction and evidence prediction effectively, showing large gains on both these tasks, which we find are highly correlated. Our experimental result on DocRED, a large-scale document-level relation extraction dataset, is competitive with the top of the public leaderboard for relation extraction, and is top ranked on evidence prediction, which shows that our E2GRE is both effective and synergistic on relation extraction and evidence prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Relation Extraction (RE), the problem of predicting relations between pairs of entities from text, has received increasing research attention in recent years [Zhang et al., 2017; Zhao et al., 2019; Guo et al., 2019] . This problem has important downstream applications to numerous tasks, such as automatic knowledge acquisition from web documents for knowledge graph construction [Trisedya et al., 2019] , question answering [Yu et al., 2017] and dialogue systems [Young et al., 2018] . While most Document: [0] The Legend of Zelda : The Minish Cap ( ) is an action -adventure game and the twelfth entry in The Legend of Zelda series.",
"cite_spans": [
{
"start": 158,
"end": 178,
"text": "[Zhang et al., 2017;",
"ref_id": "BIBREF30"
},
{
"start": 179,
"end": 197,
"text": "Zhao et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 198,
"end": 215,
"text": "Guo et al., 2019]",
"ref_id": "BIBREF7"
},
{
"start": 380,
"end": 403,
"text": "[Trisedya et al., 2019]",
"ref_id": "BIBREF18"
},
{
"start": 425,
"end": 442,
"text": "[Yu et al., 2017]",
"ref_id": "BIBREF27"
},
{
"start": 464,
"end": 484,
"text": "[Young et al., 2018]",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[1] Developed by Capcom and Flagship , with Nintendo overseeing the development process , it was released for the Game Boy Advance handheld game console in Japan and Europe in 2004 and in North America and Australia the following year . [2] In June 2014 , it was made available on the Wii U Virtual Console . [3] The Minish Cap is the third Zelda game that involves the legend of the Four Sword , expanding on the story of and .",
"cite_spans": [
{
"start": 237,
"end": 240,
"text": "[2]",
"ref_id": null
},
{
"start": 309,
"end": 312,
"text": "[3]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[4] A magical talking cap named Ezlo can shrink series protagonist Link to the size of the Minish , a bug -sized race that live in Hyrule .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[5] The game retains some common elements from previous Zelda installments , such as the presence of Gorons , while introducing Kinstones and other new gameplay features .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[6] The Minish Cap was generally well received among critics . [7] It was named the 20th best Game Boy Advance game in an IGN feature , and was selected as the 2005 Game Boy Advance Game of the Year by GameSpot . Head Entity: Link Tail Entity: The Legend of Zelda Relation: \"Present in Work\" Evidence Sentences: 0,3,4",
"cite_spans": [
{
"start": 63,
"end": 66,
"text": "[7]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: An example document in the DocRED dataset, where a head and tail entity pair span across multiple sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "previous work focus on relation extraction at the sentence level, in real world applications, e.g predicting relations from web articles, the majority of relations are expressed across multiple sentences. Figure 1 shows an example from the recently released DocRED dataset [Yao et al., 2019] , which requires reasoning over three evidence sentences to predict the relational fact that \"Link\" is present in the work \"The Legend of Zelda\". In this paper, we focus on the more challenging task of documentlevel relation extraction task and design a method to facilitate document-level reasoning.",
"cite_spans": [
{
"start": 273,
"end": 291,
"text": "[Yao et al., 2019]",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Aside from extracting entity relations from a document, it is often useful to also highlight the evidence that a system uses to predict them, so that a human or second system can verify them for consistency. What is more, evidence prediction can potentially supplement RE performance by restricting the model's focus on the correct context. In preliminary experiments, we find that current models are able to achieve around 87% RE F1 on DocRED by only keeping the gold evidence sentences when trained and evaluated only on the gold evidence sentences, which is a significant im- provement on current leaderboard DocRED RE F1 numbers (\u223c 63% RE F1). However, evidence prediction is a challenging task, and most existing relation extraction (RE) approaches ignore the task of evidence prediction entirely.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most recent approaches for relation extraction fine-tune large pretrained Language Models (LMs) (e.g.,BERT [Devlin et al., 2019] , RoBERTa ) as input encoder. However, naively adapting pretrained LMs for document-level RE faces an issue which limits its performance. Due to the length of a given document, many more entities and relations exist in document-level RE than in intra-sentence RE. A pretrained LM has to simultaneously encode information regarding all pairs of entities for relation extraction, making the task more difficult, and limiting the pretrained LM's effectiveness.",
"cite_spans": [
{
"start": 107,
"end": 128,
"text": "[Devlin et al., 2019]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we propose a new framework: Entity and Evidence Guided Relation Extraction (E2GRE), which jointly solves relation extraction and evidence prediction. For evidence prediction, we take a pretrained LM as input encoder and use its internal attention probabilities as additional features to predict evidence sentences. As a result, we use supporting evidence sentences to provide direct supervision on which tokens the LM should attend to during finetuning, which in turn helps improve relation extraction in a joint training framework. To further help LMs focus on a smaller set of relevant word context from a long document, we also introduce entity-guided input sequences as the input to these models, by appending each head entity to the document text, one at a time. This allows the LM encoder to explicitly model relations involving a specific head entity while ignoring all other entity pairs, thus simplifying the task for the LM encoder. The joint training framework helps the model locate the correct semantics that are required for each relation prediction. To the best of our knowledge 1 , we are the first to present an effective joint training framework for relation extraction and evidence prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Each of these ideas gives a significant boost in performance, and by combining them, we are able to achieve highly competitive results on the DocRED leaderboard. We obtain 62.5 relation extraction F1 and 50.5 evidence prediction F1 from our E2GRE trained RoBERTa LARGE model, which is the current state-of-the-art performance on evi-1 Based on published papers on DocRED. dence prediction. Our proposed E2GRE framework is a simple joint training approach that effectively incorporates information from evidence prediction to guide the pretrained LM encoder, boosting performance on both relation extraction and evidence prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are summarized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose to generate multiple new entityguided inputs to a pretrained language model: for every document, we concatenate every entity with the document and feed it as an input sequence to a pretrained LM encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We propose to use internal attention probabilities of the pre-trained LM encoder as additional features for the evidence prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our joint training framework of E2GRE which receives the guidance from entity and evidence, improves the performance on both relation extraction and evidence prediction, showing that the two tasks are mutually beneficial to each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early work attempted to solve RE with statistical methods with different feature engineering [Zelenko et al., 2003; Bunescu and Mooney, 2005] . Later on, neural models have shown better performance at capturing semantic relationships between entities. These methods include CNN-based approaches [Zeng et al.; and LSTM-based approaches [Cai et al., 2016] . On top of using CNNs/LSTM encoders, previous models added additional layers to model semantic interactions. For example, Han et al. [2018] introduced using hierarchical attentions in order to generate relational information from coarse-tofine semantic ideas; Zhang et al. [2017] applied GCNs over pruned dependency trees, and Guo et al. [2019] introduced Attention Guided Graph Convolutional Networks (AG-GCNs) over dependency trees. These models have shown good performance on intra-sentence relation extraction, but are not easily adapted for document-level RE.",
"cite_spans": [
{
"start": 93,
"end": 115,
"text": "[Zelenko et al., 2003;",
"ref_id": "BIBREF28"
},
{
"start": 116,
"end": 141,
"text": "Bunescu and Mooney, 2005]",
"ref_id": "BIBREF1"
},
{
"start": 295,
"end": 308,
"text": "[Zeng et al.;",
"ref_id": "BIBREF29"
},
{
"start": 335,
"end": 353,
"text": "[Cai et al., 2016]",
"ref_id": "BIBREF2"
},
{
"start": 477,
"end": 494,
"text": "Han et al. [2018]",
"ref_id": "BIBREF8"
},
{
"start": 615,
"end": 634,
"text": "Zhang et al. [2017]",
"ref_id": "BIBREF30"
},
{
"start": 682,
"end": 699,
"text": "Guo et al. [2019]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Many approaches for document-level RE are graph-based neural network methods. Quirk and Poon [2017] first introduced a document graph being used for document-level RE; In [Jia et al., 2019] , an entity-centric, multi-scale representation learning on entity/sentence/document-level LSTM model was proposed for document-level n-ary RE task. Christopoulou et al. [2019] recently proposed a novel edge-oriented graph model that deviates from existing graph models. Nan et al. [2020] proposed an induced latent graph and Li et al. [2020] used an explicit heterogeneous graph for DocRED. These graph models generally focus on constructing unique nodes and edges, and have the advantage of connecting and aggregating different granularities of information. Zhou et al. [2021] pointed out multi-entity and multi-label issues for documentlevel RE, and proposed two techniques: adaptive thresholding and localized context pooling, to address these problems.",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "Quirk and Poon [2017]",
"ref_id": "BIBREF15"
},
{
"start": 171,
"end": 189,
"text": "[Jia et al., 2019]",
"ref_id": "BIBREF9"
},
{
"start": 339,
"end": 366,
"text": "Christopoulou et al. [2019]",
"ref_id": "BIBREF3"
},
{
"start": 461,
"end": 478,
"text": "Nan et al. [2020]",
"ref_id": "BIBREF13"
},
{
"start": 516,
"end": 532,
"text": "Li et al. [2020]",
"ref_id": "BIBREF10"
},
{
"start": 750,
"end": 768,
"text": "Zhou et al. [2021]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Pretrained Language Models [Radford et al., 2019; Devlin et al., 2019; are powerful NLP tools trained with enormous amounts of unlabelled data. In order to take advantage of the large amounts of text that these models have seen, finetuning on large pretrained LMs has been shown to be effective on relation extraction [Wadden et al., 2019] . Generally, large pretrained LMs are used to encode a sequence and then generate the representation of a head/tail entity pair to learn a classification [Eberts and Ulges, 2019; Yao et al., 2019] . Baldini Soares et al.",
"cite_spans": [
{
"start": 27,
"end": 49,
"text": "[Radford et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 50,
"end": 70,
"text": "Devlin et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 318,
"end": 339,
"text": "[Wadden et al., 2019]",
"ref_id": "BIBREF20"
},
{
"start": 494,
"end": 518,
"text": "[Eberts and Ulges, 2019;",
"ref_id": "BIBREF6"
},
{
"start": 519,
"end": 536,
"text": "Yao et al., 2019]",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "[2019] introduced a new concept similar to BERT called \"matchingthe-black\" and pretrained a Transformer-like model for relation learning. The models were finetuned on SemEval-2010 Task 8 and TACRED achieved stateof-the-art results. Our framework aims to improve the effectiveness of pretrained LMs for documentlevel relation extraction, with our entity and evidence guided approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we introduce our E2GRE framework. First, we describe how to generate entityguided inputs. Then we present how to jointly train RE with evidence prediction, and finally show how to combine this with our evidence-guided attentions. We use BERT as our pretrained LM when describing our framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The goal of relation extraction is to predict relation label between every head/tail (h/t) pair of given entities in a given document. Most standard models approach this problem by feeding in an entire document and then extracting all of the head/tail pairs Figure 2 : Diagram of our E2GRE framework. As shown in the diagram, we pass an input sequence consisting of an entity and document into BERT. We extract head and tails for relation extraction. We show the learned relation vectors in grey. We extract out sentence representation and BERT attention probabilities for evidence predictions.",
"cite_spans": [],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Entity-Guided Input Sequences",
"sec_num": "3.1"
},
{
"text": "Instead, we design entity-guided inputs to give BERT more guidance towards the entities during training. Each training input is organized by concatenating the tokens of the first mention of a head entity, denoted by H, together with the document tokens D, to form: \"[CLS]\"+ H + \"[SEP]\" + D + \"[SEP]\", which is then fed into BERT. 2 We generate these input sequences for each entity in the given document. Therefore, for a document with N e entities, N e new entity-guided input sequences are generated and fed into BERT separately.",
"cite_spans": [
{
"start": 330,
"end": 331,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Guided Input Sequences",
"sec_num": "3.1"
},
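{
"text": "As a minimal sketch of this input construction (assuming a HuggingFace-style tokenizer; the helper name build_entity_guided_inputs is ours and purely illustrative, not taken from our implementation):\n\nfrom transformers import BertTokenizerFast\n\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\n\ndef build_entity_guided_inputs(head_entity_mentions, document, max_len=512):\n    # One sequence per head entity: \"[CLS]\" + first mention of H + \"[SEP]\" + document + \"[SEP]\".\n    # The tokenizer inserts the special tokens when given a text pair.\n    return [\n        tokenizer(mention, document, truncation='only_second',\n                  max_length=max_len, padding='max_length', return_tensors='pt')\n        for mention in head_entity_mentions\n    ]\n\nHere head_entity_mentions holds the first-mention string of each of the N_e entities in the document, so one document yields N_e separate entity-guided sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Guided Input Sequences",
"sec_num": "3.1"
},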
{
"text": "Our framework predicts N e \u2212 1 different sets of relations for each training input, corresponding to N e \u2212 1 head/tail entity pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Guided Input Sequences",
"sec_num": "3.1"
},
{
"text": "After passing a training input through BERT, we extract the head entity embedding and a set of tail entity embeddings from the BERT output. After obtaining the head entity embedding h \u2208 R d and all tail entity embeddings {t k |t k \u2208 R d } in an entity-guided sequence, where 1 \u2264 k \u2264 N e \u2212 1, we feed them into a bilinear layer with the sigmoid activation function to predict the probability of i-th relation between the head entity h and the k-th tail entity t k , denoted by\u0177 ik , as follow\u015d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Guided Input Sequences",
"sec_num": "3.1"
},
{
"text": "y ik = \u03b4(h T W i t k + b i ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Guided Input Sequences",
"sec_num": "3.1"
},
{
"text": "where \u03b4 is the sigmoid function, W i and b i are the learnable parameters corresponding to i-th relation, where 1 \u2264 i \u2264 N r , and N r is the number of relations. Finally, we finetune BERT with multi-label cross-entropy loss. During inference, we group the N e \u2212 1 predicted relations for each entity-guided input sequence from the same document, to obtain the final set of predictions for a document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Guided Input Sequences",
"sec_num": "3.1"
},
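{
"text": "A small PyTorch sketch of the bilinear classifier in Eq. 1 (a simplified stand-in; class and variable names are illustrative, not from our released code):\n\nimport torch\nimport torch.nn as nn\n\nclass BilinearRelationClassifier(nn.Module):\n    def __init__(self, hidden_dim, num_relations):\n        super().__init__()\n        # One bilinear form (W_i, b_i) per relation, applied to each (head, tail) pair.\n        self.bilinear = nn.Bilinear(hidden_dim, hidden_dim, num_relations)\n\n    def forward(self, head, tails):\n        # head: (d,), tails: (N_e - 1, d) -> relation probabilities of shape (N_e - 1, N_r).\n        head = head.unsqueeze(0).repeat(tails.size(0), 1)\n        return torch.sigmoid(self.bilinear(head, tails))\n\nThe multi-label cross-entropy loss mentioned above can then be computed with nn.BCELoss against a binary (N_e - 1) x N_r label matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Guided Input Sequences",
"sec_num": "3.1"
},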
{
"text": "Evidence sentences are sentences which contain important facts for predicting the correct relationships between head and tail entities. Therefore, evidence prediction is a very important auxiliary task to relation extraction and also provides explainability for the model. We build our evidence prediction upon the baseline introduced by Yao et al.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": "[2019], which we will describe next.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": "Let N s be the number of sentences in the document. We first obtain the sentence embedding s \u2208 R N S \u00d7d by averaging all the embeddings of the words in each sentence (i.e., Sentence Extraction in Fig. 2 ). These word embeddings are derived from the BERT output embeddings.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 202,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": "Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": "r i \u2208 R d be the relation embedding of i-th relation r i (1 \u2264 i \u2264 N r )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": ", which is learnable and initialized randomly in our model. We employ a bilinear layer with sigmoid activation function to predict the probability of the j-th sentence s j being an evidence sentence w.r.t. the given i-th relation r i as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F i jk = s j W r i r i + b r \u00ee y i jk = \u03b4(F i jk W r o + b r o )",
"eq_num": "(2)"
}
],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": "where s j represents the embedding of j-th sentence, W r i /b r i and W r o /b r o are the learnable parameters w.r.t. i-th relation. We define the loss of evidence prediction under the given i-th relation as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L Evi = \u2212 1 Ne\u22121 1 Ns Ne\u22121 k=1 Ns j=1 (y i jk log(\u0177 i jk ) +(1 \u2212 y i jk ) log(1 \u2212\u0177 i jk ))",
"eq_num": "(3)"
}
],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
{
"text": "where y j ik \u2208 {0, 1}, and y j ik = 1 means that sentence j is an evidence for the i-th relation. It should be noted that in the training stage, we use the embedding of true relation in Eq. 2. In testing/inference stage, we use the embedding of the relation predicted by the relation extraction model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},
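{
"text": "A sketch of this evidence scorer (Eq. 2) with randomly initialized, learnable relation embeddings; this is an illustrative simplification rather than our exact implementation:\n\nimport torch\nimport torch.nn as nn\n\nclass EvidencePredictor(nn.Module):\n    def __init__(self, hidden_dim, num_relations):\n        super().__init__()\n        self.rel_emb = nn.Embedding(num_relations, hidden_dim)   # relation embeddings r_i\n        # For brevity a single bilinear map is shared here; Eq. 2 uses relation-specific W_{r_i}/b_{r_i}.\n        self.bilinear = nn.Bilinear(hidden_dim, hidden_dim, hidden_dim)\n        self.out = nn.Linear(hidden_dim, 1)                      # W_{r_o}, b_{r_o}\n\n    def forward(self, sent_emb, rel_id):\n        # sent_emb: (N_s, d) averaged token embeddings per sentence; rel_id: scalar LongTensor index.\n        r = self.rel_emb(rel_id).unsqueeze(0).repeat(sent_emb.size(0), 1)\n        f = self.bilinear(sent_emb, r)                           # F^i_j\n        return torch.sigmoid(self.out(f)).squeeze(-1)            # evidence probability per sentence\n\nDuring training rel_id is the index of the true relation; at inference time it is the relation predicted by the relation extraction head, as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evidence Prediction",
"sec_num": "3.2.1"
},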
{
"text": "In [Yao et al., 2019] the baseline relation extraction loss L RE and the evidence prediction loss are combined as the final objective function for the joint training:",
"cite_spans": [
{
"start": 3,
"end": 21,
"text": "[Yao et al., 2019]",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Joint Training",
"sec_num": "3.2.2"
},
{
"text": "L baseline = L RE + \u03bb * L Evi (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Joint Training",
"sec_num": "3.2.2"
},
{
"text": "where \u03bb > 0 is the weight factor to make tradeoffs between two losses, which is data dependent. In order to compare to our models, we utilize a BERT-baseline to predict relation extraction loss and evidence prediction loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Joint Training",
"sec_num": "3.2.2"
},
{
"text": "Pretrained language models have been shown to be able to implicitly model semantic relations internally. By looking at internal attention probabilities, Clark et al. [2019] has shown that BERT learns coreference and other semantic information in later BERT layers. In order to take advantage of this inherent property, our framework attempts to give more guidance to where correct semantics for RE are located. For each pair of head h and tail t k , we introduce the idea of using internal attention probabilities extracted from the last l internal BERT layers for evidence prediction. Let Q \u2208 R N h \u00d7L\u00d7(d/N h ) be the query and K \u2208 R N h \u00d7L\u00d7(d/N h ) be the key of the Multi-Head Self Attention layer, N h be the number of attention heads as described in [Vaswani et al., 2017] , L be the length of the input sequence and d be the embedding dimension. We first extract the output of multiheaded self attention (MHSA) A \u2208 R N h \u00d7L\u00d7L from a given layer in BERT as follows. These extraction outputs are shown as Attention Extractor in Fig. 2 .",
"cite_spans": [
{
"start": 153,
"end": 172,
"text": "Clark et al. [2019]",
"ref_id": "BIBREF4"
},
{
"start": 755,
"end": 777,
"text": "[Vaswani et al., 2017]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1032,
"end": 1038,
"text": "Fig. 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Attention = softmax( QK T \u221a d/N h )",
"eq_num": "(5)"
}
],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "Att",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "-head i = Attention(QW Q i , KW K i ) (6) A = Concat(Att-head 1 , \u2022 \u2022 \u2022 , Att-head n ) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
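{
"text": "Concretely, the attention probabilities of Eq. 5-7 can be read off a HuggingFace BERT encoder by requesting its attentions; a minimal sketch (function and variable names are ours):\n\nimport torch\nfrom transformers import BertModel\n\nmodel = BertModel.from_pretrained('bert-base-uncased', output_attentions=True)\n\ndef last_l_attentions(encoded_inputs, l=3):\n    # outputs.attentions is a tuple with one (batch, N_h, L, L) tensor per layer,\n    # i.e. the softmax(QK^T / sqrt(d/N_h)) probabilities of each multi-head self-attention.\n    outputs = model(**encoded_inputs)\n    return torch.stack(outputs.attentions[-l:], dim=1)   # (batch, l, N_h, L, L)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},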
{
"text": "For a given pair of head h and tail t k , we extract the attention probabilities corresponding to head and tail tokens to help relation extraction. Specifically, we concatenate the MHSAs for the last l BERT layers extracted by Eq. 7 to form an attention probability tensor as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "\u00c3 k \u2208 R l\u00d7N h \u00d7L\u00d7L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "Then, we calculate the attention probability representation of each sentence under the given headtail entity pair (h, t k ) as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "1. We first apply maximum pooling layer along the attention head dimension (i.e., second dimension) over\u00c3 k . The max values are helpful to show where a specific attention head might be looking at. Afterwards we apply mean pooling over the last l layers. We obtai\u00f1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "A s = 1 l l i=1 maxpool(\u00c3 ki )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": ",\u00c3 s \u2208 R L\u00d7L from these two steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "2. We then extract the attention probability tensor from the head and tail entity tokens according to the start and end positions of in the document. We average the attention probabilities over all the tokens for the head and tail embeddings to obtain\u00c3 sk \u2208 R L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "3. Finally, we generate sentence representations from\u00c3 sk by averaging over the attentions of each token in a given sentence from the document to obtain a sk \u2208 R Ns",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
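{
"text": "A compact sketch of the three pooling steps above (an illustrative reduction with assumed inputs: att is the (l, N_h, L, L) tensor \u00c3_k for one entity-guided input, entity_token_idx lists the token positions of the head and tail mentions, and sent_token_ids lists the token positions of each sentence):\n\nimport torch\n\ndef attention_sentence_features(att, entity_token_idx, sent_token_ids):\n    # Step 1: max over the attention heads, then mean over the last l layers -> (L, L).\n    a_s = att.max(dim=1).values.mean(dim=0)\n    # Step 2: average the rows of the head/tail entity tokens -> (L,).\n    a_sk = a_s[entity_token_idx].mean(dim=0)\n    # Step 3: average over the tokens of each sentence -> (N_s,).\n    return torch.stack([a_sk[idx].mean() for idx in sent_token_ids])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},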
{
"text": "Once we get the attention probabilities a sk , we pass the sentence embeddingsF i k from Eq. 2 through a transformer layer to encourage intersentence interactions and form the new represen-tation\u1e90 i k . We combine a sk with\u1e90 i k and feed it into a bilinear layer with sigmoid (\u03b4) for evidence sentence prediction as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Z i k = FFN(LayerNorm(Multi-Head(F i k ))) (8) y ia k = \u03b4(a sk W a i\u1e90 i k + b a i )",
"eq_num": "(9)"
}
],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "Finally, we define the loss of evidence prediction under a given i-th relation based on attention probability representation as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L a Evi = \u2212 1 Ne\u22121 1 Ns Ne\u22121 k=1 Ns j=1 (y ia jk log(\u0177 ia jk ) +(1 \u2212 y ia jk ) log(1 \u2212\u0177 ia jk )), f",
"eq_num": "(10)"
}
],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "where\u0177 ia jk is the j-th value of\u0177 ia k computed by Eq. 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Guiding BERT Attention with Evidence Prediction",
"sec_num": "3.2.3"
},
{
"text": "Here we combine the relation extraction loss and the attention guided evidence prediction loss as the final objective function for the joint training:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training with Evidence Guided Attention Probabilities",
"sec_num": "3.2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L E2GRE = L e RE + \u03bb a * L a Evi",
"eq_num": "(11)"
}
],
"section": "Joint Training with Evidence Guided Attention Probabilities",
"sec_num": "3.2.4"
},
{
"text": "where \u03bb a > 0 is the weight factor to make tradeoffs between two losses, which is data dependent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training with Evidence Guided Attention Probabilities",
"sec_num": "3.2.4"
},
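{
"text": "As a one-line illustration of Eq. 11 (\u03bb_a is data dependent; the value 1e-4 used below is the setting from our experiments in Section 4.2):\n\ndef e2gre_loss(loss_re, loss_evi, lambda_a=1e-4):\n    # L_E2GRE = L^e_RE + lambda_a * L^a_Evi\n    return loss_re + lambda_a * loss_evi",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Training with Evidence Guided Attention Probabilities",
"sec_num": "3.2.4"
},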
{
"text": "DocRED [Yao et al., 2019 ] is a large documentlevel dataset for the tasks of relation extraction and evidence prediction. It consists of 5053 documents, 132375 entities, and 56354 relations mined from Wikipedia articles. For each (head, tail) entity pair, there are 97 different relation types as candidates to predict. The first relation type is an \"NA\" relation between two entities, and the rest correspond to a WikiData relation name. Each of the head/tail pair that contains valid relations also includes a set of evidence sentences. We follow the same setting in [Yao et al., 2019 ] to split the data into Train/Development/Test for model evaluation for fair comparisons. The number of documents in Train/Development/Test is 3000/1000/1000, respectively. The dataset is evaluated with the metrics of relation extraction RE F1, and evidence Evi F1. There are also instances where relational facts may occur in both the development and train set, so we also evaluate Ign RE F1, which removes these relational facts.",
"cite_spans": [
{
"start": 7,
"end": 24,
"text": "[Yao et al., 2019",
"ref_id": "BIBREF24"
},
{
"start": 569,
"end": 586,
"text": "[Yao et al., 2019",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "Hyper-parameter Setting. The configuration for the BERT BASE model follows the setting in [Devlin et al., 2019] . We set the learning rate to 1e-5, \u03bb a to 1e-4, the hidden dimension of the relation vectors to 108, and extract internal attention probabilities from last three BERT layers.",
"cite_spans": [
{
"start": 90,
"end": 111,
"text": "[Devlin et al., 2019]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
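{
"text": "For reference, these settings correspond to a configuration along the lines of the following illustrative snippet (key names are ours, not those of the released code):\n\nconfig = {\n    'pretrained_lm': 'bert-base-uncased',   # BERT BASE encoder\n    'learning_rate': 1e-5,\n    'lambda_a': 1e-4,            # weight of the evidence prediction loss\n    'relation_dim': 108,         # hidden dimension of the relation vectors\n    'num_attention_layers': 3,   # attention probabilities from the last 3 BERT layers\n    'max_seq_length': 512,\n}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},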
{
"text": "We conduct our experiments by fine-tuning the BERT BASE model. The implementation is based on the HuggingFace [Wolf et al., 2020] PyTorch [Paszke et al., 2017] implementation of BERT 3 . The DocRED baseline and our E2GRE model have 115M parameters 4 . We implement a RoBERTa-large model for the public leaderboard. Baseline models. We compare our framework with the following published models. 1. Context Aware BiLSTM. [Yao et al., 2019] introduced the original baseline to DocRED in their paper. They used a context-aware BiLSTM (+ additional features such as entity type, coreference and ",
"cite_spans": [
{
"start": 110,
"end": 129,
"text": "[Wolf et al., 2020]",
"ref_id": "BIBREF23"
},
{
"start": 138,
"end": 159,
"text": "[Paszke et al., 2017]",
"ref_id": "BIBREF14"
},
{
"start": 419,
"end": 437,
"text": "[Yao et al., 2019]",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.2"
},
{
"text": "BiLSTM [Yao et al., 2019] 45 [Tang et al., 2020] 54.29 56.31 -53.70 55.60 -CorefBERT BASE 55.32 57.51 -54.54 56.96 -BERT-LSR BASE [Nan et al., 2020] 52.43 59.00 -56.97 59.05 -CorefRoBERTa LARGE 57 Table 1 : Main results (%) on the development and test set of DocRED. We report the official test score of the best checkpoint on the development set. Our E2GRE framework is competitive with the top of the current DocRED leaderboard, and is the best on the public leaderboard for evidence prediction.",
"cite_spans": [
{
"start": 7,
"end": 25,
"text": "[Yao et al., 2019]",
"ref_id": "BIBREF24"
},
{
"start": 29,
"end": 48,
"text": "[Tang et al., 2020]",
"ref_id": "BIBREF17"
},
{
"start": 130,
"end": 148,
"text": "[Nan et al., 2020]",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": null
},
{
"text": "distance) to encode the document. Head and tail entities are then extracted for relation extraction. 2. BERT Two-Step. [Wang et al., 2019] introduced finetuning BERT in a two-step process, where the model first does predicts the NA relation, and then predicts the rest of the relations. 3. HIN. [Tang et al., 2020] introduced using a hierarchical inference network to help aggregate the information from entity to sentence and further to document-level in order to obtain semantic reasoning over an entire document. 4. CorefBERT. introduced a way of pretraining BERT in order to encourage the model to look more at relations between the coreferences of different noun phrases. 5. BERT+LSR. [Nan et al., 2020] introduced an induced latent graph structure to help learn how the information should flow between entities and sentences within a document. 6. ATLOP. [Zhou et al., 2021] introduced adaptive thresholding and localized context pooling to help alleviate multi-label and multi-entity issues in document-level RE. Table 1 presents the main results of our proposed E2GRE framework, compared with other published results. From this table, we observe that:",
"cite_spans": [
{
"start": 119,
"end": 138,
"text": "[Wang et al., 2019]",
"ref_id": "BIBREF22"
},
{
"start": 295,
"end": 314,
"text": "[Tang et al., 2020]",
"ref_id": "BIBREF17"
},
{
"start": 690,
"end": 708,
"text": "[Nan et al., 2020]",
"ref_id": "BIBREF13"
},
{
"start": 860,
"end": 879,
"text": "[Zhou et al., 2021]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1019,
"end": 1026,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": null
},
{
"text": "\u2022 Our RE result is highly competitive with the best published models using BERT BASE model. Our proposed framework is also the only one which solves the dual task of evidence prediction, while taking advantage of evidence sentences for relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "\u2022 By replacing BERT BASE with RoBERTa LARGE , we obtain SOTA performance on the DocRED leaderboard. Our test result ranks top 3 on the public leaderboard for relation extraction, and top 1 for evidence prediction 5 , which shows that our E2GRE is both effective and mutually beneficial for relation extraction and evidence prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "We see that our framework significantly boosts F1 scores on both relation extract and evidence prediction compared to previous BERT BASE models. Even though we do not have the state-of-the-art performance on relation extraction, we are the first paper to show that with appropriate joint training of RE and evidence prediction we can effectively improve performance for both. 6 Table 2 compares our proposed E2GRE with the joint-training BERT baseline, as described in our model section on evidence prediction. We examine the comparison under two challenging scenarios in the dev set: 1) entity pairs which consists of multiple mentions in a document; and 2) entity pairs with multiple evidence sentences for evidence prediction.",
"cite_spans": [
{
"start": 376,
"end": 377,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "From Table 2 , we observe that: E2GRE shows consistent improvement in terms of F1 on both settings. This is due to the evidence guided attention probabilities from the pretrained LM which helps extract relevant contexts from the document. These relevant contexts further benefit the relation extraction and thus result in significant F1 improvement comparing to the baseline. In summary, our implementation of evidence prediction enhances the performance of relation extraction, and the utilization of a pretrained LM's internal attention probability is a more effective way for joint training.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "To explore the contribution of different components in our E2GRE, we conduct an ablation study in Table 3 . We start off with our full E2GRE, and consecutively remove the evidence-guided attention and entity-guided sequences. From this table, we observe that: both entity-guided sequences and evidence-guided attentions play a significant role in improving F1 on relation extraction and evidence prediction: entity-guided sequences improve RE by about 2 F 1 and evidence prediction by about 3.5 F 1. Evidence-guided attentions improve RE by about 1.7 F 1 and evidence prediction by about 1 F 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": "We also observe that entity-guided sequences tend to help more on precision in both tasks of RE and evidence prediction. Entity-guided sequences help by grounding the model to focus on the correct entities, allowing it to be more precise in its information extraction. In contrast, evidence-guided attentions tend to help more on recall in both tasks of RE and evidence prediction.These attentions help by giving more guidance to locate relevant contexts, therefore increasing the recall of RE and evidence prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": "Relation Extraction Table 3 : Ablation study on evidence guided attentions and entity guided input sequence components, by removing attention extraction module in Figure 2 , and entity-guided input sequences consecutively on the dev set. Table 4 shows the impact of the number of BERT layers from which the attention probabilities are extracted on evidence prediction and relation extraction. We observe that using the last 3 layers is better than using the last 6 layers. This is because later layers in pretrained LMs tend to focus more on semantic information, whereas earlier layers focus more on syntactic information [Clark et al., 2019] . We hypothesize that the last 6 layers may include noisy information related to syntax.",
"cite_spans": [
{
"start": 623,
"end": 643,
"text": "[Clark et al., 2019]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 20,
"end": 27,
"text": "Table 3",
"ref_id": null
},
{
"start": 163,
"end": 171,
"text": "Figure 2",
"ref_id": null
},
{
"start": 238,
"end": 245,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Recall Precision F1",
"sec_num": null
},
{
"text": "In Fig. 3 , we plot the change in RE F1 and EVI F1 between BERT BASE -Joint Training and our E2GRE-BERT BASE . We observe that RE F1 and EVI F1 are closely linked, with a coefficient of 0.7923, showing that when EVI F1 improves, RE F1 also improves. We observe that the centroid of the points lies in the first quadrant (2.7%, 5.8%), showing the overall improvement of our model. Furthermore, we analyze the effectiveness of our E2GRE model with smaller amounts of training data. Table. 5 shows that our model achieves much larger gains on RE F1 when training with 10, 30 and 50% of the data. E2GRE-BERT BASE is able to achieve bigger improvements with less data, as attention probabilities used for evidence prediction provides a effective guidance for relation extraction.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 9,
"text": "Fig. 3",
"ref_id": "FIGREF2"
},
{
"start": 480,
"end": 486,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis on Evidence/Relation Interdependence",
"sec_num": "4.6"
},
{
"text": "In this paper we propose a simple, yet effective joint training framework E2GRE (Entity and Evidence Guided Relation Extraction) for relation extraction and evidence prediction on DocRED. In order to more effectively exploit pretrained LMs for document-level RE, we first generate new entityguided sequences to feed into an LM, focusing the model on the relevant areas in the document. Then we utilize the internal attentions extracted from the last few layers to help guide the LM to focus on relevant sentences for evidence prediction. Our E2GRE method improves performance on both RE and evidence prediction, and achieves the state-of-the-art performance on the DocRED public leaderboard. We show that evidence prediction is an important task that helps RE models perform better. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Since the max input length for BERT is 512, for any input length longer than 512, we make use of a sliding window approach over the input and separate it into two chunks (Do-cRED does not have documents longer than 1024): the first chunk is the input sequence up to 512 tokens; the second chunk is the input sequence with an offset, such that offset + 512 reaches the end of the sequence. This is shown as \"[CLS]\"+ H + \"[SEP]\" + D[offset:end] + \"[SEP]\". We combine these two input chunks in our model by averaging the embeddings and BERT attention probabilities of the overlapping tokens in the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
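{
"text": "A rough sketch of this two-chunk sliding window over the tokenized input (illustrative only; the exact offset handling in our implementation may differ):\n\ndef two_chunk_windows(entity_tokens, doc_tokens, max_len=512):\n    # Budget left for the document after \"[CLS]\" + entity + \"[SEP]\" ... \"[SEP]\".\n    budget = max_len - len(entity_tokens) - 3\n    if len(doc_tokens) <= budget:\n        return [doc_tokens], 0\n    # The second chunk starts at an offset such that offset + budget reaches the end.\n    offset = len(doc_tokens) - budget\n    return [doc_tokens[:budget], doc_tokens[offset:]], offset\n\nEmbeddings and attention probabilities of the tokens in the overlapping region are then averaged across the two chunks, as described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},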
{
"text": "https://github.com/huggingface/pytorch-pretrained-BERT4 We will release the code after paper review.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "At the time of the submission date6 The original DocRED paper[Yao et al., 2019] did not report improvement of RE from joint training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Matching the blanks: Distributional similarity for relation learning",
"authors": [
{
"first": "",
"middle": [],
"last": "Livio Baldini",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Soares",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. Matching the blanks: Distributional similarity for relation learning. In ACL, 2019.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A shortest path dependency kernel for relation extraction",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2005,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Raymond Mooney. A shortest path dependency kernel for relation extraction. In EMNLP, Vancouver, British Columbia, Canada, Oc- tober 2005.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bidirectional recurrent convolutional neural network for relation classification",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Cai, Xiaodong Zhang, and Houfeng Wang. Bidi- rectional recurrent convolutional neural network for relation classification. In ACL, Berlin, Germany, Au- gust 2016.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Connecting the dots: Document-level neural relation extraction with edge-oriented graphs",
"authors": [
{
"first": "Fenia",
"middle": [],
"last": "Christopoulou",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fenia Christopoulou, Makoto Miwa, and Sophia Ana- niadou. Connecting the dots: Document-level neu- ral relation extraction with edge-oriented graphs. In EMNLP, Hong Kong, China, November 2019.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "What does BERT look at? an analysis of BERT's attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. What does BERT look at? an analysis of BERT's attention. In ACL, 2019.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL, 2019.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Span-based joint entity and relation extraction with transformer pretraining",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Eberts",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Ulges",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "09",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Eberts and Adrian Ulges. Span-based joint entity and relation extraction with transformer pre- training. 09 2019.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Attention guided graph convolutional networks for relation extraction",
"authors": [
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhijiang Guo, Yan Zhang, and Wei Lu. Attention guided graph convolutional networks for relation ex- traction. In ACL, Florence, Italy, July 2019.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Hierarchical relation extraction with coarse-to-fine grained attention",
"authors": [
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu Han, Pengfei Yu, Zhiyuan Liu, Maosong Sun, and Peng Li. Hierarchical relation extraction with coarse-to-fine grained attention. In EMNLP, Brus- sels, Belgium, October-November 2018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Documentlevel n-ary relation extraction with multiscale representation learning",
"authors": [
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Cliff",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL, Minneapolis, Minnesota",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robin Jia, Cliff Wong, and Hoifung Poon. Document- level n-ary relation extraction with multiscale repre- sentation learning. In NAACL, Minneapolis, Min- nesota, June 2019.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Graph enhanced dual attention network for document-level relation extraction",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Zhonghao",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Xi",
"suffix": ""
},
{
"first": "Shikun",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Li, Wei Ye, Zhonghao Sheng, Rui Xie, Xiangyu Xi, and Shikun Zhang. Graph enhanced dual attention network for document-level relation extraction. In Proceedings of the 28th International Conference on Computational Linguistics, 2020.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roberta",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach, 2019.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Reasoning with latent structure refinement for document-level relation extraction",
"authors": [
{
"first": "Guoshun",
"middle": [],
"last": "Nan",
"suffix": ""
},
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Sekuli\u0107",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2020,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoshun Nan, Zhijiang Guo, Ivan Sekuli\u0107, and Wei Lu. Reasoning with latent structure refinement for document-level relation extraction. In ACL, 2020.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Distant supervision for relation extraction beyond the sentence boundary",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk and Hoifung Poon. Distant supervision for relation extraction beyond the sentence boundary. In ACL, Valencia, Spain, April 2017. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language mod- els are unsupervised multitask learners. 2019.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hin: Hierarchical inference network for document-level relation extraction",
"authors": [
{
"first": "Hengzhu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yanan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiangxia",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Shi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2020,
"venue": "PAKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hengzhu Tang, Yanan Cao, Zhenyu Zhang, Jiangxia Cao, Fang Fang, Shi Wang, and Pengfei Yin. Hin: Hierarchical inference network for document-level relation extraction. In PAKDD, 2020.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural relation extraction for knowledge base enrichment",
"authors": [
{
"first": "Bayu Distiawan",
"middle": [],
"last": "Trisedya",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
},
{
"first": "Jianzhong",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bayu Distiawan Trisedya, Gerhard Weikum, Jianzhong Qi, and Rui Zhang. Neural relation extraction for knowledge base enrichment. In ACL, Florence, Italy, July 2019. ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. NeurIPS, 2017.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Entity, relation, and event extraction with contextualized span representations",
"authors": [
{
"first": "David",
"middle": [],
"last": "Wadden",
"suffix": ""
},
{
"first": "Ulme",
"middle": [],
"last": "Wennberg",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Wadden, Ulme Wennberg, Yi Luan, and Han- naneh Hajishirzi. Entity, relation, and event extrac- tion with contextualized span representations. In EMNLP, Hong Kong, China, November 2019.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Relation classification via multi-level attention CNNs",
"authors": [
{
"first": "Linlin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Gerard",
"middle": [],
"last": "De Melo",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linlin Wang, Zhu Cao, Gerard de Melo, and Zhiyuan Liu. Relation classification via multi-level atten- tion CNNs. In ACL, Berlin, Germany, August 2016. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Fine-tune bert for docred with two-step process",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christfried",
"middle": [],
"last": "Focke",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Sylvester",
"suffix": ""
},
{
"first": "Nilesh",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Wang, Christfried Focke, Rob Sylvester, Nilesh Mishra, and William Wang. Fine-tune bert for do- cred with two-step process, 2019.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Hug- gingface's transformers: State-of-the-art natural lan- guage processing, 2020.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "DocRED: A large-scale document-level relation extraction dataset",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Deming",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lixin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. DocRED: A large-scale document-level relation extraction dataset. In ACL, 2019.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Coreferential reasoning learning for language representation",
"authors": [
{
"first": "Deming",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jiaju",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deming Ye, Yankai Lin, Jiaju Du, Zhenghao Liu, Maosong Sun, and Zhiyuan Liu. Coreferential rea- soning learning for language representation. ArXiv, abs/2004.06870, 2020.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Augmenting end-to-end dialog systems with commonsense knowledge",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Erik",
"middle": [
"Cambria"
],
"last": "Cambria",
"suffix": ""
},
{
"first": "Iti",
"middle": [],
"last": "Chaturvedi",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Subham",
"middle": [],
"last": "Biswas",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Young, Erik Cambria Cambria, Iti Chaturvedi, Minlie Huang, Hao Zhou, and Subham Biswas. Augmenting end-to-end dialog systems with com- monsense knowledge. In AAAI, 2018.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Improved neural relation detection for knowledge base question answering",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Kazi Saidul",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Cicero",
"middle": [],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mo Yu, Wenpeng Yin, Kazi Saidul Hasan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. Improved neural relation detection for knowledge base ques- tion answering. In ACL, July 2017.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1083--1106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. Kernel methods for relation extrac- tion. Journal of Machine Learning Research, 3:1083-1106, 08 2003.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": null,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. Relation classification via convolu- tional deep neural network. In COLING.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Position-aware attention and supervised data improve slot filling",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor An- geli, and Christopher D. Manning. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), 2017.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Improving relation classification by entity pair graph",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Huaiyu",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianwei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Youfang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "101",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Zhao, Huaiyu Wan, Jianwei Gao, and Youfang Lin. Improving relation classification by entity pair graph. In Wee Sun Lee and Taiji Suzuki, editors, ACML, volume 101, Nagoya, Japan, 2019.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Document-level relation extraction with adaptive thresholding and localized context pooling",
"authors": [
{
"first": "Wenxuan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": null,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenxuan Zhou, Kevin Huang, Tengyu Ma, and Jing Huang. Document-level relation extraction with adaptive thresholding and localized context pooling. In AAAI, 2021.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Plot showing the change in RE F1 and EVI F1 from BERT BASE -Joint Training to our E2GRE-BERT BASE model for each document in the dev set.",
"uris": null
},
"TABREF2": {
"text": "Joint Training 52.42 43.88 47.77 51.20 37.55 43.33 E2GRE-BERT BASE 55.84 47.75 51.47 53.04 40.78 46.11 Evidence Predictions BERT BASE -Joint Training 42.59 31.21 36.02 40.44 34.68 37.34 E2GRE-BERT BASE 42.04 37.78 39.79 38.34 40.83 39.54",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Models</td><td colspan=\"3\">Multi-Mention</td><td colspan=\"3\">Multi-Evidence</td></tr><tr><td/><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td></tr><tr><td>Relation Extraction</td><td/><td/><td/><td/><td/></tr><tr><td>BERT BASE -</td><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF3": {
"text": "",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>: Analysis of how Evidence Prediction (EP)</td></tr><tr><td>impact on Relation Extraction (RE) in the joint train-</td></tr><tr><td>ing framework. Results on recall, precision and F1 are</td></tr><tr><td>shown on the dev set with BERT base model.</td></tr></table>"
},
"TABREF6": {
"text": "Analysis on the number of BERT layers for relation extraction and evidence prediction. Results are shown on dev set.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Model</td><td>10%</td><td>30%</td><td>50%</td></tr><tr><td>Relation Extraction</td><td/><td/><td/></tr><tr><td colspan=\"4\">BERTBASE-Joint Training 40.00 47.12 52.88</td></tr><tr><td>E2GRE-BERTBASE</td><td colspan=\"3\">47.37 53.48 56.55</td></tr><tr><td>Evidence Prediction</td><td/><td/><td/></tr><tr><td colspan=\"4\">BERTBASE-Joint Training 21.15 30.70 38.25</td></tr><tr><td>E2GRE-BERTBASE</td><td colspan=\"3\">36.27 41.92 44.82</td></tr></table>"
},
"TABREF7": {
"text": "Analysis on how our E2GRE model performs on 10%, 30%, and 50% data for relation extraction.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}