{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:52:11.785070Z"
},
"title": "A Hierarchical Entity Graph Convolutional Network for Relation Extraction across Documents",
"authors": [
{
"first": "Tapas",
"middle": [],
"last": "Nayak",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Distantly supervised datasets for relation extraction mostly focus on sentence-level extraction, and they cover very few relations. In this work, we propose cross-document relation extraction, where the two entities of a relation tuple appear in two different documents that are connected via a chain of common entities. Following this idea, we create a dataset for two-hop relation extraction, where each chain contains exactly two documents. Our proposed dataset covers a higher number of relations than the publicly available sentencelevel datasets. We also propose a hierarchical entity graph convolutional network (HEGCN) model for this task that improves performance by 1.1% F1 score on our two-hop relation extraction dataset, compared to some strong neural baselines.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Distantly supervised datasets for relation extraction mostly focus on sentence-level extraction, and they cover very few relations. In this work, we propose cross-document relation extraction, where the two entities of a relation tuple appear in two different documents that are connected via a chain of common entities. Following this idea, we create a dataset for two-hop relation extraction, where each chain contains exactly two documents. Our proposed dataset covers a higher number of relations than the publicly available sentencelevel datasets. We also propose a hierarchical entity graph convolutional network (HEGCN) model for this task that improves performance by 1.1% F1 score on our two-hop relation extraction dataset, compared to some strong neural baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The idea of distant supervision (Mintz et al., 2009) eliminates the need for manual annotation for obtaining training data for relation extraction. Previously, this idea is used mostly to create sentencelevel datasets. However, the assumption of distant supervision, that the two entities of a tuple must appear in the same sentence, is overly strict. We may not find an adequate number of evidence sentences for many relations as both entities do not appear in the same sentence. The relation extraction models built on such data can find relations only for a small number of relations and the relations of most knowledge bases (KBs) will be out of the reach of such models.",
"cite_spans": [
{
"start": 32,
"end": 52,
"text": "(Mintz et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address this issue, we propose a multi-hop relation extraction task where the subject and object entities of a tuple can appear in two different documents, and these two documents are connected via some common entities. We can create a chain of entities from the subject entity to the object entity of a tuple via the common entities across multiple documents. Each link in this chain represents a relation between the entities located at the endpoints of the link. We can determine the relation between the subject and object entities of a tuple by following this chain of relations. This approach can give training instances for more relations than sentence-level distant supervision. Following the proposed multi-hop approach, we create a two-hop relation extraction dataset for the task. Each instance of this dataset has two documents, where the first document contains the subject entity and the second document contains the object entity of a tuple. These two documents are connected via at least one common entity. This idea can be extended to create an N-hop dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also propose a hierarchical entity graph convolutional network (HEGCN) model for the task. Our proposed model has two levels of graph convolutional networks (GCNs). The first-level GCN of the hierarchy is applied to the entity mention level graph of every document to capture the relations among the entity mentions within a document. The second-level GCN of the hierarchy is applied on a unified entity-level graph, which is built using all the unique entities present in the document chain. This entity-level graph can be built on the document chain of any length and it can capture the relations among the entities across the multiple documents in the chain. Our proposed HEGCN model improves the performance on our two-hop dataset. To summarize, the following are the contributions of this paper:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) We propose a multi-hop relation extraction task and create a two-hop dataset. This dataset has more relations than other popular distantly supervised sentence-level or document-level relation extraction datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) We propose a novel hierarchical entity graph convolutional network (HEGCN) for multi-hop relation extraction. Our proposed model improves the F1 score by 1.1% on our two-hop dataset, compared to strong neural baselines 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multi-hop relation extraction can be defined as follows. Consider two entities, a subject entity e s and an object entity e o , and a chain of documents",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formalization",
"sec_num": "2"
},
{
"text": "D = {D s \u2192 D 1 \u2192 D 2 \u2192 ... \u2192 D n \u2192 D o } where e s \u2208 D s and e o \u2208 D o . There exists a chain of entities e s \u2192 c 1 \u2192 c 2 \u2192 ... \u2192 c n+1 \u2192 e o where c 1 \u2208 {D s , D 1 }, c 2 \u2208 {D 1 , D 2 }, ..., c n+1 \u2208 {D n , D o }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formalization",
"sec_num": "2"
},
{
"text": "The task is to find the relation between e s and e o from a pre-defined set of relations R \u222a {None}, where R is the set of relations and None indicates that none of the relations in R holds between e s and e o . A simpler version of this task is two-hop relation extraction where D s and D o are directly connected by at least one common entity. In this paper, we focus on two-hop relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formalization",
"sec_num": "2"
},
{
"text": "3 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formalization",
"sec_num": "2"
},
{
"text": "Distantly supervised datasets are very popular for relation extraction (Nayak et al., 2021) . Riedel et al. (2010) (NYT10) and Hoffmann et al. (2011) (NYT11) mapped Freebase tuples to New York Times (NYT) articles to obtain such datasets. The NYT10 and NYT11 datasets have been used extensively by researchers for relation extraction. TA-CRED (Zhang et al., 2017) is another dataset created from the TAC KBP evaluations. FewRel 2.0 (Gao et al., 2019 ) is a few-shot relation extraction dataset. All these datasets are created at the sentence level. DocRED (Yao et al., 2019) is a document-level relation extraction dataset created using Wikipedia articles and Wikidata items. To the best of our knowledge, there does not exist any relation extraction dataset which involves multiple documents.",
"cite_spans": [
{
"start": 71,
"end": 91,
"text": "(Nayak et al., 2021)",
"ref_id": "BIBREF12"
},
{
"start": 94,
"end": 122,
"text": "Riedel et al. (2010) (NYT10)",
"ref_id": null
},
{
"start": 343,
"end": 363,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 432,
"end": 449,
"text": "(Gao et al., 2019",
"ref_id": "BIBREF3"
},
{
"start": 556,
"end": 574,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction Datasets",
"sec_num": "3.1"
},
{
"text": "Neural models have performed well on distantly supervised datasets for relation extraction. Zeng et al. (2014 Zeng et al. ( , 2015 used convolutional network with maxpooling on word embeddings for this task, whereas Shen and Huang (2016); Jat et al. (2017) ; Nayak and Ng (2019) used word-level attention model for single-instance sentence-level relation extraction. Lin et al. (2016) ; Vashishth et al. (2018) ; Ye and Ling (2019) used neural networks in a multiinstance setting to find a relation from a bag of independent sentences. Recently, graph convolutional network-based (GCN) (Kipf and Welling, 2017) models have become popular for many NLP tasks. These models work on non-linear graph structures. ; Vashishth et al. (2018) ; Guo et al. (2019) ; Zeng et al. (2020) used graph convolution networks for relation extraction. They consider each token in a sentence as a node in the graph and use a syntactic dependency tree to create a graph structure among the nodes. Recently, neural joint extraction approaches (Takanobu et al., 2019; Nayak and Ng, 2020) were proposed for this task.",
"cite_spans": [
{
"start": 92,
"end": 109,
"text": "Zeng et al. (2014",
"ref_id": "BIBREF27"
},
{
"start": 110,
"end": 130,
"text": "Zeng et al. ( , 2015",
"ref_id": "BIBREF26"
},
{
"start": 216,
"end": 238,
"text": "Shen and Huang (2016);",
"ref_id": "BIBREF17"
},
{
"start": 239,
"end": 256,
"text": "Jat et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 259,
"end": 278,
"text": "Nayak and Ng (2019)",
"ref_id": "BIBREF13"
},
{
"start": 367,
"end": 384,
"text": "Lin et al. (2016)",
"ref_id": "BIBREF10"
},
{
"start": 387,
"end": 410,
"text": "Vashishth et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 586,
"end": 610,
"text": "(Kipf and Welling, 2017)",
"ref_id": "BIBREF8"
},
{
"start": 710,
"end": 733,
"text": "Vashishth et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 736,
"end": 753,
"text": "Guo et al. (2019)",
"ref_id": "BIBREF4"
},
{
"start": 756,
"end": 774,
"text": "Zeng et al. (2020)",
"ref_id": "BIBREF28"
},
{
"start": 1020,
"end": 1043,
"text": "(Takanobu et al., 2019;",
"ref_id": "BIBREF20"
},
{
"start": 1044,
"end": 1063,
"text": "Nayak and Ng, 2020)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction Models",
"sec_num": "3.2"
},
{
"text": "Welbl et al. 2018proposed a multi-hop QA dataset (WikiHop) where the answer can only be found using more than one document. Several neural models have been proposed (Song et al., 2018; De Cao et al., 2019; Kundu et al., 2019) to solve this task. We have created a two-hop relation extraction dataset (THRED) from this Wik-iHop dataset. The major difference between these two datasets is that THRED contains many None relations, whereas in the WikiHop dataset, every instance has a correct answer. Extracting the None relation is challenging, since None occurs when no relations in R exist. When the number of relations in R increases, it becomes more difficult to predict the relations. As such, we believe the multi-hop RE task is more challenging than the multi-hop QA task.",
"cite_spans": [
{
"start": 165,
"end": 184,
"text": "(Song et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 185,
"end": 205,
"text": "De Cao et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 206,
"end": 225,
"text": "Kundu et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-hop QA versus Multi-hop RE",
"sec_num": "3.3"
},
{
"text": "We create a two-hop relation extraction dataset from a multi-hop question-answering (QA) dataset WikiHop (Welbl et al., 2018) . Welbl et al. (2018) defined the multi-hop QA task as follows: Given a set of supporting documents D s and a set of candidate answers C a which are mentioned in D s , the goal is to find the correct answer a * \u2208 C a for a question by drawing on the supporting documents. They used Wikipedia articles and Wikidata (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) tuples for creating this dataset. Each positive tuple (e s , e o , r p ) in Wikidata has two entities, a subject entity e s and an object entity e o , and a positive relation r p between the subject and object entity. The questions are created by combining the subject entity e s and the relation r p , and the object entity e o is the correct answer a * for a given question. The other candidate answers are carefully chosen from Wikidata entities so that they have a similar type as the correct answer. The supporting documents are chosen in such a way that at least two documents are needed to find the correct answer. This means the subject entity e s and the object entity e o do not appear in the same document. They used a bipartite graph partition technique to create the dataset. In this bipartite graph, vertices on one side correspond to Wikidata entities, and vertices on the other side correspond to Wikipedia articles. An edge is created between an entity vertex and a document vertex if this document contains the entity. As we traverse the graph starting from vertex e s , it visits many document vertices and entity vertices. This constitutes the supporting document set and candidate answer set. If the candidate answer set does not contain the object entity e o which is the correct answer, this instance is discarded. They also limited the length of the traversal to three documents. Welbl et al. (2018) only released the supporting documents, questions, and candidate answers for their dataset. They did not release the connecting entities.",
"cite_spans": [
{
"start": 105,
"end": 125,
"text": "(Welbl et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 128,
"end": 147,
"text": "Welbl et al. (2018)",
"ref_id": "BIBREF23"
},
{
"start": 440,
"end": 470,
"text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 1875,
"end": 1894,
"text": "Welbl et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Construction",
"sec_num": "4"
},
{
"text": "We convert this WikiHop dataset into a two-hop relation extraction dataset. The subject entities and the candidate entities can be easily found in the documents using string matching. We use a named entity recognizer from spaCy 2 to find the other entities in the documents and these entities can link these documents. We find that most of the WikiHop question-answer instances are two-hop instances. That means for most of the instances of WikiHop dataset, there is at least one document pair in the supporting document set where the first document of the pair contains the subject entity and the second document of the pair contains the correct answer, and these two documents in the pair are directly connected via some third entity. To simplify the multi-hop relation extraction task, we fix the hop count at 2. For every instance of the WikiHop dataset, we can easily find the subject entity e s and the positive relation r p from the question. The correct answer a * is the object entity of a 2 https://spacy.io/ positive tuple. (e s , a * , r p ) is the positive tuple for relation extraction. For any other candidate answer e w \u2208 C a \u2212 {a * }, the entity pair (e s , e w ) is considered as a None tuple if there exists no relation among the four pairs (e s , e w ), (e w , e s ), (e w , e o ), and (e o , e w ) in Wikidata. We check for the no relation condition for these four entity pairs involving e w , e s , and e o to reduce the distant supervision noise in the dataset for None tuples. We create a None candidate set C n with each e w \u2208 C a \u2212 {a * }. We first find all possible pairs of documents from the supporting document set D s such that the first document of the pair contains the subject entity e s and the second document of the pair contains either the entity a * or one of the entities from C n . We discard those pairs of documents that do not contain any common entity. The document pairs where the second document contains the entity a * are considered as a document chain for the positive tuple (e s , a * , r p ) where r p \u2208 R. All other document pairs where the second document contains an entity from the set C n are considered as a document chain for None tuple (e s , e w , N one) where e w \u2208 C n . In this way, using distant supervision, we can create a dataset for two-hop relation extraction. Each instance of this dataset has a chain of documents D = {D s \u2192 D o } of length 2 that is the textual source of a tuple (e s , e o , r). The document D s contains the subject entity e s and the document D o contains the object entity e o . The two documents are connected with at least one common entity c. There exists at least one entity chain e s \u2192 c \u2192 e o in the document chain. The goal is to find the relation r between e s and e o from the set R \u222a {None}. We refer to this two-hop dataset as THRED (two-hop relation extraction dataset) in the remaining sections of this paper. We manually checked 100 randomly selected positive samples and 100 randomly selected negative samples, and found that 76% of the selected positive samples and 82% of the selected negative samples are accurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Construction",
"sec_num": "4"
},
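A minimal Python sketch of the pairing procedure described above (illustrative only; the released code is linked in the footnote). It assumes WikiHop-style fields (subject, candidates, answer, supporting documents), reduces entity spotting to naive case-insensitive substring matching instead of spaCy NER, and omits the Wikidata check used to filter None candidates.

```python
# Minimal sketch of the two-hop chain construction (not the authors' code).
# Entity spotting is naive substring matching; the paper uses spaCy NER for the
# connecting entities and a Wikidata check before accepting a None tuple.

def contains(doc, entity):
    return entity.lower() in doc.lower()

def two_hop_chains(subject, candidates, answer, supports, connecting_entities):
    """Return (doc_s, doc_o, candidate, label) chains for one WikiHop-style instance.

    connecting_entities: entities (e.g. from an NER pass) used to test whether
    two supporting documents share at least one common entity.
    """
    chains = []
    for doc_s in supports:
        if not contains(doc_s, subject):
            continue                      # first document must contain the subject
        for doc_o in supports:
            if doc_o is doc_s:
                continue
            common = [e for e in connecting_entities
                      if contains(doc_s, e) and contains(doc_o, e)]
            if not common:
                continue                  # the pair must be linked by a common entity
            for cand in candidates:
                if contains(doc_o, cand):
                    # a candidate other than the answer only becomes a None tuple
                    # after the Wikidata no-relation check described in the text
                    label = "positive" if cand == answer else "None-candidate"
                    chains.append((doc_s, doc_o, cand, label))
    return chains
```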
{
"text": "The training, validation, and test data of the Wiki-Hop dataset are created using distant supervision, but the validation and test data are manually verified. WikiHop test data is blind and not released. So we use their validation data to create the test data for our task and use their training data for our training and validation purposes. We include the statistics of our two-hop relation extraction dataset Question located in administrative entity Zoo Lake Candidates Gauteng, Tanzania Answer Gauteng",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Statistics",
"sec_num": "4.1"
},
{
"text": "Zoo Lake is a popular lake and public park in Johannesburg , South Africa . It is part of the Hermann Eckstein Park and is opposite the Johannesburg Zoo . The Zoo Lake consists of two dams , an upper feeder dam , and a larger lower dam , both constructed in natural marshland watered by the Parktown Spruit .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Doc1",
"sec_num": null
},
{
"text": "Johannesburg is the largest city in South Africa and is one of the 50 largest urban areas in the world . It is the provincial capital of Gauteng , which is the wealthiest province in South Africa .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Doc2",
"sec_num": null
},
{
"text": "Mozambique is a country in Southeast Africa bordered by the Indian Ocean to the east , Tanzania to the north , Malawi and Zambia to the northwest , Zimbabwe to the west , and Swaziland and South Africa to the southwest . The tuple (Doc1, Zoo Lake, Doc2, Gauteng, located in administrative entity) constitutes a positive instance in the THRED dataset. The tuple (Doc1, Zoo Lake, Doc3, Tanzania, None) constitutes a negative instance in the THRED dataset. in Table 2 . We include the statistics on the number of common entities present in the two documents of a chain in Table 3 . We split the training data randomly, with 90% for training and 10% for validation. From Table 2 , we see that the dataset contains a much higher number of None tuples than the positive tuples. So we randomly select None tuples so that the number of None tuples is the same as the number of positive tuples for training and validation. For evaluation, we consider the entire test dataset. From Table 4 , we see that our THRED dataset contains more relations than any other distantly supervised relation extraction datasets such as the New York Times (Riedel et al., 2010; Hoffmann et al., 2011) or DocRED (Yao et al., 2019) .",
"cite_spans": [
{
"start": 1128,
"end": 1149,
"text": "(Riedel et al., 2010;",
"ref_id": "BIBREF16"
},
{
"start": 1150,
"end": 1172,
"text": "Hoffmann et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 1183,
"end": 1201,
"text": "(Yao et al., 2019)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 457,
"end": 464,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 569,
"end": 576,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 667,
"end": 674,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 972,
"end": 979,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Doc3",
"sec_num": null
},
{
"text": "We propose a hierarchical entity graph convolutional network (HEGCN) for multi-hop relation #Document chains #Common entities Train Test 1 92,140 3,615 2 36,275 1,161 3 10,824 374 4 3,170 113 \u22655 1,497 57 extraction. We encode the documents in a document chain using a bi-directional long short-term memory (BiLSTM) layer (Hochreiter and Schmidhuber, 1997) . On top of the BiLSTM layer, we use two graph convolutional networks (GCN), one after another in a hierarchy. In the first level of the GCN hierarchy, we construct a separate entity mention graph on each document of the chain using all the entities mentioned in that document. Each mention of an entity in a document is considered as a separate node in the graph. We use a graph convolutional network (GCN) to represent the entity mention graph of each document to capture the relations among the entity mentions in the document. We then construct a unified entity-level graph across all the documents in the chain. Each node of this entity-level graph represents a unique entity in the document chain. Each common entity between two documents in the chain is represented by a single node in the graph. We use a GCN to represent this entity-level graph to capture the relations among the entities across the documents. We concatenate the representations of the nodes of the subject entity and object entity and pass it to a feed-forward layer with softmax for relation classification.",
"cite_spans": [
{
"start": 336,
"end": 370,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 109,
"end": 209,
"text": "#Common entities Train Test 1 92,140 3,615 2 36,275 1,161 3 10,824 374 4 3,170 113 \u22655",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Proposed HEGCN Model",
"sec_num": "5"
},
{
"text": "We use two types of embedding vectors: (1) word embedding vector w \u2208 R dw (2) entity token indicator embedding vector z \u2208 R dz , which indicates if a word belongs to the subject entity, object entity, or common entities. The subject and object entities are assigned the embedding index of 2 and 3, respectively. The common entities in the document chain are assigned embedding index in an increasing order starting from index 4. The same entities present in two documents in the chain get the same embedding index. Embedding index 0 is used for padding and 1 is used for all other tokens in the documents. A document is represented using a sequence of vectors {x 1 , x 2 , ....., x n } where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Documents Encoding Layer",
"sec_num": "5.1"
},
{
"text": "x t = w t z t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Documents Encoding Layer",
"sec_num": "5.1"
},
{
"text": "represents the concatenation of vectors and n is the document length. We concatenate all documents in a chain sequentially by using a document separator token. These token vectors are passed to a BiLSTM layer to capture the interaction among the documents in a chain. \u2212 \u2192 h t \u2208 R (dw+dz) and \u2190 \u2212 h t \u2208 R (dw+dz) are the output at the tth step of the forward LSTM and backward LSTM respectively. We concatenate them to obtain the tth BiLSTM output h t \u2208 R 2(dw+dz) .",
"cite_spans": [
{
"start": 280,
"end": 287,
"text": "(dw+dz)",
"ref_id": null
},
{
"start": 304,
"end": 311,
"text": "(dw+dz)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Documents Encoding Layer",
"sec_num": "5.1"
},
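An illustrative PyTorch sketch of the documents encoding layer described above; the paper does not specify an implementation framework, so the module and its names are assumptions. Word embeddings and entity token indicator embeddings are concatenated and fed to a BiLSTM over the concatenated document chain.

```python
# Illustrative sketch (not the authors' implementation) of the documents encoding layer.

import torch
import torch.nn as nn

class DocumentEncoder(nn.Module):
    def __init__(self, vocab_size, num_indicator_ids, d_w=300, d_z=20):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_w)
        # indicator ids: 0 padding, 1 other tokens, 2 subject, 3 object, 4+ common entities
        self.indicator_emb = nn.Embedding(num_indicator_ids, d_z, padding_idx=0)
        # forward/backward hidden size d_w + d_z = 320, so the BiLSTM output is 2(d_w + d_z)
        self.bilstm = nn.LSTM(d_w + d_z, d_w + d_z, batch_first=True, bidirectional=True)

    def forward(self, word_ids, indicator_ids):
        # word_ids, indicator_ids: (batch, seq_len) over the concatenated document chain
        x = torch.cat([self.word_emb(word_ids), self.indicator_emb(indicator_ids)], dim=-1)
        h, _ = self.bilstm(x)     # (batch, seq_len, 2 * (d_w + d_z))
        return h
```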
{
"text": "Kipf and Welling (2017) proposed graph convolutional networks (GCN) which work on graph structures. Here, we describe the GCN which is used in our model. We represent a graph G with m nodes using an adjacency matrix A of size m \u00d7 m. If there is an edge between node i and node j, then A ij = A ji = 1. We also add self loops, A ii = 1, in the graph G. We normalize the adjacency matrix A by using symmetric normalization proposed by Kipf and Welling (2017) . A diagonal node degree matrix D of size m \u00d7 m is used in the normalization of A. deg(v i ) is the number of edges that are connected to the node v i in G and\u00c2 is the corresponding normalized adjacency matrix of G. Each node of the graph receives the hidden representation of its neighboring nodes from the (l \u2212 1)th layer and uses the following operation to update its own hidden representation.",
"cite_spans": [
{
"start": 433,
"end": 456,
"text": "Kipf and Welling (2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Entity Graph Convolutional Layers",
"sec_num": "5.2"
},
{
"text": "D \u2212 1 2 ij = \uf8f1 \uf8f2 \uf8f3 1 \u221a deg(v i ) if i = j 0 otherwis\u00ea A = D \u2212 1 2 AD \u2212 1 2 g l i = ReLU( m j=1\u00c2 ij W l g l\u22121 j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Entity Graph Convolutional Layers",
"sec_num": "5.2"
},
{
"text": "W l is the trainable weight matrix of the lth layer of the GCN, g l i is the representation of the ith node of the graph at the lth layer. If g l i has the dimension of d g , then the dimension of the weight matrix W l is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Entity Graph Convolutional Layers",
"sec_num": "5.2"
},
{
"text": "d g \u00d7 d g . g 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Entity Graph Convolutional Layers",
"sec_num": "5.2"
},
{
"text": "i is the initial input to the GCN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Entity Graph Convolutional Layers",
"sec_num": "5.2"
},
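The GCN update above can be written compactly as a small module; the following is an illustrative PyTorch sketch, not the authors' implementation. It adds self-loops, applies the symmetric normalization \u00c2 = D^{-1/2} A D^{-1/2}, and returns ReLU(\u00c2 W g).

```python
# Illustrative sketch of one GCN layer as defined above. Plain PyTorch, no graph library.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)      # W^l

    def forward(self, g, adj):
        # g: (num_nodes, dim) node representations g^{l-1}
        # adj: (num_nodes, num_nodes) binary adjacency matrix A
        a = adj.clone()
        a.fill_diagonal_(1.0)                              # self-loops: A_ii = 1
        deg = a.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))             # D^{-1/2}
        a_hat = d_inv_sqrt @ a @ d_inv_sqrt                # normalized adjacency
        return torch.relu(a_hat @ self.weight(g))          # g^l
```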
{
"text": "We construct an entity mention graph (EMG) for each document in the chain on top of the document encoding layer. An entity string may appear at multiple locations in a document and each appearance is considered as an entity mention. We add a node in the graph for each entity mention. We connect two entity mention nodes if they appear in the same sentence (EMG type 1 edge). We assume that since they appear in the same sentence, there may exist some relation between them. We also connect two entity mention nodes if the strings of the two entity mentions are identical (EMG type 2 edge). Let e 1 , . . . , e l be the sequence of entity mention nodes listed in the order of their appearance in a document. We connect nodes e i and e i+1 (1 \u2264 i < l) with an edge (EMG type 3 edge). EMG type 3 edges create a linear chain of the entity mentions and ensure that the graph is connected. We use a graph convolutional network on this graph topology to capture the relations among the entity mentions in a document. We obtain the initial representations of the entity mention nodes from the hidden representations of the document encoding layer. We concatenate the hidden vector of the first token of an entity mention, the hidden vector of its last token, and a context vector to obtain the entity mention node representation. The context vector is obtained using an attention mechanism on the tokens of the sentence in which the entity mention appears. 2(dw+dz) and h e \u2208 R 2(dw+dz) are the hidden vectors from the document encoding layer of the first and last token of an entity mention. W \u2208 R 4(dw+dz)\u00d72(dw+dz) is a trainable weight matrix, h t \u2208 R 2(dw+dz) is the hidden vector of the tth token of the sentence in which the entity mention is located, and a t is the normalized attention score for the tth token with respect to the entity mention. k is the length of the sentence in which the entity mention is located, and c \u2208 R 2(dw+dz) is the context vector. The entity mention node vector q \u2208 R 6(dw+dz) of the ith node in the graph is passed to the GCN as g 0",
"cite_spans": [
{
"start": 1450,
"end": 1458,
"text": "2(dw+dz)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Mention Graph Layer",
"sec_num": "5.2.1"
},
{
"text": "p = h b h e , s t = tanh(p T W)h t a = softmax([s 1 s 2 . . . s k ] T ) c = k t=1 a t h t , q = p c h b \u2208 R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Mention Graph Layer",
"sec_num": "5.2.1"
},
{
"text": "i . The parameters of this GCN are shared across the documents in a chain. This layer of the model is referred to as entity mention-level graph convolutional network or EMGCN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Mention Graph Layer",
"sec_num": "5.2.1"
},
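A minimal sketch of how the three EMG edge types could be assembled into an adjacency matrix; representing a mention as a (sentence index, string) pair is an assumption made for this illustration, and the attention-based mention representation is omitted.

```python
# Illustrative sketch of the entity mention graph construction (not the authors' code).

import itertools
import numpy as np

def build_emg_adjacency(mentions):
    """mentions: list of (sent_idx, text) pairs in order of appearance in one document."""
    n = len(mentions)
    adj = np.zeros((n, n), dtype=np.float32)
    for i, j in itertools.combinations(range(n), 2):
        si, ti = mentions[i]
        sj, tj = mentions[j]
        if si == sj:                          # EMG type 1: same sentence
            adj[i, j] = adj[j, i] = 1.0
        if ti.lower() == tj.lower():          # EMG type 2: identical mention strings
            adj[i, j] = adj[j, i] = 1.0
    for i in range(n - 1):                    # EMG type 3: linear chain over mentions
        adj[i, i + 1] = adj[i + 1, i] = 1.0
    return adj
```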
{
"text": "We construct a unified entity graph (EG) on top of the entity mention graphs. First, we construct an entity graph for each document, where each unique entity string is represented as an entity node in the graph. We add an edge between two entity nodes if the strings of the two entities appear together in at least one sentence in the document (EG type 1 edge). We also form a sequence of entity nodes based on the order of appearance of the entities in a document, where only the first occurrence of multiple occurrences of an entity is kept in the sequence. We connect two consecutive entity nodes in the sequence with an edge (EG type 2 edge). This ensures that the entire entity graph remains connected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Graph Layer",
"sec_num": "5.2.2"
},
{
"text": "We construct one entity graph for each document in the document chain. We unify the entity graphs of multiple documents by merging the nodes of common entities between them. The unified entity graph contains all the nodes from the multiple entity graphs, but the common entity nodes which appear in two entity graphs are merged into one node in the unified graph. There is an edge between two entity nodes in the unified entity graph if there exists an edge between them in any of the entity graphs of the documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Graph Layer",
"sec_num": "5.2.2"
},
{
"text": "We obtain the initial representations of the entity nodes from the GCN outputs of the entity mention graphs. For the common entities between two documents, we average the GCN outputs of the entity mention nodes that have an identical string as the entity from the entity mention graphs of the two documents. For other entity nodes that appear only in one document, we average the GCN outputs of the entity mention nodes that have an identical string as the entity from the entity mention graph of that document. Each entity vector is passed to another graph convolutional network as g 0 i which represents the initial representation of the ith entity node in the unified entity graph. We use a graph convolutional network on this graph topology to capture the relations among the entities across the documents in the document chain. This layer of the model is referred to as entity-level graph convolutional network or EGCN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Graph Layer",
"sec_num": "5.2.2"
},
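A minimal sketch of the unification step described above: per-document entity graphs are merged by mapping identical entity strings to a single node, and an edge is kept if it exists in any document's entity graph. Initializing each node with averaged EMGCN outputs is omitted here; the function name and input format are assumptions for this illustration.

```python
# Illustrative sketch of building the unified entity graph across a document chain.

import numpy as np

def unify_entity_graphs(per_document_edges):
    """per_document_edges: one list of (entity_a, entity_b) string pairs per document."""
    entities = sorted({e for edges in per_document_edges for pair in edges for e in pair})
    index = {e: i for i, e in enumerate(entities)}   # common entity strings share one node
    adj = np.zeros((len(entities), len(entities)), dtype=np.float32)
    for edges in per_document_edges:
        for a, b in edges:
            i, j = index[a], index[b]
            adj[i, j] = adj[j, i] = 1.0              # edge kept if present in any document
    return entities, adj
```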
{
"text": "We concatenate the EGCN outputs of the nodes corresponding to the subject entity e s \u2208 R 6(dw+dz) and object entity e o \u2208 R 6(dw+dz) , and pass the concatenated vector to a feed-forward network (FFN) with softmax to predict the normalized probabilities for the relation labels. dw+dz) is the weight matrix, b r \u2208 R |R|+1 is the bias vector of the FFN, and r is the vector of normalized probabilities of relation labels.",
"cite_spans": [
{
"start": 278,
"end": 284,
"text": "dw+dz)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Classifier",
"sec_num": "5.3"
},
{
"text": "r = softmax(W r (e s || e o ) + b r ) W r \u2208 R (|R|+1)\u00d712(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Classifier",
"sec_num": "5.3"
},
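The relation classifier above is a single feed-forward layer over the concatenated subject and object node representations; the following is an illustrative PyTorch sketch with assumed names.

```python
# Illustrative sketch of the relation classifier (not the authors' implementation).

import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    def __init__(self, entity_dim, num_relations):
        super().__init__()
        self.ffn = nn.Linear(2 * entity_dim, num_relations + 1)   # +1 for the None label

    def forward(self, e_s, e_o):
        # e_s, e_o: (batch, entity_dim) EGCN outputs of the subject and object nodes
        return torch.softmax(self.ffn(torch.cat([e_s, e_o], dim=-1)), dim=-1)
```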
{
"text": "We implement four neural baseline models for comparison with our proposed HEGCN model. Similar to our proposed model, we represent the tokens in the documents using pre-trained word embedding Figure 2 : The graph construction process for the positive instance in Table 1 . The entity mention graph and entity graph on the left are for Doc1. The entity mention graph and entity graph on the right are for Doc2. The numbers in square brackets ([x] ) in the entity mention graph are used to distinguish the entity mentions with identical string. Type x/y means this edge can be of both type x and type y. The 'EMG' and 'EG' prefixes are omitted from the labels of the edges in the entity mention graph and entity graph respectively. The unified entity graph is shown in the middle. Nodes in the red box are part of the entity graph of the document containing the subject entity Zoo Lake. Nodes in the blue box are part of the entity graph of the document containing the object entity Gauteng. Common entities are marked in orange color.",
"cite_spans": [
{
"start": 441,
"end": 445,
"text": "([x]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Figure 2",
"ref_id": null
},
{
"start": 263,
"end": 270,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.1"
},
{
"text": "vectors and entity token indicator vectors. We use a document separator token when concatenating the vectors of two documents in a chain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.1"
},
{
"text": "(1) CNN: We apply the convolution operation on the sequence of token vectors with different kernel sizes. A max-pooling operation is applied to choose the features from the outputs of the convolution operation. This feature vector is passed to a feed-forward layer with softmax to classify the relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.1"
},
{
"text": "(2) BiLSTM: The token vectors of the document chain are passed to a BiLSTM layer to encode its meaning. We obtain the entity mention vectors of the subject entity and the object entity by concatenating the hidden vectors of their first and last token. We average the entity mention tokens of the corresponding entity to obtain the representation of the subject entity and the object entity. These two vectors are concatenated and passed to a feed-forward layer with softmax to find the relation between them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.1"
},
{
"text": "(3) BiLSTM CNN: This is a combination of the BiLSTM and CNN model described above. The token vectors of the documents are passed to a BiL-STM layer and then we use the convolution operation with max-pooling with different convolutional kernel sizes on the hidden vectors of the BiLSTM layer. The feature vector obtained from the maxpooling operation is passed to a feed-forward layer with softmax to classify the relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.1"
},
{
"text": "(4) LinkPath: This model uses the explicit paths (Kundu et al., 2019) from the subject entity e s to the object entity e o via the common entities to find the relation. As we consider only two-hop relations, each path from e s to e o will be of the form e s \u2192 c \u2192 e o , where c is a common entity. Since there can be multiple common entities between two documents and these common entities as well as the subject and object entities can appear multiple times in the two documents, there exist multiple paths from e s to e o . Each path is formed with four entity mentions: (i) entity mentions of the subject entity and common entity in the first document. (ii) entity mentions of the common entity and object entity in the second document. We concatenate the BiLSTM hidden vectors of the start and end token of an entity mention to obtain its representation. Each path is constructed by concatenating all the four entity mentions of the path. This can be extended from two-hop to multi-hop relations by using a recurrent neural network that takes the path entity mentions as input, and outputs the hidden representation of the path. We average the vector representations of all the paths and pass it to a feed-forward layer with softmax to find the relation.",
"cite_spans": [
{
"start": 49,
"end": 69,
"text": "(Kundu et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "6.1"
},
{
"text": "We use GloVe (Pennington et al., 2014) word embeddings of dimension d w which is set to 300 in our experiments, and update the embeddings during training. We set the dimension d z to be 20 for the entity token indicator embedding vectors. The hidden vector dimension of the forward and backward LSTM is set at 320. The dimension of BiLSTM output is 640. We use 500 different convolution filters with kernel width of 3, 4, and 5 for feature extraction. We use one convolutional layer in both entity mention-level GCN and entity-level GCN in our final model. Dropout layers (Srivastava et al., 2014) are used in our network with a dropout rate of 0.5 to avoid overfitting. We train our models with a mini-batch size of 32 and use negative loglikelihood as our objective function. We optimize the network parameters using the Adagrad optimizer (Duchi et al., 2011) . For evaluation, we use precision, recall, and F1 score. We do not include the None relation in the evaluation. A confidence threshold that achieves the highest F1 score on the validation dataset is used to decide if the relation of a test instance belongs to the set of relations R or None.",
"cite_spans": [
{
"start": 13,
"end": 38,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF15"
},
{
"start": 572,
"end": 597,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 841,
"end": 861,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Settings",
"sec_num": "6.2"
},
{
"text": "We include the median of five runs of the models on the THRED dataset in Table 5 . We see that adding a BiLSTM in the document encoding layer improves the performance by close to 5% in F1 score. The BiLSTM, BiLSTM CNN, and LinkPath models achieve similar F1 scores. When we add our proposed hierarchical entity graph convolutional layer on top of the BiLSTM layer, we get another 1.1% F1 score improvement over the next best BiLSTM model. We perform a statistical significance test using bootstrap resampling to compare each baseline and our HEGCN model, and have ascertained that the higher F1 score achieved by our model is statistically significant (p < 0.001). Table 5 : Performance comparison of the models on the THRED dataset. We report the median of 5 runs.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 5",
"ref_id": null
},
{
"start": 665,
"end": 672,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6.3"
},
{
"text": "We include the performance of our HEGCN model with different numbers of convolutional layers in the entity mention-level GCN (EMGCN) and entity-level GCN (EGCN) in Table 6 . When we increase the number of layers in either GCN, the performance of the model drops. We finally use only one convolutional layer in both EMGCN and EGCN. In Table 7 , we include the ablation study of the different types of edges in EMGCN and EGCN. Removing any type of edges reduces the F1 score. ",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 171,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 334,
"end": 341,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "6.4"
},
{
"text": "In this paper, we propose how the idea of distant supervision can be extended from sentence-level extraction to multi-hop extraction to cover more relations. We propose a general approach to create multi-hop relation extraction datasets. Following this approach, we create a two-hop relation extraction dataset that covers a higher number of relations from knowledge bases than other distantly supervised relation extraction datasets. We also propose a hierarchical entity graph convolutional network for this task. The two levels of GCN in our model help to capture the relation cues within documents and across documents. Our proposed model improves the F1 score by 1.1% on our twohop dataset, compared to a strong neural baseline, and it can be readily extended to N-hop datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The source code and data for this paper are available at https://github.com/nusnlp/MHRE.git",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BAG: Bi-directional attention entity graph convolutional network for multi-hop reasoning question answering",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Meng",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Dacheng",
"middle": [],
"last": "Tao",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Cao, Meng Fang, and Dacheng Tao. 2019. BAG: Bi-directional attention entity graph convolutional network for multi-hop reasoning question answering. In NAACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Question answering by reasoning across documents with graph convolutional networks",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "De Cao",
"suffix": ""
},
{
"first": "Wilker",
"middle": [],
"last": "Aziz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola De Cao, Wilker Aziz, and Ivan Titov. 2019. Question answering by reasoning across documents with graph convolutional networks. In NAACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "FewRel 2.0: Towards more challenging few-shot relation classification",
"authors": [
{
"first": "Tianyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP and IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. 2019. FewRel 2.0: To- wards more challenging few-shot relation classifica- tion. In EMNLP and IJCNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Attention guided graph convolutional networks for relation extraction",
"authors": [
{
"first": "Zhijiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation ex- traction. In ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Knowledgebased weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge- based weak supervision for information extraction of overlapping relations. In ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving distantly supervised relation extraction using word and entity based attention",
"authors": [
{
"first": "Sharmistha",
"middle": [],
"last": "Jat",
"suffix": ""
},
{
"first": "Siddhesh",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2017,
"venue": "AKBC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharmistha Jat, Siddhesh Khandelwal, and Partha Talukdar. 2017. Improving distantly supervised rela- tion extraction using word and entity based attention. In AKBC.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Semisupervised classification with graph convolutional networks",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Kipf",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In ICLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploiting explicit paths for multi-hop reading comprehension",
"authors": [
{
"first": "Souvik",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Tushar",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Sabharwal",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Souvik Kundu, Tushar Khot, Ashish Sabharwal, and Peter Clark. 2019. Exploiting explicit paths for multi-hop reading comprehension. In ACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural relation extraction with selective attention over instances",
"authors": [
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shiqi",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL and IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Juraf- sky. 2009. Distant supervision for relation extrac- tion without labeled data. In ACL and IJCNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deep neural approaches to relation triplets extraction: A comprehensive survey",
"authors": [
{
"first": "Tapas",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Navonil",
"middle": [],
"last": "Majumder",
"suffix": ""
},
{
"first": "Pawan",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tapas Nayak, Navonil Majumder, Pawan Goyal, and Soujanya Poria. 2021. Deep neural approaches to relation triplets extraction: A comprehensive survey. Cognitive Computing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Effective attention modeling for neural relation extraction",
"authors": [
{
"first": "Tapas",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tapas Nayak and Hwee Tou Ng. 2019. Effective at- tention modeling for neural relation extraction. In CoNLL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Effective modeling of encoder-decoder architecture for joint entity and relation extraction",
"authors": [
{
"first": "Tapas",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2020,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tapas Nayak and Hwee Tou Ng. 2020. Effective mod- eling of encoder-decoder architecture for joint entity and relation extraction. In AAAI.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Modeling relations and their mentions without labeled text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2010,
"venue": "ECML and KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions with- out labeled text. In ECML and KDD.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attentionbased convolutional neural network for semantic relation extraction",
"authors": [
{
"first": "Yatian",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yatian Shen and Xuanjing Huang. 2016. Attention- based convolutional neural network for semantic re- lation extraction. In COLING.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Exploring graph-structured passage representation for multihop reading comprehension with graph neural networks",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018. Exploring graph-structured passage representation for multi- hop reading comprehension with graph neural net- works. CoRR.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdi- nov. 2014. Dropout: a simple way to prevent neural networks from overfitting. JMLR.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A hierarchical framework for relation extraction with reinforcement learning",
"authors": [
{
"first": "Ryuichi",
"middle": [],
"last": "Takanobu",
"suffix": ""
},
{
"first": "Tianyang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiexi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, and Minlie Huang. 2019. A hierarchical framework for relation extraction with reinforcement learning. In AAAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "RESIDE: Improving distantly-supervised neural relation extraction using side information",
"authors": [
{
"first": "Shikhar",
"middle": [],
"last": "Vashishth",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Chiranjib",
"middle": [],
"last": "Sai Suman Prayaga",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Talukdar",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikhar Vashishth, Rishabh Joshi, Sai Suman Prayaga, Chiranjib Bhattacharyya, and Partha Talukdar. 2018. RESIDE: Improving distantly-supervised neural re- lation extraction using side information. In EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Wikidata: a free collaborative knowledgebase",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Kr\u00f6tzsch",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wiki- data: a free collaborative knowledgebase. Commu- nications of Association for Computing Machinery.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Constructing datasets for multi-hop reading comprehension across documents",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. TACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "DocRED: A large-scale document-level relation extraction dataset",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Deming",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhenghao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lixin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, and Maosong Sun. 2019. DocRED: A large-scale document-level relation extraction dataset. In ACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distant supervision relation extraction with intra-bag and inter-bag attentions",
"authors": [
{
"first": "Zhi-Xiu",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Zhen-Hua",
"middle": [],
"last": "Ling",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi-Xiu Ye and Zhen-Hua Ling. 2019. Distant supervi- sion relation extraction with intra-bag and inter-bag attentions. In NAACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Distant supervision for relation extraction via piecewise convolutional neural networks",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yubo",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via piecewise convolutional neural networks. In EMNLP.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Relation classification via convolutional deep neural network",
"authors": [
{
"first": "Daojian",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siwei",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2014,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via con- volutional deep neural network. In COLING.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Double graph based reasoning for documentlevel relation extraction",
"authors": [
{
"first": "Shuang",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Runxin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuang Zeng, Runxin Xu, Baobao Chang, and Lei Li. 2020. Double graph based reasoning for document- level relation extraction. In EMNLP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Graph convolution over pruned dependency trees improves relation extraction",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In EMNLP.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Positionaware attention and supervised data improve slot filling",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor An- geli, and Christopher D. Manning. 2017. Position- aware attention and supervised data improve slot fill- ing. In EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "The architecture of our proposed HEGCN model. GCN in entity mention-level graph is shared across the documents in a chain. This diagram is for document chain of length 2.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"num": null,
"content": "<table/>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table/>",
"text": "Statistics of the THRED dataset.",
"html": null,
"type_str": "table"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>Dataset</td><td colspan=\"2\">|R| Dataset</td><td>|R|</td></tr><tr><td>NYT10</td><td>53</td><td>NYT11</td><td>24</td></tr><tr><td>TACRED</td><td>41</td><td>DocRED</td><td>96</td></tr><tr><td colspan=\"4\">FewRel 2.0 100 THRED 218</td></tr></table>",
"text": "Statistics of the common entities in the THRED dataset.",
"html": null,
"type_str": "table"
},
"TABREF4": {
"num": null,
"content": "<table/>",
"text": "The number of relations in various relation extraction datasets. R is the set of positive relations.",
"html": null,
"type_str": "table"
},
"TABREF7": {
"num": null,
"content": "<table/>",
"text": "The ablation study of the HEGCN model with different numbers of convolutional layers (L1 and L2) in EMGCN and EGCN.",
"html": null,
"type_str": "table"
},
"TABREF8": {
"num": null,
"content": "<table/>",
"text": "The ablation study of the different types of edges in our HEGCN model.",
"html": null,
"type_str": "table"
}
}
}
}