{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:34:38.615132Z"
},
"title": "Joint Entity and Relation Extraction Based on Table Labeling Using Convolutional Neural Networks",
"authors": [
{
"first": "Youmi",
"middle": [],
"last": "Ma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Institute of Technology",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Tatsuya",
"middle": [],
"last": "Hiraoka",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Institute of Technology",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Institute of Technology",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This study introduces a novel approach to the joint extraction of entities and relations by stacking convolutional neural networks (CNNs) on pretrained language models. We adopt table representations to model the entities and relations, casting the entity and relation extraction as a table-labeling problem. Regarding each table as an image and each cell in a table as an image pixel, we apply two-dimensional CNNs to the tables to capture local dependencies and predict the cell labels. The experimental results showed that the performance of the proposed method is comparable to those of current state-of-art systems on the CoNLL04, ACE05, and ADE datasets. Even when freezing pretrained language model parameters, the proposed method showed a stable performance, whereas the compared methods suffered from significant decreases in performance. This observation indicates that the parameters of the pretrained encoder may incorporate dependencies among the entity and relation labels during fine-tuning.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "This study introduces a novel approach to the joint extraction of entities and relations by stacking convolutional neural networks (CNNs) on pretrained language models. We adopt table representations to model the entities and relations, casting the entity and relation extraction as a table-labeling problem. Regarding each table as an image and each cell in a table as an image pixel, we apply two-dimensional CNNs to the tables to capture local dependencies and predict the cell labels. The experimental results showed that the performance of the proposed method is comparable to those of current state-of-art systems on the CoNLL04, ACE05, and ADE datasets. Even when freezing pretrained language model parameters, the proposed method showed a stable performance, whereas the compared methods suffered from significant decreases in performance. This observation indicates that the parameters of the pretrained encoder may incorporate dependencies among the entity and relation labels during fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The purpose of a joint entity and relation extraction is to recognize entities and relations in a text. A task can be decomposed into two subtasks: named entity recognition (NER) and relation extraction (RE). In recent years, several researchers have built high-performance NER and RE systems based on contextualized representations (Yan et al., 2021; Zhong and Chen, 2021; Wang and Lu, 2020; Eberts and Ulges, 2020; Lin et al., 2020) . These contextualized representations obtained from pretrained language models, such as bidirectional encoder representations from transformers (BERT) Devlin et al., 2019 , have significantly improved the performance for various NLP tasks. As a result, studies on NER and RE have focused on the design of task-specific layers stacked on top of pretrained language models.",
"cite_spans": [
{
"start": 333,
"end": 351,
"text": "(Yan et al., 2021;",
"ref_id": "BIBREF27"
},
{
"start": 352,
"end": 373,
"text": "Zhong and Chen, 2021;",
"ref_id": "BIBREF29"
},
{
"start": 374,
"end": 392,
"text": "Wang and Lu, 2020;",
"ref_id": "BIBREF25"
},
{
"start": 393,
"end": 416,
"text": "Eberts and Ulges, 2020;",
"ref_id": "BIBREF1"
},
{
"start": 417,
"end": 434,
"text": "Lin et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 587,
"end": 606,
"text": "Devlin et al., 2019",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common idea is to formulate NER and RE as table-filling problems (Miwa and Sasaki, 2014) . The core concept is to extract entities and relations by filling a table with entity labels in the diagonal cells and relation labels in the off-diagonal cells. Based on this concept, Ma et al. (2022) proposed TablERT, which is a combined system of NER and RE based on a pretrained BERT. TablERT predicts the diagonal cells sequentially and offdiagonal cells simultaneously. Although the system is simple and effective, it ignores the dependencies among predicted relation labels. As noted in Ma et al. (2022) , this does not improve the performance with label dependencies incorporated through refined decoding orders.",
"cite_spans": [
{
"start": 67,
"end": 90,
"text": "(Miwa and Sasaki, 2014)",
"ref_id": "BIBREF16"
},
{
"start": 277,
"end": 293,
"text": "Ma et al. (2022)",
"ref_id": "BIBREF14"
},
{
"start": 586,
"end": 602,
"text": "Ma et al. (2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose TablERT-CNN, a novel NER and RE system that encodes the dependencies among the cells within the table. Our method employs two-dimensional convolutional neural networks (2D-CNNs), which are widely used neural architectures for object detection (Krizhevsky et al., 2012) . We considered each table as a 2D image and each cell as a pixel, transforming the task into a tablelabeling problem at the cell level. By applying 2D-CNNs to the output of BERT, the system is expected to implicitly perceive local information and label dependencies from neighboring cells. Notably, the range of cells to be processed is expandable by stacking multiple CNN layers, we model the dependencies among distant cells.",
"cite_spans": [
{
"start": 254,
"end": 279,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluated TablERT-CNN based on multiple benchmarks: CoNLL04 (Roth and Yih, 2004) , ACE05 (Walker et al., 2006) , and ADE (Gurulingappa et al., 2012) . The experimental results showed that the performance of the proposed method is on par with those of current state-ofart systems. We hypothesized that parameter updates during fine-tuning helped the BERT encoder capture the necessary dependencies for label predictions; thus, incorporating dependencies using the CNN became less helpful. To verify this hy-pothesis, we compared the performance of several NER and RE systems while keeping the BERT parameters frozen and updating them during fine tuning. In addition, we used different layers from which the prediction model extracts token embeddings to analyze how parameter updates within each layer contribute to the performance. As a result, TablERT-CNN still performed well while keeping the BERT parameters unchanged, whereas the performance of the other systems significantly decreased. This observation indicates the ability of the BERT architecture to consider token-and label-wise dependencies during task-specific fine tuning. The source code for the proposed system is publicly available at https://github.com/ YoumiMa/TablERT-CNN.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "(Roth and Yih, 2004)",
"ref_id": "BIBREF21"
},
{
"start": 92,
"end": 113,
"text": "(Walker et al., 2006)",
"ref_id": "BIBREF24"
},
{
"start": 124,
"end": 151,
"text": "(Gurulingappa et al., 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "BERT and its variants have recently achieved significant performance improvements on various NLP tasks (Devlin et al., 2019; Lan et al., 2020) . These transformer-based (Vaswani et al., 2017) encoders learn syntactic and semantic languages, generating a contextualized representation of each input token (Jawahar et al., 2019; Rogers et al., 2020) . Owing to the superiority of BERT encoders, recent studies on NER and RE have tended to focus on the design of a good prediction model that fully utilizes BERT embeddings to further improve the performance. Promising and straightforward prediction models for NER and RE have been developed. Eberts and Ulges (2020) proposed SpERT, which employs span-level representations obtained from BERT encoders for linear classification based on a negative sampling strategy during training. In addition, Zhong and Chen (2021) introduced a pipelined system, which performs span-based NER similarly to that of SpERT but re-encodes the input sentence using BERT to perform RE. In the RE model, the context and predicted entity labels are jointly encoded, enabling the computation of token-label attention. These approaches rely mainly on parameter updates in the BERT encoder during fine-tuning, where the encoder learns to capture task-specific dependencies. This study compares our system with SpERT to distinguish the dependencies captured by the encoder from those captured by the prediction model. Some studies have used NER and RE for generative NLP tasks. Li et al. (2019) cast NER and RE as a multiturn question-answering problem. They designed natural language question templates whose answers specify the entities and relations within each sentence. In addition, Paolini et al. (2021) tackled structured language prediction tasks as sequence-to-sequence translations between augmented languages. Structural information can be extracted by postprocessing the target augmented language. Huguet Cabot and Navigli (2021) followed their idea and built a translation system that auto-regressively generates linearized relation triplets, considering an input text. These approaches utilize the attention mechanism within the transformer to capture long-range dependencies; however, they tend to be computationally burdensome. Inspired by their study, we have explored ways to incorporate token and label dependencies into the prediction model. However, our goal is to develop a mechanism that is more explainable and computationally efficient.",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 125,
"end": 142,
"text": "Lan et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 169,
"end": 191,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 304,
"end": 326,
"text": "(Jawahar et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 327,
"end": 347,
"text": "Rogers et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 640,
"end": 663,
"text": "Eberts and Ulges (2020)",
"ref_id": "BIBREF1"
},
{
"start": 1499,
"end": 1515,
"text": "Li et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 1709,
"end": 1730,
"text": "Paolini et al. (2021)",
"ref_id": null
},
{
"start": 1938,
"end": 1962,
"text": "Cabot and Navigli (2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER and RE Using Contextualized Representations",
"sec_num": "2.1"
},
{
"text": "Another common approach is building a directed graph, modelling entity with spans as nodes and relations as arcs. and focused on information propagation among span pairs to obtain effective span representations for a prediction. Based on their study, Lin et al. (2020) explicitly modeled cross-task and cross-instance dependencies by introducing a predefined set of global features. Instead of manually defining the global features, Ren et al. (2021) introduced a text-to-graph extraction model that automatically captures global features based on the auto-regressive generation process of a graph. These approaches are delicately designed to involve graph propagation and beam search strategies, resulting in a relatively high complexity.",
"cite_spans": [
{
"start": 433,
"end": 450,
"text": "Ren et al. (2021)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER and RE Using Contextualized Representations",
"sec_num": "2.1"
},
{
"text": "Formulating the task as a table-filling problem is also a common idea (Miwa and Sasaki, 2014; Gupta et al., 2016; Zhang et al., 2017) . Efforts have recently been made to incorporate BERT into this framework. Wang and Lu (2020) designed separate encoders for entities and relations. To use word-word interactions captured within the BERT model, the authors leveraged the attention weights computed from BERT into a relation encoder. Yan et al. (2021) applied a partition filter to divide neurons into multiple partitions and generated taskspecific features based on a linear combination of these partitions. Moreover, Ma et al. (2022) Embeddings H (\") Output of the 1 st CNN Layer Although the model exhibited state-of-art performance when published, it was unable to leverage the interactions among the table cells, especially for RE. This study applies their system as a strong baseline and explores the effect of incorporating local dependencies at the top of BERT. Because the proposed system is an extension of TablERT, we recap this approach (Ma et al., 2022) in the following subsection.",
"cite_spans": [
{
"start": 70,
"end": 93,
"text": "(Miwa and Sasaki, 2014;",
"ref_id": "BIBREF16"
},
{
"start": 94,
"end": 113,
"text": "Gupta et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 114,
"end": 133,
"text": "Zhang et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 209,
"end": 227,
"text": "Wang and Lu (2020)",
"ref_id": "BIBREF25"
},
{
"start": 618,
"end": 634,
"text": "Ma et al. (2022)",
"ref_id": "BIBREF14"
},
{
"start": 1048,
"end": 1065,
"text": "(Ma et al., 2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER and RE Using Contextualized Representations",
"sec_num": "2.1"
},
{
"text": "TablERT (Ma et al., 2022 ) is a simple and effective method for a combined system applying both NER and RE. As shown on the right side of Figure 1, the method uses the upper triangular part of a table to represent the label spaces of NER and RE. The diagonal entries of the table are filled by entity labels, adopting the BILOU scheme to identify the beginning, inside, last words of multi-word and unit-length spans (Ratinov and Roth, 2009) . The off-diagonal entries in the table are filled with relation labels, with directions hard-coded onto each label. For each entity, the corresponding relation is annotated for all component words. For the sentence shown in Figure 1 , the relation \"LiveIn\" pointing from \"Richard Jones\" to \"Denison\" is labeled as \u2212\u2212\u2212\u2212\u2192 LIVEIN for the entries (i = 1, j = 5) and (i = 2, j = 5), corresponding to (Richard, Denison) and (Jones, Denison), respectively. Ma et al. (2022) designed two separate prediction models for NER and RE. For NER, they sequentially assign a label to each word using features at the current and previous timesteps. For RE, they concatenate word embeddings with their corresponding entity label embeddings as relation embeddings. The relation scores of each word pair are computed based on a matrix multiplication of the linearly transformed relation embeddings.",
"cite_spans": [
{
"start": 8,
"end": 24,
"text": "(Ma et al., 2022",
"ref_id": "BIBREF14"
},
{
"start": 417,
"end": 441,
"text": "(Ratinov and Roth, 2009)",
"ref_id": "BIBREF18"
},
{
"start": 893,
"end": 909,
"text": "Ma et al. (2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 138,
"end": 144,
"text": "Figure",
"ref_id": null
},
{
"start": 667,
"end": 675,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "TablERT",
"sec_num": "2.2"
},
{
"text": "Despite its simplicity, TablERT has shown promising performance. However, the system predicts the relation labels simultaneously, discarding label dependencies between the table cells. It has been reported that the performance of TablERT has shown little improvement, even when the offdiagonal cells are decoded individually following a predefined order Ma et al. (2022) . In this study, we are interested in the effect of incorporating label dependencies at the top of contextualized representations. In contrast to Ma et al. 2022, we address this problem using 2D-CNNs.",
"cite_spans": [
{
"start": 354,
"end": 370,
"text": "Ma et al. (2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TablERT",
"sec_num": "2.2"
},
{
"text": "The goal of NER and RE systems is to extract entities and relations between pairs of entities, based on word sequences. Specifically, we consider a sentence w_1, \u2022 \u2022 \u2022, w_N. NER aims to identify every word span s_i = w_b, \u2022 \u2022 \u2022, w_e that forms an entity with entity type t_i \u2208 E. By contrast, RE aims to extract every relation triple (s_0 \u27e8t_0\u27e9, r, s_1 \u27e8t_1\u27e9), where r \u2208 R represents the relation type between s_0 and s_1. Here, E and R represent the label sets of entities and relations, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "3.1"
},
{
"text": "We propose TablERT-CNN as an extension of TablERT (Ma et al., 2022) , considering the dependencies among labels by applying 2D-CNNs. Fig-ure 1 shows an overview of TablERT-CNN under a setting in which the prediction model contains only one CNN layer. Based on existing studies (Miwa and Sasaki, 2014; Gupta et al., 2016; Zhang et al., 2017; Ma et al., 2022) , we use the upper triangular part of a table to represent the entity and relation labels. The table representation is formally defined as follows: Table Representation We define a matrix Y \u2208 R N \u00d7N and use the upper triangular part to represent the label space of NER and RE. A diagonal entry Y i,i represents the entity label of word w i , and an off-diagonal entry Y i,j (j > i) represents the relation label of the word pair (w i , w j ). We adopt the labeling rules of NER and RE, as in Ma et al. (2022) ; i.e., we annotate an entity using the BILOU notation and annotate a relation to every composing word of an entity span, with the direction hard-encoded into the label.",
"cite_spans": [
{
"start": 50,
"end": 67,
"text": "(Ma et al., 2022)",
"ref_id": "BIBREF14"
},
{
"start": 277,
"end": 300,
"text": "(Miwa and Sasaki, 2014;",
"ref_id": "BIBREF16"
},
{
"start": 301,
"end": 320,
"text": "Gupta et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 321,
"end": 340,
"text": "Zhang et al., 2017;",
"ref_id": "BIBREF28"
},
{
"start": 341,
"end": 357,
"text": "Ma et al., 2022)",
"ref_id": "BIBREF14"
},
{
"start": 850,
"end": 866,
"text": "Ma et al. (2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Fig-ure",
"ref_id": null
},
{
"start": 506,
"end": 526,
"text": "Table Representation",
"ref_id": null
}
],
"eq_spans": [],
"section": "TablERT-CNN",
"sec_num": "3.2"
},
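{
"text": "To make the table representation concrete, the following is a minimal Python sketch (our illustration, not the authors' released code; the word indices and label strings are assumptions based on the Figure 1 example). It fills the diagonal with BILOU entity labels and the upper-triangular cells of every composing word with directed relation labels:\n\nN = 6  # number of words; indices follow the Figure 1 example (illustrative)\ntable = [['O' for _ in range(N)] for _ in range(N)]  # 'O' marks cells with no entity/relation label\n# Diagonal cells Y_{i,i}: BILOU entity labels.\ntable[1][1] = 'B-PER'   # Richard\ntable[2][2] = 'L-PER'   # Jones\ntable[5][5] = 'U-LOC'   # Denison\n# Off-diagonal cells Y_{i,j} (j > i): directed relation labels, annotated for\n# every word composing the head entity span.\ntable[1][5] = '->LiveIn'  # (Richard, Denison)\ntable[2][5] = '->LiveIn'  # (Jones, Denison)\nfor i in range(N):\n    print(' '.join(f'{table[i][j]:>9}' for j in range(N)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TablERT-CNN",
"sec_num": "3.2"
},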
{
"text": "We obtain word embeddings from contextualized representations generated from the pretrained BERT model (Devlin et al., 2019) . Based on the existing study, we compute the embedding of each word by max-pooling its composing sub-words (Liu et al., 2019; Eberts and Ulges, 2020; Ma et al., 2022) . Specifically, for word w i composed of subwords start(i), \u2022 \u2022 \u2022 , end(i), the embedding of e i is computed as follows:",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 233,
"end": 251,
"text": "(Liu et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 252,
"end": 275,
"text": "Eberts and Ulges, 2020;",
"ref_id": "BIBREF1"
},
{
"start": 276,
"end": 292,
"text": "Ma et al., 2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e i := max(x (l) start(i) , \u2022 \u2022 \u2022 , x (l) end(i) ).",
"eq_num": "(1)"
}
],
"section": "Word Embeddings",
"sec_num": null
},
{
"text": "Here, x (l) \u2208 R d emb is the output of the pretrained BERT model, where l is the layer index 1 , d emb is the dimension size, and max(\u2022) is the maxpooling function. Therefore, we obtain e i \u2208 R d emb .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": null
},
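{
"text": "As a concrete illustration of Eq. (1), the following minimal PyTorch sketch (ours, not the authors' released code; the model name and example words are placeholders) max-pools the sub-word states of a pretrained BERT encoder into word embeddings:\n\nimport torch\nfrom transformers import AutoModel, AutoTokenizer\n\ntokenizer = AutoTokenizer.from_pretrained('bert-base-cased')\nencoder = AutoModel.from_pretrained('bert-base-cased')\n\nwords = ['Richard', 'Jones', 'lives', 'in', 'Denison', '.']\nenc = tokenizer(words, is_split_into_words=True, return_tensors='pt')\nwith torch.no_grad():\n    hidden = encoder(**enc, output_hidden_states=True).hidden_states[-1][0]  # (T, d_emb); here l = top layer\n\nword_ids = enc.word_ids()  # maps each sub-word position to its word index (None for special tokens)\nembeddings = []\nfor i in range(len(words)):\n    positions = [p for p, w in enumerate(word_ids) if w == i]  # start(i), ..., end(i)\n    embeddings.append(hidden[positions].max(dim=0).values)     # e_i of Eq. (1)\ne = torch.stack(embeddings)  # (N, d_emb)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embeddings",
"sec_num": null
},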
{
"text": "We adopt a 2D-CNN to capture the local dependencies among neighboring cells. 2D-CNNs are widely used for extracting image-classification and object-detection features (Krizhevsky et al., 2012) . To apply a 2D-CNN to jointly extract entities and their relations, we treat the 2D table as an image and each cell within the table as a pixel. We then employ the 2D-CNN to encode the representation of each cell, as shown in Figure 1 . The convolution network enables the model to capture local dependencies, and for each cell, a 2D-CNN layer yields a weighted linear combination among all surrounding cells within the convolutional window. The dependency range can be extended by stacking multiple 2D-CNN layers.",
"cite_spans": [
{
"start": 167,
"end": 192,
"text": "(Krizhevsky et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 420,
"end": 428,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "Specifically, for each word pair (w i , w j ), we concatenate the word embeddings e i , e j , and construct the bottom layer H (0) \u2208 R N \u00d7N \u00d72d emb (i.e., layer 0) of the 2D-CNN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "H (0) i,j,: = h (0) i,j := [e i ; e j ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "( 2)Here, [\u2022; \u2022] represents the concatenation of two vectors. Hence, the representation of each cell is a vector of dimension 2 \u00d7 d emb , which is denoted as d 0 . Similarly, we denote the dimension number of the vector representation for each cell in layer l as d l .",
"cite_spans": [
{
"start": 10,
"end": 16,
"text": "[\u2022; \u2022]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
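{
"text": "A minimal sketch of the bottom-layer construction in Eq. (2) (our illustration; the sizes are arbitrary): every cell (i, j) stores the concatenation of the two word embeddings, which can be formed by broadcasting:\n\nimport torch\n\nN, d_emb = 6, 768                  # illustrative sizes\ne = torch.randn(N, d_emb)          # word embeddings e_1, ..., e_N from the encoder\nh0 = torch.cat([\n    e.unsqueeze(1).expand(N, N, d_emb),   # row i carries e_i\n    e.unsqueeze(0).expand(N, N, d_emb),   # column j carries e_j\n], dim=-1)                          # H^(0): shape (N, N, 2 * d_emb)\nassert torch.equal(h0[1, 3], torch.cat([e[1], e[3]]))  # h0[i, j] == [e_i; e_j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},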
{
"text": "We then compute the output of the first 2D-CNN layer H (1) based on the output of the bottom layer H (0) . Analogously, the output of any layer l can be computed by applying convolutions to the output of the previous layer, l \u2212 1. For any word pair (w i , w j ), we obtain its corresponding output at the lth layer H",
"cite_spans": [
{
"start": 101,
"end": 104,
"text": "(0)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(l) i,j,: = h (l) i,j \u2208 R d l as follows: H (l) i,j,: = h (l) i,j := b (l) + d l\u22121 c=0 (K (l) c,:,: * H (l\u22121) :,:,c ) i,j ,",
"eq_num": "(3)"
}
],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "where H^{(l-1)} \u2208 R^{N \u00d7 N \u00d7 d_{l-1}} is the output of layer l \u2212 1, K^{(l)} \u2208 R^{d_l \u00d7 d_h \u00d7 d_w} is a convolution kernel with window size d_h \u00d7 d_w, and b^{(l)} \u2208 R^{d_l} is the bias. Thus, for any dimension (i.e., channel) c, we have K^{(l)}_{c,:,:} \u2208 R^{d_h \u00d7 d_w} and H^{(l-1)}_{:,:,c} \u2208 R^{N \u00d7 N}. A * B represents the operation of computing 2D correlations. Given that A \u2208 R^{(2\u03b1+1) \u00d7 (2\u03b2+1)}, the computation is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "(A * B) m,n := \u03b1 h=\u2212\u03b1 \u03b2 w=\u2212\u03b2 A \u03b1+h,\u03b2+w B m+h,n+w . (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
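{
"text": "The convolution in Eqs. (3)-(4) corresponds to a standard 2D convolution over the table with the cell features as channels. The following minimal PyTorch sketch illustrates one such layer (our illustration; the kernel size, layer width, and the ReLU nonlinearity are assumptions, not the reported hyperparameters):\n\nimport torch\nimport torch.nn as nn\n\nN, d0, d1 = 6, 2 * 768, 128          # illustrative sizes (d0 = 2 * d_emb)\nh0 = torch.randn(N, N, d0)           # bottom table H^(0)\n# nn.Conv2d expects (batch, channels, height, width): treat the N x N table as one\n# image and the d0 cell features as channels.\nconv = nn.Conv2d(d0, d1, kernel_size=3, padding=1)   # d_h = d_w = 3; padding keeps the N x N shape\nout = conv(h0.permute(2, 0, 1).unsqueeze(0))          # (1, d1, N, N)\nh1 = torch.relu(out).squeeze(0).permute(1, 2, 0)      # H^(1): (N, N, d1); stacking more layers widens the receptive field",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},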
{
"text": "The last layer of the 2D-CNN is a convolutional classifier for RE. That is, for the last layer L, we set its output dimension number to be the same as the number of relation labels; i.e., d L := |R|. Thus, for each word pair (w i , w j ) where i \u0338 = j, we obtain the relation label distribution P \u03b8 (\u0176 i,j ) by applying a softmax function to H (L) i,j,: :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P \u03b8 (\u0176 i,j ) := softmax(H (L) i,j,: ),",
"eq_num": "(5)"
}
],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "where P is the estimated probability function, and \u03b8 represents the model parameters. For NER, we linearly transform the representations of the diagonal cells at layer L to compute the entity label distribution of each word w i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "P \u03b8 (\u0176 i,i ) := softmax(W \u2022 H (L) i,i,: + b), (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "where W \u2208 R |E|\u00d7|R| and b \u2208 R |E| are the trainable weight matrix and the bias vector, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
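{
"text": "A minimal sketch of the classification step in Eqs. (5)-(6) (our illustration; the label-set sizes are arbitrary): the last convolutional layer already has |R| output channels, so relation probabilities are a softmax over its channels, while entity probabilities come from a linear layer applied to the diagonal cells:\n\nimport torch\nimport torch.nn as nn\n\nN, num_rel, num_ent = 6, 6, 9        # illustrative |R| and |E|\nh_L = torch.randn(N, N, num_rel)     # output table H^(L) of the last CNN layer (d_L = |R|)\n\nrel_probs = torch.softmax(h_L, dim=-1)             # Eq. (5): P(Y_hat_{i,j}) for i != j\nner_head = nn.Linear(num_rel, num_ent)             # W in R^{|E| x |R|}, b in R^{|E|}\ndiag = h_L[torch.arange(N), torch.arange(N)]       # diagonal cells H^(L)_{i,i,:}\nent_probs = torch.softmax(ner_head(diag), dim=-1)  # Eq. (6): P(Y_hat_{i,i})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},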
{
"text": "Training and Prediction During training, we use the sum of cross-entropy losses of NER and RE as the objective function. Given the ground-truth label matrix of table Y \u2208 R N \u00d7N , we compute the cross-entropy loss for NER (L NER ) and RE (L RE ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L NER = \u2212 1\u2264i\u2264N log P \u03b8 (\u0176 i,i = Y i,i ), (7) L RE = \u2212 1\u2264i\u2264N i<j\u2264N log P \u03b8 (\u0176 i,j = Y i,j ).",
"eq_num": "(8)"
}
],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "We minimize L NER + L RE to update the model parameters \u03b8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
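{
"text": "The training objective of Eqs. (7)-(8) amounts to cross-entropy over the diagonal cells (NER) and the strictly upper-triangular cells (RE). A minimal sketch (our illustration; gold labels and logits are random placeholders):\n\nimport torch\nimport torch.nn.functional as F\n\nN, num_rel, num_ent = 6, 6, 9\nrel_logits = torch.randn(N, N, num_rel)   # H^(L) before the softmax\nent_logits = torch.randn(N, num_ent)      # diagonal cells after the linear NER head\nY_ent = torch.randint(0, num_ent, (N,))   # gold entity labels Y_{i,i}\nY_rel = torch.randint(0, num_rel, (N, N)) # gold relation labels Y_{i,j}\n\nloss_ner = F.cross_entropy(ent_logits, Y_ent, reduction='sum')              # Eq. (7)\nmask = torch.triu(torch.ones(N, N, dtype=torch.bool), diagonal=1)           # cells with j > i\nloss_re = F.cross_entropy(rel_logits[mask], Y_rel[mask], reduction='sum')   # Eq. (8)\nloss = loss_ner + loss_re  # minimized w.r.t. the model parameters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},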
{
"text": "To predict the entity label of each word w i , we select the label yielding the highest probability from P \u03b8 (\u0176 i,i ) as the predicted result. When a conflict occurs with regard to the entity type within an entity span, we select the entity type labeled to the last word as the final prediction. To predict the relation label for each entity pair (s i , s j ), we select the last words of both entity spans to represent the corresponding span s i , s j . For example, supposing the last word of entity span s i , s j is indexed as end(i), end(j), the predicted relation label for entity pair (s i , s j ) is determined as the label yielding the highest probability from P \u03b8 (\u0176 end(i),end(j) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction Model",
"sec_num": null
},
{
"text": "We evaluated the performance of our proposed system on CoNLL04 (Roth and Yih, 2004) , ACE05 (Walker et al., 2006) , and ADE (Gurulingappa et al., 2012), the statistics of which are listed in Table 1 . Based on the conventional evaluation scheme for CoNLL04 and ACE05, we measured the micro F1-scores, and for ADE, we measured the macro F1-scores.",
"cite_spans": [
{
"start": 63,
"end": 83,
"text": "(Roth and Yih, 2004)",
"ref_id": "BIBREF21"
},
{
"start": 92,
"end": 113,
"text": "(Walker et al., 2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "CoNLL04 is an annotated corpus collected from newswires. We processed the data released by Eberts and Ulges (2020) 2 to obtain the BILOU notations of the entities. Thus, our data split is the same as that in Eberts and Ulges (2020) . ACE05 is an annotated corpus collected from various sources, including newswires and online forums. We used the data preprocessing scripts provided by 3 and , which inherits that of Miwa and Bansal (2016) 4 . After preprocessing, an entity is regarded as correct if its label and head region are identical to the ground truth.",
"cite_spans": [
{
"start": 208,
"end": 231,
"text": "Eberts and Ulges (2020)",
"ref_id": "BIBREF1"
},
{
"start": 385,
"end": 386,
"text": "3",
"ref_id": null
},
{
"start": 416,
"end": 438,
"text": "Miwa and Bansal (2016)",
"ref_id": "BIBREF15"
},
{
"start": 439,
"end": 440,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Adverse Drug Effect (ADE, Gurulingappa et al., 2012) is a corpus constructed based on the medical reports of drug usages and their adverse effects. Based on existing studies (Eberts and Ulges, 2020; Wang and Lu, 2020), we removed overlapping entities from the dataset, which comprises only 2.8% of the total number of entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We implemented the proposed system using Py-Torch (Li et al., 2020) and applied the pretrained BERT model provided by the Huggingface libraries (Wolf et al., 2020) . Except for those within the pretrained BERT model, the parameters were randomly initialized. During training, we adopted the AdamW algorithm (Loshchilov and Hutter, 2019) for parameter updates. The details of hyperparameters are listed in Appendix A.",
"cite_spans": [
{
"start": 50,
"end": 67,
"text": "(Li et al., 2020)",
"ref_id": null
},
{
"start": 144,
"end": 163,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 307,
"end": 336,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
{
"text": "All experiments were conducted on a single GPU of an NVIDIA Tesla V100 (16 GiB). Throughout this study, we report the average values of 5 runs with different random seeds for all evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
{
"text": "The main results of the proposed method are presented in Table 2 . We adopted TablERT (Ma et al., 2022) for the primary comparison and trained the system from scratch with different pretrained en- Table 2 : Comparison between the existing and the proposed method (TablERT-CNN). Here, \u25b3 and \u25b2 denote the use of micro-and macro-average F1 scores for evaluation, respectively. The results of TablERT are our replications, and the results of the others are reported values from the original papers. To ensure a fair comparison, the reported values of PURE follow the single-sentence setting.",
"cite_spans": [
{
"start": 86,
"end": 103,
"text": "(Ma et al., 2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 2",
"ref_id": null
},
{
"start": 197,
"end": 204,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "coders 5 . We evaluated the RE performance based on two criteria: RE and RE+. Specifically, REregards each predicted relation triple as correct if the relation label and spans of both entities are identical to the ground truth, whereas RE+ requires the labels of both entities to be correct. Because comparing systems using different encoders is unfair, we discuss the condition in which the encoders are aligned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "With regard to the CoNLL04 and ADE datasets, we observed that TablERT-CNN achieved high and stable performance on all datasets, on par with that of TablERT. In particular, for CoNLL04, the performance of the proposed method surpassed TablERT for both NER and RE. One possible explanation for this performance gain is that CoNLL04 is a relatively small dataset, as listed in Table 1 . Such a lowresource setting possibly brought out the advantage of TablERT-CNN, as the CNN layers helped to utilize rich information about dependencies among entities and relations.",
"cite_spans": [],
"ref_spans": [
{
"start": 374,
"end": 381,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "However, regarding the ACE05 dataset, we did not observe any performance gain by stacking the CNN layers. As listed in Table 2 , TablERT-CNN lagged its competitor TablERT for around 1.0 point on the F1 score of RE. The reason for this can be multifactorial, and the nature of the ACE05 dataset might provide an answer. The dataset contains entities that do not contribute to any relation triple, which significantly confuses the model during the RE.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 126,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Main Results",
"sec_num": "4.3"
},
{
"text": "Although our system exhibited a good performance based on multiple datasets, no significant improvement was observed against TablERT (Ma et al., 2022) . We hypothesize that the reason for this is the parameter updates within the BERT encoder during fine-tuning, which overshadowed the ability of the CNNs in the prediction model. Selfattention modules within BERT potentially learn to encode the dependencies between word pairs during fine-tuning, overlapping with those captured by the CNNs.",
"cite_spans": [
{
"start": 133,
"end": 150,
"text": "(Ma et al., 2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Experiments were conducted to verify this hypothesis. Specifically, we trained multiple BERTbased NER and RE systems (i.e., systems using a pretrained BERT as the encoder) while freezing the BERT parameters ( \u00a7 5.1). In this manner, we prevented the encoder from obtaining task-specific features during fine-tuning. The performance of these encoder-frozen systems was compared with that of their counterparts, whose encoder parameters were updated during fine-tuning. Based on this comparison, we investigated the extent to which the parameter updates within BERT contribute to Method Parameter Encoder Layer Updates 0 1 2 4 6 8 10 12 SpERT (Eberts and Ulges, 2020) No 27.4 30.9 32.1 36.5 41.0 40.6 37.2 8.0 Yes 51.0 70.7 79.9 85.4 85.9 86.9 86.5 87.7",
"cite_spans": [
{
"start": 653,
"end": 677,
"text": "(Eberts and Ulges, 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 578,
"end": 646,
"text": "Method Parameter Encoder Layer Updates 0 1 2 4 6 8 10 12",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "TablERT (Ma et al., 2022 Table 4 : Micro-average F1 scores of the RE on the CoNLL04 development set with/without parameter updates within the encoder (BERT BASE ) during fine-tuning. We fed the hidden states at different encoder layers into the prediction model for task-specific predictions.",
"cite_spans": [
{
"start": 8,
"end": 24,
"text": "(Ma et al., 2022",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "In addition, we are interested in how each BERT layer encodes dependencies that are helpful for NER and RE. Previous studies have utilized the outputs of the top BERT layers to produce word representations Eberts and Ulges, 2020; Ma et al., 2022; Wang and Lu, 2020) . However, we are curious whether the bottom or middle BERT layers also store useful information for solving the NER and RE. Therefore, we fed hidden states at the {0, 1, 2, 4, 6, 8, 10, 12}th BERT layer into the prediction model and examined the difference in performance ( \u00a7 5.2). Here, the 0th layer denotes the embedding layer of the BERT encoder.",
"cite_spans": [
{
"start": 206,
"end": 229,
"text": "Eberts and Ulges, 2020;",
"ref_id": "BIBREF1"
},
{
"start": 230,
"end": 246,
"text": "Ma et al., 2022;",
"ref_id": "BIBREF14"
},
{
"start": 247,
"end": 265,
"text": "Wang and Lu, 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Our analysis includes SpERT (Eberts and Ulges, 2020), TablERT (Ma et al., 2022) and the proposed method. We included TablERT for comparison because it is a counterpart of our system, incorporating no dependencies while performing RE. We included SpERT for comparison because it is a strong baseline utilizing a pretrained BERT encoder. Systems were trained on the CoNLL04 (Roth and Yih, 2004) training set and evaluated on the development set, using BERT BASE (Devlin et al., 2019) as the encoder. The experimental results are listed in Tables 3 and 4 . The plots corresponding to these results are presented in Appendix B. Finally, we analyze the effect of 2D-CNNs ( \u00a7 5.3).",
"cite_spans": [
{
"start": 62,
"end": 79,
"text": "(Ma et al., 2022)",
"ref_id": "BIBREF14"
},
{
"start": 372,
"end": 392,
"text": "(Roth and Yih, 2004)",
"ref_id": "BIBREF21"
},
{
"start": 460,
"end": 481,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 537,
"end": 551,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "As listed in Tables 3 and 4 , while freezing the parameters within BERT, we observed a decrease in the performance of both NER and RE for all target systems. SpERT exhibits a drastic decrease in performance while disabling the parameter updates within the encoder. This observation suggests that the system relies heavily on parameter updates of the encoder during task-specific fine-tuning to solve specific tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 27,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effect of Parameter Updates within BERT",
"sec_num": "5.1"
},
{
"text": "By contrast, TablERT-CNN exhibited the best performance among the target systems, even with BERT parameters frozen. This result indicates that in a situation in which the parameter updates within the encoder are infeasible (e.g., computational resources are limited), TablERT-CNN can be more promising than TablERT or SpERT in terms of achieving high performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Parameter Updates within BERT",
"sec_num": "5.1"
},
{
"text": "Furthermore, while freezing the BERT parameters, utilizing the hidden states of the top layers (i.e., layer 10 and higher) hindered the performance of all target systems. This phenomenon corresponds to the study by Rogers et al. (2020) , which concluded that the final layers of BERT are usually the most task-specific. For a pretrained BERT encoder without any parameter updates, the top layers of the model are specified to the pretraining task, i.e., the masked-language modeling (MLM) task. It can therefore be assumed that while using the hidden states of the top layers of BERT without any taskspecific parameter updates, the specificity toward the MLM task adversely affects the performance of the prediction model for both NER and RE.",
"cite_spans": [
{
"start": 215,
"end": 235,
"text": "Rogers et al. (2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Parameter Updates within BERT",
"sec_num": "5.1"
},
{
"text": "To visualize the performance change caused by the choice of BERT layer, the hidden states of which were utilized as word embeddings, we plotted the micro-F1 scores of all target systems, as shown in Figure 2 . Incorporating outputs from deeper BERT layers generally improves the prediction of all target systems. The improvement was significant at the bottom layers, but subtle at the top. Specifically, as shown in Figure 2 , from layers 0 to 6, we observed a significant boost in the performance of NER and RE for all target systems. The change in performance was more evident with RE than with NER. By contrast, the performance of all target systems remained flat, starting from layer 8. This tendency suggests that, while building a BERT-based NER and RE system, it may be sufficient to employ up to 8 layers for text encoding.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 416,
"end": 424,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effect of BERT Layer",
"sec_num": "5.2"
},
{
"text": "Our findings match those reported by Jawahar et al. (2019) , suggesting that BERT encodes a hierarchy of linguistics from bottom to top. Jawahar et al. (2019) found that BERT learns to encode longdistance dependencies, e.g., subject-verb agreements at deeper layers, which possibly explains the significant improvement in the RE performance while using outputs of the deeper BERT layers.",
"cite_spans": [
{
"start": 37,
"end": 58,
"text": "Jawahar et al. (2019)",
"ref_id": "BIBREF5"
},
{
"start": 137,
"end": 158,
"text": "Jawahar et al. (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of BERT Layer",
"sec_num": "5.2"
},
{
"text": "As shown in Figure 2 , while employing the outputs from the bottom BERT layers (i.e., from layers 0 to 4), TablERT-CNN outperformed the other systems by a relatively large margin. We owe the performance gap to the ability of TablERT-CNN to capture local dependencies. As noted in an existing study, the bottom BERT layers encode the surface information, for example, the phrasal syntax and word order (Jawahar et al., 2019; Rogers et al., 2020) . As a result, the outputs at the bottom BERT layers lack contextualized information incorporating long-range dependencies, which are crucial for extracting relations. Therefore, whereas SpERT and TablERT suffer from the absence of word-word interactions, TablERT-CNN overcomes this issue by encoding them in the prediction model. By observing the table representation as a 2D image and each cell as a pixel, our method captures the local dependencies within each convolution kernel using 2D-CNNs. This advantage is apparent when word embeddings are not properly contextualized. However, the superiority of TablERT-CNN becomes inconspicuous when the depth of the BERT layers increases. This phenomenon indicates that, when the contextualization ability of the encoder improves, the strength of a 2D-CNN to incorporate dependencies diminishes because the encoder has already captured the necessary information.",
"cite_spans": [
{
"start": 401,
"end": 423,
"text": "(Jawahar et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 424,
"end": 444,
"text": "Rogers et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Effect of 2D-CNNs",
"sec_num": "5.3"
},
{
"text": "Notably, although we have shown the superiority of TablERT-CNN when utilizing the bottom BERT layers, it is natural to suspect that the performance gain resulted from the additional parameters introduced by the convolutional layers. Compared with SpERT and TablERT, TalERT-CNN introduces more trainable parameters, thereby increasing the ability of the system to fit the training data. To determine whether the performance gain resulted from the ability of the CNN to capture local dependencies or merely from an increased number of parameters, we replotted Figure 2 , as shown in Figure 3 , the result of which shows the relationship between the RE micro-F1 scores and the number of trainable parameters of each target system. From Figure 3 , we observed that TablERT-CNN lies on the left-most side among all of the target systems. To paraphrase, when keeping the number of trainable parameters the same, TablERT-CNN performs better than its competitors. This tendency is apparent when the number of trainable parameters is small, which indicates that TablERT-CNN can be a prospective option when the computational resources are limited.",
"cite_spans": [],
"ref_spans": [
{
"start": 558,
"end": 566,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 581,
"end": 589,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 733,
"end": 741,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Effect of 2D-CNNs",
"sec_num": "5.3"
},
{
"text": "To conclude, TablERT-CNN can be a promising architecture when parameter updates within the encoder are infeasible or when the encoder is not well-contextualized. Under these situations, a 2D-CNN plays an important role in encoding the local dependencies, thus improving the NER and RE predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of 2D-CNNs",
"sec_num": "5.3"
},
{
"text": "We presented TablERT-CNN, a novel method for jointly extracting entities and relations with 2D-CNNs. The method casts NER and RE as tablelabeling problems, representing each table cell as a pixel and each table as a 2D image. By applying 2D-CNNs, the method predicts the label of each table cell to extract entities and relations. Experiments conducted on CoNLL04, ACE05, and ADE demonstrated that TablERT-CNN performed on par with current state-of-art systems when the pretrained encoders were aligned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "To explore why TablERT-CNN did not outperform existing systems by a significant margin, we conducted experiments to compare their performance with and without parameter updates of the BERT encoder during the fine-tuning. We observed that TablERT-CNN performed reasonably well even without updating the encoder parameters, whereas its competitors suffered a decrease in performance. These results indicate that the BERT encoder can capture task-specific dependencies among tokens and labels within its architecture, based on parameter updates during fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In the future, we plan to model the dependencies among table cells using other neural architectures. Prospective directions include 2D-transformers that compute the attention across element pairs in a 2D array, or Routing Transformers that utilize local attentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Figures 4(a) and 4(b) correspond to Tables 3 and 4 , respectively. As we can see, TablERT-CNN exhibited a relatively high performance, even when the BERT parameters were frozen. In addition, when the BERT parameters were frozen, the performance of all target systems decreased while incorporating the hidden states of the top (10-12) encoder layers. Micro-F1",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 50,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "TablERT-CNN (updated) TablERT-CNN (frozen) TablERT (updated) TablERT (frozen) SpERT (updated) SpERT (frozen) (a) NER micro-F1 scores. Here, \"updated\" and \"frozen\" indicate the status of each parameter within BERT during the fine-tuning process.",
"cite_spans": [
{
"start": 51,
"end": 60,
"text": "(updated)",
"ref_id": null
},
{
"start": 84,
"end": 93,
"text": "(updated)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT Layer",
"sec_num": null
},
{
"text": "Previous studies have adopted the top layer(Li et al., 2019;Eberts and Ulges, 2020) or the average of the top three layers(Wang and Lu, 2020) to generate word representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/lavis-nlp/spert 3 https://github.com/dwadden/dygiepp 4 https://github.com/tticoin/LSTM-ER",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The code is available at https://github.com/ YoumiMa/TablERT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "B Effect of Parameter Updates with BERT (cont.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This paper is based on results obtained from a project, JPNP18002, commissioned by the New Energy and Industrial Technology Development Organization (NEDO). We appreciate the insightful comments and suggestions of the anonymous reviewers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "The values of the hyperparameters used during the experiments are listed in Table 5 . CNN configurations were determined by conducting grid searches on the development split of each dataset, whereas the training configurations were adopted directly from Ma et al. (2022) . We applied a scheduler that linearly increases the learning rate from 0 to the maximum value during the warm-up period and gradually decreases it afterward.",
"cite_spans": [
{
"start": 254,
"end": 270,
"text": "Ma et al. (2022)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Hyper-parameters",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Span-based joint entity and relation extraction with transformer pre-training",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Eberts",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Ulges",
"suffix": ""
}
],
"year": 2020,
"venue": "24th European Conference on Artificial Intelligence (ECAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Eberts and Adrian Ulges. 2020. Span-based joint entity and relation extraction with transformer pre-training. In 24th European Conference on Artifi- cial Intelligence (ECAI).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Table filling multi-task recurrent neural network for joint entity and relation extraction",
"authors": [
{
"first": "Pankaj",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Andrassy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2537--2547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pankaj Gupta, Hinrich Sch\u00fctze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural net- work for joint entity and relation extraction. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2537-2547, Osaka, Japan. The COL- ING 2016 Organizing Committee.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Development of a benchmark corpus to support the automatic extraction of drugrelated adverse effects from medical case reports",
"authors": [
{
"first": "Harsha",
"middle": [],
"last": "Gurulingappa",
"suffix": ""
},
{
"first": "Abdul",
"middle": [
"Mateen"
],
"last": "Rajput",
"suffix": ""
},
{
"first": "Angus",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Juliane",
"middle": [],
"last": "Fluck",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Hofmann-Apitius",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Toldo",
"suffix": ""
}
],
"year": 2012,
"venue": "Text Mining and Natural Language Processing in Pharmacogenomics",
"volume": "45",
"issue": "",
"pages": "885--892",
"other_ids": {
"DOI": [
"10.1016/j.jbi.2012.04.008"
]
},
"num": null,
"urls": [],
"raw_text": "Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drug- related adverse effects from medical case reports. Journal of Biomedical Informatics, 45(5):885-892. Text Mining and Natural Language Processing in Pharmacogenomics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "REBEL: Relation extraction by end-to-end language generation",
"authors": [
{
"first": "Pere-Llu\u00eds Huguet",
"middle": [],
"last": "Cabot",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2021",
"volume": "",
"issue": "",
"pages": "2370--2381",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-emnlp.204"
]
},
"num": null,
"urls": [],
"raw_text": "Pere-Llu\u00eds Huguet Cabot and Roberto Navigli. 2021. REBEL: Relation extraction by end-to-end language generation. In Findings of the Association for Com- putational Linguistics: EMNLP 2021, pages 2370- 2381, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "What does BERT learn about the structure of language",
"authors": [
{
"first": "Ganesh",
"middle": [],
"last": "Jawahar",
"suffix": ""
},
{
"first": "Beno\u00eet",
"middle": [],
"last": "Sagot",
"suffix": ""
},
{
"first": "Djam\u00e9",
"middle": [],
"last": "Seddah",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3651--3657",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1356"
]
},
"num": null,
"urls": [],
"raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Imagenet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Advances in Neural Information Processing Systems",
"volume": "25",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- ton. 2012. Imagenet classification with deep con- volutional neural networks. In Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning of language representations. In International Confer- ence on Learning Representations (ICLR).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Pritam Damania, and Soumith Chintala. 2020. Pytorch distributed: Experiences on accelerating data parallel training",
"authors": [
{
"first": "Shen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yanli",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Rohan",
"middle": [],
"last": "Varma",
"suffix": ""
},
{
"first": "Omkar",
"middle": [],
"last": "Salpekar",
"suffix": ""
},
{
"first": "Pieter",
"middle": [],
"last": "Noordhuis",
"suffix": ""
},
{
"first": "Teng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Vaughan",
"suffix": ""
},
{
"first": "Pritam",
"middle": [],
"last": "Damania",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. VLDB Endow",
"volume": "13",
"issue": "12",
"pages": "3005--3018",
"other_ids": {
"DOI": [
"10.14778/3415478.3415530"
]
},
"num": null,
"urls": [],
"raw_text": "Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, and Soumith Chin- tala. 2020. Pytorch distributed: Experiences on ac- celerating data parallel training. Proc. VLDB Endow., 13(12):3005-3018.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Entityrelation extraction as multi-turn question answering",
"authors": [
{
"first": "Xiaoya",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Fan",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Zijun",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xiayu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Duo",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Mingxin",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1340--1350",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1129"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity- relation extraction as multi-turn question answering. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 1340- 1350, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A joint neural model for information extraction with global features",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lingfei",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7999--8009",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.713"
]
},
"num": null,
"urls": [],
"raw_text": "Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 7999-8009, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "GCDT: A global context enhanced deep transition architecture for sequence labeling",
"authors": [
{
"first": "Yijin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jinan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yufeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2431--2441",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1233"
]
},
"num": null,
"urls": [],
"raw_text": "Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. GCDT: A global context enhanced deep transition architecture for se- quence labeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2431-2441, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Confer- ence on Learning Representations (ICLR).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A general framework for information extraction using dynamic span graphs",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Wadden",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3036--3046",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1308"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 3036-3046, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Named entity recognition and relation extraction using enhanced table filling by contextualized representations",
"authors": [
{
"first": "Youmi",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Hiraoka",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
}
],
"year": 2022,
"venue": "Journal of Natural Language Processing",
"volume": "29",
"issue": "1",
"pages": "187--223",
"other_ids": {
"DOI": [
"10.5715/jnlp.29.187"
]
},
"num": null,
"urls": [],
"raw_text": "Youmi Ma, Tatsuya Hiraoka, and Naoaki Okazaki. 2022. Named entity recognition and relation extraction us- ing enhanced table filling by contextualized repre- sentations. Journal of Natural Language Processing, 29(1):187-223.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "End-to-end relation extraction using LSTMs on sequences and tree structures",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1105--1116",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1105"
]
},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1105-1116, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Modeling joint entity and relation extraction with table representation",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Miwa",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Sasaki",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1858--1869",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1200"
]
},
"num": null,
"urls": [],
"raw_text": "Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table represen- tation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858-1869, Doha, Qatar. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Cicero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation between augmented natural languages",
"authors": [
{
"first": "Giovanni",
"middle": [],
"last": "Paolini",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Athiwaratkun",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Krone",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Achille",
"suffix": ""
},
{
"first": "Rishita",
"middle": [],
"last": "Anubhai",
"suffix": ""
},
{
"first": "Cicero",
"middle": [
"Nogueira"
],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Soatto",
"suffix": ""
}
],
"year": 2021,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giovanni Paolini, Ben Athiwaratkun, Jason Krone, Jie Ma, Alessandro Achille, RISHITA ANUBHAI, Ci- cero Nogueira dos Santos, Bing Xiang, and Stefano Soatto. 2021. Structured prediction as translation be- tween augmented natural languages. In International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Design challenges and misconceptions in named entity recognition",
"authors": [
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009)",
"volume": "",
"issue": "",
"pages": "147--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Compu- tational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "HySPA: Hybrid span generation for scalable text-to-graph extraction",
"authors": [
{
"first": "Liliang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Chenkai",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Heng",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "4066--4078",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.356"
]
},
"num": null,
"urls": [],
"raw_text": "Liliang Ren, Chenkai Sun, Heng Ji, and Julia Hock- enmaier. 2021. HySPA: Hybrid span generation for scalable text-to-graph extraction. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 4066-4078, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A primer in BERTology: What we know about how BERT works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "842--866",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00349"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842-866.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A linear programming formulation for global inference in natural language tasks",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Roth and Wen-tau Yih. 2004. A linear program- ming formulation for global inference in natural lan- guage tasks. In Proceedings of the Eighth Confer- ence on Computational Natural Language Learn- ing (CoNLL-2004) at HLT-NAACL 2004, pages 1-8, Boston, Massachusetts, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems (NeurIPS), pages 5998-6008.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Entity, relation, and event extraction with contextualized span representations",
"authors": [
{
"first": "David",
"middle": [],
"last": "Wadden",
"suffix": ""
},
{
"first": "Ulme",
"middle": [],
"last": "Wennberg",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5784--5789",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1585"
]
},
"num": null,
"urls": [],
"raw_text": "David Wadden, Ulme Wennberg, Yi Luan, and Han- naneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5784- 5789, Hong Kong, China. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Ace 2005 multilingual training corpus. Philadelphia: Linguistic Data Consortium",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Strassel",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Medero",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Philadelphia: Linguistic Data Con- sortium.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Two are better than one: Joint entity and relation extraction with tablesequence encoders",
"authors": [
{
"first": "Jue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1706--1721",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.133"
]
},
"num": null,
"urls": [],
"raw_text": "Jue Wang and Wei Lu. 2020. Two are better than one: Joint entity and relation extraction with table- sequence encoders. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1706-1721, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "von Platen",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Scao",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Lhoest",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-demos.6"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A partition filter network for joint entity and relation extraction",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jinlan",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhongyu",
"middle": [],
"last": "Wei",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "185--197",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.17"
]
},
"num": null,
"urls": [],
"raw_text": "Zhiheng Yan, Chong Zhang, Jinlan Fu, Qi Zhang, and Zhongyu Wei. 2021. A partition filter network for joint entity and relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Nat- ural Language Processing, pages 185-197, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "End-to-end neural relation extraction with global optimization",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1730--1740",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1182"
]
},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global op- timization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1730-1740, Copenhagen, Denmark. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A frustratingly easy approach for entity and relation extraction",
"authors": [
{
"first": "Zexuan",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "50--61",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.5"
]
},
"num": null,
"urls": [],
"raw_text": "Zexuan Zhong and Danqi Chen. 2021. A frustratingly easy approach for entity and relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 50-61, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Micro-F1 scores of all target systems while varying the encoder layer whose outputs were fed into the prediction model (CoNLL04 development set).",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Performance of all target systems while varying the number of trainable parameters, as measured using the RE micro-F1 score (CoNLL04 development set).",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "Micro-F1 scores of all target systems while varying the encoder layer whose outputs were fed into the prediction model (CoNLL04 development set).",
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Statistics of each dataset used in this study."
},
"TABREF5": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Method SpERT (Eberts and Ulges, 2020) TablERT (Ma et al., 2022) TablERT-CNN (Ours)</td><td>Parameter Updates No Yes No Yes No Yes</td><td>Encoder Layer 4 6 4.6 7.8 16.4 35.4 49.6 64.7 67.2 69.3 70.2 69.1 0 1 2 8 10 12 3.0 3.3 3.7 6.0 5.8 0.0 28.8 37.4 39.3 47.1 53.0 54.0 55.9 51.7 36.0 47.9 60.9 66.5 71.3 70.5 71.0 70.7 53.5 54.8 57.6 64.4 66.2 67.1 64.4 61.5 54.0 59.9 62.3 67.8 70.6 70.3 70.1 70.6</td></tr></table>",
"num": null,
"text": "Micro-average NER F1 scores on the CoNLL04 development set with/without parameter updates within the encoder (BERT BASE ) during fine-tuning. We fed the hidden states at different encoder layers into the prediction model for task-specific predictions."
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": ""
}
}
}
}