{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:21:16.724801Z"
},
"title": "E.T.: Entity-Transformers Coreference augmented Neural Language Model for richer mention representations via Entity-Transformer blocks",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Stylianou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aristotle University of Thessaloniki",
"location": {
"country": "Greece"
}
},
"email": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Vlahavas",
"suffix": "",
"affiliation": {
"laboratory": "Aristotle University of Thessaloniki School of Informatics Greece",
"institution": "",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In the last decade, the field of Neural Language Modelling has witnessed enormous changes, with the development of novel models through the use of Transformer architectures. However, even these models struggle to model long sequences due to memory constraints and increasing computational complexity. Coreference annotations over the training data can provide context far beyond the modelling limitations of such language models. In this paper we present an extension over the Transformer-block architecture used in neural language models, specifically in GPT2, in order to incorporate entity annotations during training. Our model, GPT2E, extends the Transformer layers architecture of GPT2 to Entity-Transformers, an architecture designed to handle coreference information when present. To that end, we achieve richer representations for entity mentions, with insignificant training cost. We show the comparative model performance between GPT2 and GPT2E in terms of Perplexity on the CoNLL 2012 and LAMBADA datasets as well as the key differences in the entity representations and their effects in downstream tasks such as Named Entity Recognition. Furthermore, our approach can be adopted by the majority of Transformer-based language models.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In the last decade, the field of Neural Language Modelling has witnessed enormous changes, with the development of novel models through the use of Transformer architectures. However, even these models struggle to model long sequences due to memory constraints and increasing computational complexity. Coreference annotations over the training data can provide context far beyond the modelling limitations of such language models. In this paper we present an extension over the Transformer-block architecture used in neural language models, specifically in GPT2, in order to incorporate entity annotations during training. Our model, GPT2E, extends the Transformer layers architecture of GPT2 to Entity-Transformers, an architecture designed to handle coreference information when present. To that end, we achieve richer representations for entity mentions, with insignificant training cost. We show the comparative model performance between GPT2 and GPT2E in terms of Perplexity on the CoNLL 2012 and LAMBADA datasets as well as the key differences in the entity representations and their effects in downstream tasks such as Named Entity Recognition. Furthermore, our approach can be adopted by the majority of Transformer-based language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language modelling is the task of transforming individual words into vector representations based on the context they appear in. Hence, distant term dependencies are an inherited issue within the task. Language models always seek for smart approaches towards incorporating context from longer distances as it allows for better representations compared to their limited context counterparts. Intuitively, imagine attempting to start reading a novel series from the second book onward, with no information about the first. The amount of information previously missed is something that cannot be acquired. However, this is the case with most language models. While an understanding of the words is present due to the contextual information at each word's occurrence, entity information that are in distant text are lost or not transferred.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Until recently, Recurrent Neural Networks (RNNs), and specifically Long Short-Term Memory (LSTM) networks, have been the core of all the state-of-the-art approaches (McCann et al., 2017; Peters et al., 2018) . Thanks to the Transformers architecture (Vaswani et al., 2017) , through the use of attention mechanisms, models such as XLNet (Yang et al., 2019) , GPT (Radford et al., 2019) and BERT (Devlin et al., 2019) can account for even longer sequences. However, the computational limitations of the multihead attention in the architecture make it hard to increase the contextual information in such models (Tay et al., 2020) . As a result, research has been focused on introducing variations to the transformer architecture, with focus on the multi-head attention mechanism, in order to alleviate part of the computational cost and increase the contextual information available to models.",
"cite_spans": [
{
"start": 165,
"end": 186,
"text": "(McCann et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 187,
"end": 207,
"text": "Peters et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 250,
"end": 272,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 337,
"end": 356,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 363,
"end": 385,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 395,
"end": 416,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 609,
"end": 627,
"text": "(Tay et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a novel approach, that makes use of coreference information during training a language model via our Entity-Transformer architecture, which extends the original Transformer block in Transformer-Based language models. To that end, we incorporate the important entity information that would otherwise be unreachable for the model. As a result, we effectively boost the representations of the entity mentions, where entity information is present, without hindering the performance of the language model where entities are not present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our experiments, we extend the GPT2 architecture to formulate our model, named GPT2E and train it on the CoNLL-2012 dataset (Pradhan et al., 2012) using the annotated coreference information. We evaluate the model's performance in terms of Perplexity on the ConLL 2012 and the LAMBADA (Paperno et al., 2016) datasets and showcase the effects of such training on the word representations as well as on the downstream task of Named Entity Recognition (NER) using the CoNLL 2012 dataset. To that end, we compare GPT2E's performance to a base model (GPT2) when trained on the same data, to highlight the effects of coreference information when paird with our Entity-Transformer architecture.",
"cite_spans": [
{
"start": 288,
"end": 310,
"text": "(Paperno et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the last decade, the field of Neural Language Modelling has witnessed enormous changes. With pretrained neural language models being the current go-to approach in all NLP reserach, a variety of methods models have been developed. We distinguish two major categories:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "General purpose language models. Steady improvements have been achieved to this field with the use of deep RNNs and pre-training on a large number of training data (McCann et al., 2017; Peters et al., 2018) . With Transformers, language models have been able to capture longer linguistic structures without the use of RNNs and surpass their RNN counterparts by a big margin (Radford et al., 2018; Devlin et al., 2019) . Recent research has focused on ways of taking advantage of more context (Yang et al., 2019; Fan et al., 2020) and introducing effective methodologies to scale up the models and train them (Radford et al., 2019; Shoeybi et al., 2019; Rosset, 2019; Brown et al., 2020) .",
"cite_spans": [
{
"start": 164,
"end": 185,
"text": "(McCann et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 186,
"end": 206,
"text": "Peters et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 374,
"end": 396,
"text": "(Radford et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 397,
"end": 417,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 492,
"end": 511,
"text": "(Yang et al., 2019;",
"ref_id": "BIBREF25"
},
{
"start": 512,
"end": 529,
"text": "Fan et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 608,
"end": 630,
"text": "(Radford et al., 2019;",
"ref_id": "BIBREF16"
},
{
"start": 631,
"end": 652,
"text": "Shoeybi et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 653,
"end": 666,
"text": "Rosset, 2019;",
"ref_id": "BIBREF17"
},
{
"start": 667,
"end": 686,
"text": "Brown et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Language modelling with entity decisions. YangLM (Yang et al., 2017) was the first to incorporate entity decisions to a language model by introducing learnable entity embeddings. Alternative entity handling mechanisms are introduced in both EntityNLM (Ji et al., 2017) and SetLM (Kunz and Hardmeier, 2019) in addition to a length variable for EntityNLM. All of the aforementioned approaches are RNNbased and hence their performance is expected to be sub-par to Transformer based models. Furthermore, (Kunz and Hardmeier, 2019) concludes that language models handling entity decisions do not improve in performance with the addition of more hidden units and that the source data is of limited number and of specific genre which do not highlight the benefits of explicit entity information. Clark et al. (2019) , through attention head probing, experimentally proves that BERT does model anaphoric phenomenon in the form of antecedent selection, with attention heads directly attending to the respective mention's antecedent. However, these information are not explicitly used to further enhance the model. Furthermore, ERNIE (Zhang et al., 2019) , which uses knowledge graphs to infuse entity information to the model, only does so for named entities, completely ignoring pronouns and nominal mentions.",
"cite_spans": [
{
"start": 49,
"end": 68,
"text": "(Yang et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 251,
"end": 268,
"text": "(Ji et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 279,
"end": 305,
"text": "(Kunz and Hardmeier, 2019)",
"ref_id": "BIBREF8"
},
{
"start": 500,
"end": 526,
"text": "(Kunz and Hardmeier, 2019)",
"ref_id": "BIBREF8"
},
{
"start": 789,
"end": 808,
"text": "Clark et al. (2019)",
"ref_id": "BIBREF2"
},
{
"start": 1124,
"end": 1144,
"text": "(Zhang et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In order to incorporate coreference information to a language model, we require training and testing data with entity information present and a mechanism to handle existing and non-existing entities. To that end, our proposed model, GPT2E, is based on the GPT2 language model, with changes to the Transformer block and an entity handling mechanism, which are described in the following subsections. As a result, GPT2E is a combination of multi-layer Entity-Transformer decoder blocks. The model applies multiheaded self-attention operations over the input tokens, position-wise feed-forward transformations, and entity-based attention operations. The model architecture can be described as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h 0 = U W e + W p h l = entity transformer block(h l\u22121 , E)\u2200 i \u2208 [1, n] P (u) = softmax(h n W T e )",
"eq_num": "(1)"
}
],
"section": "Our approach",
"sec_num": "3"
},
{
"text": "where U = (u k , . . . , u 1 ) is the context vector of tokens, n is the number of layers, W e is the token embedding matrix, W p is the position embedding matrix and E is the context vector of entity representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},
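{
"text": "For illustration, the forward pass of Equation (1) can be sketched in a minimal, hypothetical PyTorch-style module; the class and argument names below are illustrative (not the authors' implementation) and the Entity-Transformer block internals are detailed in Section 3.1.\n\nimport torch\nimport torch.nn as nn\n\nclass GPT2ESketch(nn.Module):\n    # Hypothetical sketch of Equation (1); names and shapes are assumptions.\n    def __init__(self, vocab_size, n_ctx, d_embd, n_layers, block_factory):\n        super().__init__()\n        self.W_e = nn.Embedding(vocab_size, d_embd)  # token embedding matrix W_e\n        self.W_p = nn.Embedding(n_ctx, d_embd)       # position embedding matrix W_p\n        self.blocks = nn.ModuleList([block_factory() for _ in range(n_layers)])\n\n    def forward(self, u, entities):\n        # h_0 = U W_e + W_p\n        positions = torch.arange(u.size(1), device=u.device)\n        h = self.W_e(u) + self.W_p(positions)\n        # h_l = entity_transformer_block(h_{l-1}, E) for l in [1, n]\n        for block in self.blocks:\n            h = block(h, entities)\n        # P(u) = softmax(h_n W_e^T), reusing the token embedding matrix\n        return torch.softmax(h @ self.W_e.weight.T, dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our approach",
"sec_num": "3"
},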
{
"text": "Entity-Transformer (ET) blocks are extensions of the transformer blocks used in GPT2, designed to handle entities in the form of vectors of shape E i \u2208 R 1\u00d7d embd , where d embd is the embedding dimension the model outputs. Effectively, the entity representations are used directly inside the ET blocks. The input representation first goes through a layer normalization (Ba et al., 2016 ) and a masked multihead self attention layer (Vaswani et al., 2017) , followed by a residual connection (He et al., 2016) . The output of the residual connection is then used in a layer normalization and position-wise feed foward layer followed by another residual connection. The final residual output is used in the entity attention layer before it is forwarded outside of the Entity-Transformer block. The entity attention layer is an adaptation of the masked multi-head self attention layer which considers Entities (E) as the Key (K) value in the Query (Q), Key (K), Value (V) attention mechanism scheme. The architecture of the Entity-Transformer blocks and the entity attention mechanism used are shown in Figure 1 .",
"cite_spans": [
{
"start": 370,
"end": 386,
"text": "(Ba et al., 2016",
"ref_id": "BIBREF0"
},
{
"start": 433,
"end": 455,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 492,
"end": 509,
"text": "(He et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 1101,
"end": 1109,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Entity-Transformer block",
"sec_num": "3.1"
},
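{
"text": "A minimal sketch of the Entity-Transformer block described above, assuming a PyTorch-style implementation, is given below. The entities E are used as the Key, as stated; treating the hidden states as both Query and Value, and adding a residual connection after the entity attention layer, are our assumptions rather than details taken from the original description.\n\nimport torch\nimport torch.nn as nn\n\nclass EntityTransformerBlock(nn.Module):\n    # Hypothetical sketch; layer sizes and the final residual are assumptions.\n    def __init__(self, d_embd, n_heads):\n        super().__init__()\n        self.ln1 = nn.LayerNorm(d_embd)\n        self.self_attn = nn.MultiheadAttention(d_embd, n_heads, batch_first=True)\n        self.ln2 = nn.LayerNorm(d_embd)\n        self.ff = nn.Sequential(nn.Linear(d_embd, 4 * d_embd), nn.GELU(), nn.Linear(4 * d_embd, d_embd))\n        self.entity_attn = nn.MultiheadAttention(d_embd, n_heads, batch_first=True)\n\n    def forward(self, h, entities):\n        # layer normalization, masked multi-head self-attention, residual connection\n        x = self.ln1(h)\n        causal = torch.triu(torch.ones(h.size(1), h.size(1), dtype=torch.bool, device=h.device), 1)\n        attn_out, _ = self.self_attn(x, x, x, attn_mask=causal)\n        h = h + attn_out\n        # layer normalization, position-wise feed-forward, residual connection\n        h = h + self.ff(self.ln2(h))\n        # entity attention: entities E act as the Key, hidden states as Query and Value\n        ent_out, _ = self.entity_attn(h, entities, h, need_weights=False)\n        return h + ent_out",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-Transformer block",
"sec_num": "3.1"
},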
{
"text": "We maintain a persistent set of entities E, that holds the hidden representation of the last entity's mention from our model. Each entity representation E i is initialised as a vector of ones, which allows for minimal noise in the first occurrence of the entity. Tokens that are not part of the entity mention have a consistent entity representation E \u2205 , as a vector of ones, similar to unseen entity mentions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity handling mechanism",
"sec_num": "3.2"
},
{
"text": "During each training step, E i takes the latest value of the respective entity's latest hidden representation from E and is updated to the new value at the end of each step. These entity representations are handled with the use of Entity-Transformer blocks. The final hidden representation of the input token, after it is affected by the previous entity representation E i , is considered to be the new entity representations and replaces E i in E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity handling mechanism",
"sec_num": "3.2"
},
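{
"text": "A minimal sketch of the entity handling mechanism, assuming a dictionary keyed by gold coreference cluster ids, is shown below; the registry layout and method names are illustrative assumptions. lookup builds the context vector of entity representations E passed to the Entity-Transformer blocks, and update is called once at the end of each training step.\n\nimport torch\n\nclass EntityRegistry:\n    # Hypothetical sketch of the persistent entity set E described above.\n    def __init__(self, d_embd):\n        self.d_embd = d_embd\n        self.store = {}  # cluster id -> latest hidden representation of that entity\n\n    def lookup(self, entity_ids):\n        # Unseen entities and the empty entity (None) map to a vector of ones.\n        ones = torch.ones(self.d_embd)\n        return torch.stack([self.store.get(eid, ones) if eid is not None else ones for eid in entity_ids])\n\n    def update(self, entity_ids, hidden_states):\n        # At the end of a step, the final hidden state of a mention token replaces E_i in E.\n        for eid, h in zip(entity_ids, hidden_states):\n            if eid is not None:\n                self.store[eid] = h.detach()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity handling mechanism",
"sec_num": "3.2"
},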
{
"text": "Our approach is evaluated in two steps. First we evaluate our GPT2E language model, in comparison with a GPT2 model, trained on CoNLL 2012 and evaluated on both CoNLL 2012 and LAMBADA datasets. We then use the trained models to extract word representations for entity mentions based on the coreference annotations in text and measure the differences of such representations. For NER, we use the language models to extract word representatios and train the same baseline model on the CoNLL 2012 dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "In our experiments we use the GPT2-small configuration with 117M parameters, 12 heads and 12 layers for both GPT2 and GPT2E. Both models use a Byte-Pair Encoder to process the input, a learning rate of 2e-5 and train for 10e5 steps, with validation every 10e3 steps. We use a batch size of 1, to highlight the effect of entity updates in the system, as the entity representations are only updated at the end of each training step.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "After training, we compute the differences between the representations of all entity mentions in the coreference clusters as derived from GPT2 and GPT2E. Consequently, we conduct experiments with no contextual information for each word and we also distinguish the results between using and not using entity information. We perform these experiments separately for all entities in the dataset and present the average score for different type of words based on their part-of-speech tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
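{
"text": "The similarity measurement can be sketched as follows, under the assumption that mention representations are first extracted from either model; averaging over coreference clusters and part-of-speech tags happens outside this hypothetical helper.\n\nimport torch\nimport torch.nn.functional as F\n\ndef average_pairwise_similarity(mention_vectors):\n    # Mean cosine similarity over all pairs of mention representations of one entity.\n    sims = []\n    for i in range(len(mention_vectors)):\n        for j in range(i + 1, len(mention_vectors)):\n            sims.append(F.cosine_similarity(mention_vectors[i], mention_vectors[j], dim=0))\n    return torch.stack(sims).mean() if sims else torch.tensor(float(\"nan\"))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},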
{
"text": "The NER models are based on the Lample et al. (2016) architecture. However, our models use only word embeddings from the pre-trained GPT2 and GPT2E models respectively, removing the character embeddings to eliminate any information input apart from the coreference-trained representations. We use a hidden size of 512 for the Bidirectional LSTMs, 0.5 dropout (Srivastava et al., 2014) between layers and a learning rate of 0.0001 with 0.9 decay per epoch with Adam (Kingma and Ba, 2014) . We trained our models for 20 epoches, with early stopping and a batch size of 32.",
"cite_spans": [
{
"start": 32,
"end": 52,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF9"
},
{
"start": 359,
"end": 384,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF20"
},
{
"start": 477,
"end": 486,
"text": "Ba, 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
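{
"text": "As a rough sketch of the NER setup under our assumptions, the frozen word representations extracted from GPT2 or GPT2E feed a bidirectional LSTM followed by a linear projection to the IOB label space; the CRF output layer of the full Lample et al. (2016) architecture is omitted from this simplified sketch.\n\nimport torch.nn as nn\n\nclass BiLSTMTagger(nn.Module):\n    # Hypothetical simplified sketch; not the exact baseline implementation.\n    def __init__(self, d_embd, n_labels, hidden=512, dropout=0.5):\n        super().__init__()\n        self.lstm = nn.LSTM(d_embd, hidden, batch_first=True, bidirectional=True)\n        self.dropout = nn.Dropout(dropout)\n        self.proj = nn.Linear(2 * hidden, n_labels)\n\n    def forward(self, lm_embeddings):\n        # lm_embeddings: (batch, seq, d_embd) word representations from GPT2 or GPT2E\n        out, _ = self.lstm(lm_embeddings)\n        return self.proj(self.dropout(out))  # per-token IOB label scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},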
{
"text": "All the experiments were run on a computer with a single Titan V 12GB graphics card, 32GB of memory and an Intel i7-8700 processor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "We chose the English CoNLL-2012 dataset for training, which is based on the OntoNotes 5.0 corpus (Weischedel et al., 2011) and contains over 1.3 million words with 35,143 entity mentions in the training set and 170 thousand words with 4,532 entity mentions in the test set making it the most suitable dataset for training a language model with coreference annotations. In the dataset common nouns, pronouns and proper nouns contribute 90% of the words in both train and test English sets. For our out of domain evaluation we chose the LAMBADA dataset. This choice was based on the premise that the dataset is primarly used for word predictions requiring broad discourse context and that the target words are mostly proper nouns and common nouns (85% fo the total target words). As a result, we expect that the importance of an entity-centric language model would be better displayed in such a scenario.",
"cite_spans": [
{
"start": 97,
"end": 122,
"text": "(Weischedel et al., 2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Preprocessing",
"sec_num": "4.2"
},
{
"text": "As we utilize the CoNLL-2012 dataset for both the Language Modelling task and the NER task, we formulate the data in two different ways. Table 1 . Specifically, for each token we also introduce a second variable \"E\" which indicates the entity in which the token is part of, using the gold coreference annotations, with a special \"\u2205\" for tokens that are not part of an entity. For the CoNLL dataset, we populate E with the golden entities from the coreference resolution shared task. For the LAMBADA dataset we use the \u2205 for all tokens. In comparison to the original data formulation described in Ji et al. 2017, we opted to not use the L variable to denote the entity length (i.e. the number of remaining tokens in the entity mention) as it's main use is enable entity mention prediction, which we do not attempt at this stage. We use Byte Pair Encoding (BPE) (Sennrich et al., 2016) for the final input representation of the word instances, similar to GPT2.",
"cite_spans": [
{
"start": 860,
"end": 883,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Datasets and Preprocessing",
"sec_num": "4.2"
},
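{
"text": "For illustration, an abbreviated rendering of the (X, E) formulation of Table 1 as parallel Python lists is sketched below; the cluster ids come from the gold coreference annotations and None stands for the empty entity \u2205.\n\n# Abbreviated excerpt of the formulation in Table 1 (gold cluster ids; None = no entity).\ntokens   = [\"The\", \"U.S.\", \"underestimated\", \"Noriega\", \"says\", \"Ambler\", \"Moss\"]\nentities = [73,    73,     None,             82,        None,   50,       50]\n\n# LAMBADA has no coreference annotations, so every token gets the empty entity:\nlambada_entities = [None] * len(tokens)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Preprocessing",
"sec_num": "4.2"
},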
{
"text": "For NER, we formulate the data in a IOB format to facilitate a similar model architecture as described in Lample et al. (2016) , using the gold named entities of the dataset, including nested entities.",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets and Preprocessing",
"sec_num": "4.2"
},
{
"text": "To evaluate the results of our Entity-Transformers architecture and the effects of corereference annotations to language modelling, we measure the change in performance of the language model using Perplexity (PPL). Furthermore, we compute the average difference of the representations between mentions of the same entity of the GPT2E model, between each entity mention between GPT2 and GPT2E and between non-entity mentions of the same words using cosine similarity. Furthermore, we use microaverage Precision, Recall and F1 scores for the evaluation of our NER models. For Language modelling, Table 2 , shows the training and validation losses of GPT2 and GPT2E, as well as the Perplexity of the models after 10e5 training steps. The gradual changes in training and validation losses, measured every 10e3 steps, are illustrated in Figures 2 & 3 with GPT2 model in orange and GPT2E model in blue colours respectively. Similarly, Table 3 highlights the performance difference between the two trained models on the LAMBADA dataset. As both models are trained on a very limited dataset compared to other language models, we are not comparing performance in terms of accuracy. In terms of Perplexity, the models show similar performances on the CoNLL 2012 dataset, while having a slight advantage at the LAMBADA dataset. The slight improvement in Perplexity of the GPT2E model over the GPT2 on the LAMBADA dataset is attributed to the target words' part-of-speech type. As described in Section 4.2, the target words of the LAMBADA dataset are mostly proper nouns and common nouns and the majority of the training mentions in the CoNLL-2012 dataset are of the same type. This behaviour is consistent with the expectations of the performance of an entity-centric language model. Both GPT2 and GPT2E models show a remarkably low Perplexity compared to EntityNLM, YangLM and SetLM of reported Perplexity 161.64, 114 and 107 respectively. However, these language models are RNN based, and gap between them is attributed to the Transformers architecture and the relatively small size of the CoNLL-2012 dataset. The added complexity of calculating the entity representations and using the Entity-Transformer blocks is contributing to 0.008 seconds per step in both training and evaluation, adding up to an additional 12 min and 6 seconds, a 2% increase in time for the complete training process.",
"cite_spans": [],
"ref_spans": [
{
"start": 594,
"end": 601,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 832,
"end": 845,
"text": "Figures 2 & 3",
"ref_id": "FIGREF1"
},
{
"start": 929,
"end": 936,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "To compare the changes in the entity mention representations when using coreference information during training we conducted a series of experiments, taking into account the existence or absence of coreference annotation. Specifically, for both models, for each entity we calculate the average similarity of its mentions with the other entity mentions, with and without the use of entity representations for GPT2E, and the average similarity between the entity representation and the entity mentions. We have limited the scope of the comparisons, using part-of-speech tags, to only nouns and proper nouns, as these will be the words that will be affected the most by our changes, given the dataset statistics presented in Section 4.2. Similarly, we calculated the average cosine similarity between the pronoun's representations of the two models as well as the differences between the two when entity representations are present. Based on the results displayed in Table 4 , we can infer that the mentions maintain their similarity when the coreference information are used during inference, while also have a higher average similarity than the respective mentions of the model trained without coreference annotations. However, taking into account the changing similarity scores between the entity representations and the entity mentions when we use coreference information during inference, we can conclude that there is a constant change to the representations. In the case of nouns and pronouns, that change brings the representations closer while in pronouns it has the opposite effect. Individual visual representations of the embeddings for GPT2E and GPT2 and a comparative visual representation between the two are included in the appendix section. The NER model, trained using word representations from GPT2E, achieved a mean average 3% F1 increase than the one trained with GPT2 word representations. We highlight four named entities in Table 5, which showed the biggest differences between the two trained models. Specifically, we observe that the named entities of PERSON and PRODUCT, which would be directly affected by the anaphoric information in the training process, showed the greatest increase and contributed the most to the per-formance boost. Subsequently, EVENT entities were more commonly mislabelled while using GPT2E representations. This behaviour is credited to the use of LOCATION terms to describe events (e.g. \"the Guangzhou Fair\") and to generic event terms that refer to different entities based on their context (e.g. \"new year\" can refer to a different year) which the baseline model was unable to handle correctly when the word representations were affected by entity information.",
"cite_spans": [],
"ref_spans": [
{
"start": 964,
"end": 971,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "In this paper we demonstrated a novel architecture to use coreference information in transformer-based neural language models in order to create richer representations and its effects on downstream tasks. We introduced an extension over the Transformer blocks of GPT2, labeled Entity-Transformer, that integrates coreference information to each entity mention. To that end, we also created an entity handling mechanism to create and update entity representations. Furthermore, as our proposed architecture extends over the basic Transformer block, it can be easily adapted to other Transformer-based language models, such as BERT, and also enables further research for Transformer-based language models with explicit entity decisions which have far outperformed their RNN counterparts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "6"
},
{
"text": "In our experiments we showcased that in terms of Language modelling, both GPT2E and GPT2, when trained on the same data, have indistinguishable performance in terms of Perplexity and GPT2E has a small computational cost that translates into a slightly longer training time. However, the difference in the similarity between entity mention representations suggests that fewer iterations and mentions of each word are required to achieve the results, assuming a large enough number of mentions. This is due to the extended contextual information present at each mention occurrence, in the form of entity representations, used when training the model. What is more, the differences in these representations directly translates to an increase in tasks such as Named Entity Recognition. As coreference is everpresent in natural language, with a better ability for a language model to understand and utilize the anaphoric phenomenon in text, we expect an increased performance in other tasks such as summarization and natural language inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "6"
},
{
"text": "In order for language models to use coreference information, there are two requirements that need to be met. First, the models need to replace the Transformer blocks with the Entity-Transformer blocks introduced and also adopt the entity handling mechanism to make use of entity information. Second, annotated coreference information are required throughout the training corpus. While the changes described for the language models are trivial, language models require an enormous amount of training data, making it impossible to manually annotate coreference information. However, the entity handling mechanism we introduced is not affected by the lack of entity information in the training and is only boosted by the existence of them. As a result, even sparse annotations of high confidence will allow for improvements in the representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "6"
},
{
"text": "In the future, we plan to extend our work, using noisy annotation provided by pretrained coreference resolvers so that we can train GPT2E to the WikiText dataset (Merity et al., 2018) , creating a comparable model with the original GPT2 and other state-of-the-art language models in a wider range of tasks. Furthermore, we aim to expand the abilities of our current approach to be able to make explicit entity decisions, similar to the previously cited work. For that purpose, attention head probing techniques, which have been found to model some anaphoric phenomena (Clark et al., 2019) , and transfer learning through weight initialization from a pre-trained GPT2 model will be investigated as they can contribute to significant improvements while needing less annotated training data.",
"cite_spans": [
{
"start": 162,
"end": 183,
"text": "(Merity et al., 2018)",
"ref_id": "BIBREF11"
},
{
"start": 568,
"end": 588,
"text": "(Clark et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusions",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This research is co-financed by Greece and the European Union (European Social Fund-ESF) through the Operational Programme \"Human Resources Development, Education and Lifelong Learning\" in the context of the project \"Strengthening Human Resources Research Potential via Doctorate Research\" (MIS-5000432), implemented by the State Scholarships Foundation (IKY).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Layer normalization",
"authors": [
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
},
{
"first": "Jamie",
"middle": [
"Ryan"
],
"last": "Kiros",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.06450"
]
},
"num": null,
"urls": [],
"raw_text": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Tom B Brown",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Nee- lakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "What does BERT look at? an analysis of BERT's attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "276--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Accessing higherlevel representations in sequential transformers with feedback memory",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Thibaut",
"middle": [],
"last": "Lavril",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.09402"
]
},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Thibaut Lavril, Edouard Grave, Armand Joulin, and Sainbayar Sukhbaatar. 2020. Accessing higher- level representations in sequential transformers with feedback memory. arXiv preprint arXiv:2002.09402.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Identity mappings in deep residual networks",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "European conference on computer vision",
"volume": "",
"issue": "",
"pages": "630--645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Identity mappings in deep residual networks. In European conference on computer vision, pages 630-645. Springer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dynamic entity representations in neural language models",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Chenhao",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Martschat",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1830--1839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity repre- sentations in neural language models. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1830-1839, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Entity decisions in neural language modelling: Approaches and problems",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Kunz",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Hardmeier",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Computational Models of Reference, Anaphora and Coreference",
"volume": "",
"issue": "",
"pages": "15--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Kunz and Christian Hardmeier. 2019. Entity decisions in neural language modelling: Approaches and problems. In Proceedings of the Second Workshop on Computational Models of Reference, Anaphora and Coreference, pages 15-19.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learned in translation: Contextualized word vectors",
"authors": [
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6294--6305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextual- ized word vectors. In Advances in Neural Information Processing Systems, pages 6294-6305.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An analysis of neural language modeling at multiple scales",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.08240"
]
},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The LAMBADA dataset: Word prediction requiring a broad discourse context",
"authors": [
{
"first": "Denis",
"middle": [],
"last": "Paperno",
"suffix": ""
},
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Quan"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Sandro",
"middle": [],
"last": "Pezzelle",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1525--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denis Paperno, Germ\u00e1n Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern\u00e1ndez. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1525-1534, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL-Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes. In Joint Conference on EMNLP and CoNLL-Shared Task, pages 1-40. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving lan- guage understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Turing-nlg: A 17-billion-parameter language model by microsoft",
"authors": [
{
"first": "C",
"middle": [],
"last": "Rosset",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Rosset. 2019. Turing-nlg: A 17-billion-parameter language model by microsoft. Microsoft Blog.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Megatron-lm: Training multi-billion parameter language models using gpu model parallelism",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Shoeybi",
"suffix": ""
},
{
"first": "Mostofa",
"middle": [],
"last": "Patwary",
"suffix": ""
},
{
"first": "Raul",
"middle": [],
"last": "Puri",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Legresley",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Casper",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Catanzaro",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.08053"
]
},
"num": null,
"urls": [],
"raw_text": "Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using gpu model parallelism. arXiv preprint arXiv:1909.08053.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dropout: a simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient transformers: A survey",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Mostafa",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Dara",
"middle": [],
"last": "Bahri",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.06732"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Tay, Mostafa Dehghani, Dara Bahri, and Donald Metzler. 2020. Efficient transformers: A survey. arXiv preprint arXiv:2009.06732.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Ontonotes: A large training corpus for enhanced processing. Handbook of Natural Language Processing and Machine Translation",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Belvin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel, Eduard Hovy, Mitchell Marcus, Martha Palmer, Robert Belvin, Sameer Pradhan, Lance Ramshaw, and Nianwen Xue. 2011. Ontonotes: A large training corpus for enhanced processing. Handbook of Natural Language Processing and Machine Translation. Springer, page 59.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Reference-aware language models",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1850--1859",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Phil Blunsom, Chris Dyer, and Wang Ling. 2017. Reference-aware language models. In Pro- ceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1850-1859, Copenhagen, Denmark, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Russ",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5754--5764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xl- net: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754-5764.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "ERNIE: Enhanced language representation with informative entities",
"authors": [
{
"first": "Zhengyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1441--1451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced lan- guage representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441-1451, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "(left) Entity-Transformer Block (right) Entity Attention mechanism"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Training loss per step on the CoNLL 2012 dataset."
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Validation loss per step on the CoNLL 2012 dataset."
},
"TABREF0": {
"content": "<table><tr><td colspan=\"11\">X 1:11 \" The U.S. underestimated Noriega all along \" says Ambler Moss</td></tr><tr><td colspan=\"2\">E 1:11 \u2205 73</td><td>73</td><td>\u2205</td><td>82</td><td>\u2205</td><td>\u2205</td><td>\u2205</td><td>\u2205</td><td>50</td><td>50</td></tr><tr><td colspan=\"11\">X 12:23 a former Ambassador to Panama . \" He has mastered the art</td></tr><tr><td>E 12:23 50</td><td>50</td><td>50</td><td>50</td><td>50</td><td colspan=\"2\">\u2205 \u2205 82</td><td>\u2205</td><td>\u2205</td><td>\u2205</td><td>\u2205</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "Data example from the CoNLL 2012 dataset, as formated for the task."
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"3\">Perplexity and Validation loss</td><td/></tr><tr><td/><td colspan=\"3\">on the CoNLL 2012 dataset</td><td/></tr><tr><td>Process</td><td>GPT2E PPL Loss</td><td>Time per step</td><td>GPT2 PPL Loss</td><td>Time per step</td></tr><tr><td colspan=\"2\">Training 5.52 1.71</td><td colspan=\"3\">0.290s 4.80 1.57 0.298s</td></tr><tr><td colspan=\"5\">Validation 1.20 0.187 0.290s 1.19 0.184 0.298s</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF2": {
"content": "<table><tr><td colspan=\"2\">Perplexity performance</td></tr><tr><td colspan=\"2\">on the LAMBADA dataset</td></tr><tr><td colspan=\"2\">Model Perplexity</td></tr><tr><td>GPT2E</td><td>196.81</td></tr><tr><td>GPT2</td><td>219.97</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF3": {
"content": "<table><tr><td>Experiments</td><td>GPT2E without Entities</td><td>GPT2E with Entities</td><td>GPT2</td></tr><tr><td>Average mention similarity NN,NNS,NNP,NNPS</td><td>0.7117</td><td>0.7117</td><td>0.6971</td></tr><tr><td>Average entity similarity NN,NNS,NNP,NNPS</td><td>0.0489</td><td>0.0513</td><td>-0.0164</td></tr><tr><td>Average mention similarity PRP,PRP$</td><td>0.8250</td><td>0.8250</td><td>0.7928</td></tr><tr><td>Average entity similarity PRP,PRP$</td><td>0.0619</td><td>0.0566</td><td>-0.0173</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "Cosine similarity of mention representations and their entities in different scenarios"
},
"TABREF4": {
"content": "<table><tr><td>Labels</td><td>F1</td><td>GPT2 Prec</td><td>Recall</td><td>F1</td><td colspan=\"2\">GPT2E Precision Recall</td></tr><tr><td>PERSON</td><td>48%</td><td colspan=\"3\">95.5% 32.5 % 51.5%</td><td>94%</td><td>35.5%</td></tr><tr><td>PRODUCT</td><td>8%</td><td>33%</td><td colspan=\"2\">4.5 % 23.5%</td><td>90%</td><td>13.5%</td></tr><tr><td>EVENT</td><td>23%</td><td colspan=\"2\">83.5% 13.5%</td><td>15%</td><td>75%</td><td>8.5%</td></tr><tr><td>CARDINAL</td><td>28%</td><td colspan=\"2\">81.5% 17.5%</td><td>34%</td><td>75%</td><td>23%</td></tr><tr><td>NORP</td><td colspan=\"2\">44.5% 72.5%</td><td>36%</td><td>48%</td><td>79%</td><td>39.5%</td></tr><tr><td>Overall</td><td>54%</td><td>87%</td><td>39%</td><td>57%</td><td>88%</td><td>42%</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "NER performance using GPT2 and GPT2E representations as input."
}
}
}
}