{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:21:41.927274Z"
},
"title": "Sequence to Sequence Coreference Resolution",
"authors": [
{
"first": "Gorka",
"middle": [],
"last": "Urbizu",
"suffix": "",
"affiliation": {
"laboratory": "Ixa NLP group",
"institution": "University of the Basque Country",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Elhuyar",
"middle": [],
"last": "Fundation",
"suffix": "",
"affiliation": {
"laboratory": "Ixa NLP group",
"institution": "University of the Basque Country",
"location": {}
},
"email": ""
},
{
"first": "Ander",
"middle": [],
"last": "Soraluze",
"suffix": "",
"affiliation": {
"laboratory": "Ixa NLP group",
"institution": "University of the Basque Country",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Olatz",
"middle": [],
"last": "Arregi",
"suffix": "",
"affiliation": {
"laboratory": "Ixa NLP group",
"institution": "University of the Basque Country",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Until recently, coreference resolution has been a critical task on the pipeline of any NLP task involving deep language understanding, such as machine translation, chatbots, summarization or sentiment analysis. However, nowadays, those end tasks are learned end-to-end by deep neural networks without adding any explicit knowledge about coreference. Thus, coreference resolution is used less in the training of other NLP tasks or trending pretrained language models. In this paper we present a new approach to face coreference resolution as a sequence to sequence task based on the Transformer architecture. This approach is simple and universal, compatible with any language or dataset (regardless of singletons) and easier to integrate with current language models architectures. We test it on the ARRAU corpus, where we get 65.6 F1 CoNLL. We see this approach not as a final goal, but a means to pretrain sequence to sequence language models (T5) on coreference resolution.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Until recently, coreference resolution has been a critical task on the pipeline of any NLP task involving deep language understanding, such as machine translation, chatbots, summarization or sentiment analysis. However, nowadays, those end tasks are learned end-to-end by deep neural networks without adding any explicit knowledge about coreference. Thus, coreference resolution is used less in the training of other NLP tasks or trending pretrained language models. In this paper we present a new approach to face coreference resolution as a sequence to sequence task based on the Transformer architecture. This approach is simple and universal, compatible with any language or dataset (regardless of singletons) and easier to integrate with current language models architectures. We test it on the ARRAU corpus, where we get 65.6 F1 CoNLL. We see this approach not as a final goal, but a means to pretrain sequence to sequence language models (T5) on coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Coreference resolution is a Natural Language Processing (NLP) task which consists on identifying and clustering all the expressions referring to the same real-world entity in a text. NLP tasks that include language understanding such as text summarisation (Steinberger et al., 2016; Kope\u0107, 2019) , chatbots (Agrawal et al., 2017; Zhu et al., 2018) , sentiment analysis (Krishna et al., 2017) or machine translation (Werlen and Popescu-Belis, 2017; Ohtani et al., 2019) can benefit from coreference resolution. And until recently, coreference resolution has been a critical task on the pipelines of those systems.",
"cite_spans": [
{
"start": 256,
"end": 282,
"text": "(Steinberger et al., 2016;",
"ref_id": "BIBREF34"
},
{
"start": 283,
"end": 295,
"text": "Kope\u0107, 2019)",
"ref_id": "BIBREF16"
},
{
"start": 307,
"end": 329,
"text": "(Agrawal et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 330,
"end": 347,
"text": "Zhu et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 369,
"end": 391,
"text": "(Krishna et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 415,
"end": 447,
"text": "(Werlen and Popescu-Belis, 2017;",
"ref_id": "BIBREF39"
},
{
"start": 448,
"end": 468,
"text": "Ohtani et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, with the recent rising trend of building end-to-end deep neural networks, for any NLP task where the data available in that language or domain is huge, current models are able to learn the end task without any explicit training on coreference resolution. This is even more evident in the case of the huge unsupervisedly pretrained language models (LM) that are already able to resolve coreference (Clark et al., 2019; Tenney et al., 2019) , as BERT (Devlin et al., 2019) , RoBERTa , T5 (Raffel et al., 2019 ), or GPT3 (Brown et al., 2020 which are used to boost results on any downstream task.",
"cite_spans": [
{
"start": 406,
"end": 426,
"text": "(Clark et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 427,
"end": 447,
"text": "Tenney et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 458,
"end": 479,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 495,
"end": 515,
"text": "(Raffel et al., 2019",
"ref_id": "BIBREF30"
},
{
"start": 516,
"end": 546,
"text": "), or GPT3 (Brown et al., 2020",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Those pretrained language models have also improved notably the results obtained at coreference resolution. Combining the SotA neural coreference resolution system (Lee et al., 2017) at the time with pretrained language models (ELMo, BERT, SpanBERT) improves results by a large margin.",
"cite_spans": [
{
"start": 164,
"end": 182,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite coreference resolution was already useful in NLP end tasks before the irruption of deep learning in NLP, and getting very significant improvements on the results with it, nowadays most of the tasks that require deep language understanding, are approached without having coreference resolution in mind.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Src: Even the smallest person can change the course of history . Trg: In this paper, we introduce a new approach to solve coreference resolution as a sequence to sequence task (as shown in Table 1 ) using a Transformer (Vaswani et al., 2017) , that opens a path towards unifiying the approaches used in coreference resolution with the trending pretrained LMs and other NLP tasks, while simplifying the neural architecture used for coreference resolution.",
"cite_spans": [
{
"start": 219,
"end": 241,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 189,
"end": 196,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(0 0) (1 (2)|1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We test our approach on the English ARRAU corpus (Uryupina et al., 2020) , which includes singletons. We train our model on coreference resolution as a sequence to sequence task, where the neural network learns to produce the coreference relations as output from the raw text in the source.",
"cite_spans": [
{
"start": 49,
"end": 72,
"text": "(Uryupina et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the following Section 2 we review the state of the art of the field. In Section 3 we describe how we approached coreference resolution as a sequence to sequence task, we present the neural architecture and corpora we used. In Section 4 we report our results, and lastly, we present our conlusions and future work in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The SotA for English coreference resolution, improved a lot since the revolution of deep learning in NLP. The first end-to-end neural model (Lee et al., 2017) obtained big improvements over previous models.",
"cite_spans": [
{
"start": 140,
"end": 158,
"text": "(Lee et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the Art",
"sec_num": "2"
},
{
"text": "Since then, pretrained LMs improved a lot those results; adding ELMo (Peters et al., 2018) , BERT (Devlin et al., 2019) and SpanBert (Joshi et al., 2020) to the model, improved by a large margins the SotA at the moment Kantor and Globerson, 2019; Joshi et al., 2020) .",
"cite_spans": [
{
"start": 69,
"end": 90,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 98,
"end": 119,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 133,
"end": 153,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF13"
},
{
"start": 219,
"end": 246,
"text": "Kantor and Globerson, 2019;",
"ref_id": "BIBREF14"
},
{
"start": 247,
"end": 266,
"text": "Joshi et al., 2020)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the Art",
"sec_num": "2"
},
{
"text": "Furthermore, we would like to underline different approaches as reinforcement learning (Fei et al., 2019) and neural MCDM and fuzzy weighting techniques , which improved results.",
"cite_spans": [
{
"start": 87,
"end": 105,
"text": "(Fei et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the Art",
"sec_num": "2"
},
{
"text": "There have been only two works which already have tried to combine language models and coreference resolution at training. In the first one, T5 (Raffel et al., 2019) , they use coreference resolution among other tasks to train a neural language model on text to text, but the coreference task is approached as a simple binary mention-pair task, which does not reflect all the advances done at resolving coreference. In the second one, CorefQA (Wu et al., 2020) , they adress coreference resolution as query-based span prediction for which they convert coreference resolution into a QA task, where the model has to find the coreferential mentions in the text. Although they get the best results obtained to this day, their approach still uses a windowing technique of length 512, and needs to create questions automatically from the text. We should keep in mind that, apart of the well studied English language, there are lots of other less researched languages. Yet we already have neural models for some of those languages: Polish (Nito\u0144 et al., 2018) , Japanese (Shibata and Kurohashi, 2018) , French (Grobol, 2019) , Basque (Urbizu et al., 2019) , Telegu (Annam et al., 2019) , Russian (Sboev et al., 2020) Persian (Sahlani et al., 2020) and cross-linguals (Cruz et al., 2018; Kundu et al., 2018) with varied results depending on corpus sizes and architectures.",
"cite_spans": [
{
"start": 144,
"end": 165,
"text": "(Raffel et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 443,
"end": 460,
"text": "(Wu et al., 2020)",
"ref_id": "BIBREF40"
},
{
"start": 1032,
"end": 1052,
"text": "(Nito\u0144 et al., 2018)",
"ref_id": "BIBREF22"
},
{
"start": 1064,
"end": 1093,
"text": "(Shibata and Kurohashi, 2018)",
"ref_id": "BIBREF33"
},
{
"start": 1103,
"end": 1117,
"text": "(Grobol, 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1127,
"end": 1148,
"text": "(Urbizu et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 1158,
"end": 1178,
"text": "(Annam et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 1189,
"end": 1209,
"text": "(Sboev et al., 2020)",
"ref_id": "BIBREF32"
},
{
"start": 1218,
"end": 1240,
"text": "(Sahlani et al., 2020)",
"ref_id": "BIBREF31"
},
{
"start": 1260,
"end": 1279,
"text": "(Cruz et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 1280,
"end": 1299,
"text": "Kundu et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State of the Art",
"sec_num": "2"
},
{
"text": "Coreference resolution has been historically divided in two subtasks. The first one is mention detection, where posible candidates for a mention are located in the text. The second one would be to find those which have coreferential relations, among the mentions. This second task has been approached as a clustering problem, where mention-pair models evolved into entity-mention models, and their respectives ranking models. Some of this approaches have issues with making the correct global decisions, and those who handle this more appropriately, have higher computational cost. In the following subsection, we present our approach, which solves these two subtasks at once in a simpler way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence to Sequence Coreference Resolution",
"sec_num": "3"
},
{
"text": "There are many ways to annotate or indicate coreference relations on a text, such as using 2 columns, which was used on the Ontonotes corpus (Pradhan et al., 2007) for the CONLL task (Pradhan et al., 2011; Pradhan et al., 2012) . On the left we have the raw text word by word, and on the right, the coreference relations expressed in a parenthetical structure, were parenthesis are used to delimitate mentions, and numbers to refer the coreference clusters that the mentions belong.",
"cite_spans": [
{
"start": 141,
"end": 163,
"text": "(Pradhan et al., 2007)",
"ref_id": "BIBREF26"
},
{
"start": 183,
"end": 205,
"text": "(Pradhan et al., 2011;",
"ref_id": "BIBREF27"
},
{
"start": 206,
"end": 227,
"text": "Pradhan et al., 2012)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3.1"
},
{
"text": "Text: Coreference: you (0) love me",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3.1"
},
{
"text": "(1) This annotation system shows that the task is similar to sequence-labeling tasks, where the labels of the second row are not discrete. To handle this problem, we propose a sequence to sequence approach. In source we would have the raw text, and in the target, the coreference annotation corresponding to the source text in the parenthetical structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3.1"
},
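To make the parenthetical annotation concrete, here is a minimal Python sketch (our own illustration, not code from the paper; the "-" placeholder for unannotated tokens and the exact token alignment of the Table 1 example are assumptions) that recovers mention spans and clusters from a token-aligned label column:

```python
from collections import defaultdict

def parse_parenthetical(labels):
    """Recover {cluster_id: [(start, end), ...]} from a token-aligned parenthetical
    coreference column, with labels such as '(0', '0)', '(1)', '(2)|1)' or '-'."""
    clusters = defaultdict(list)
    open_mentions = []                                  # stack of (cluster_id, start_index)
    for i, label in enumerate(labels):
        for part in label.split("|"):                   # a label may hold several relations
            if part in ("", "-"):
                continue
            if part.startswith("(") and part.endswith(")"):
                clusters[int(part[1:-1])].append((i, i))        # single-token mention
            elif part.startswith("("):
                open_mentions.append((int(part[1:]), i))        # mention opens here
            elif part.endswith(")"):
                cid = int(part[:-1])
                for k in range(len(open_mentions) - 1, -1, -1): # close most recent match
                    if open_mentions[k][0] == cid:
                        clusters[cid].append((open_mentions.pop(k)[1], i))
                        break
    return dict(clusters)

# Table 1 example; the token-level placement of the labels is assumed here:
tokens = "Even the smallest person can change the course of history .".split()
labels = ["(0", "-", "-", "0)", "-", "-", "(1", "-", "-", "(2)|1)", "-"]
spans = parse_parenthetical(labels)
print({cid: [" ".join(tokens[s:e + 1]) for s, e in sp] for cid, sp in spans.items()})
# -> {0: ['Even the smallest person'], 2: ['history'], 1: ['the course of history']}
```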
{
"text": "To make the task easier to learn, as there are many equivalent ways to represent the same coreference relations, we rewrite all the numbers referring to coreference clusters in the training dataset, with ascendent numbers starting from 0, from left to right, keeping the coreference relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3.1"
},
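A minimal sketch of this renumbering step (our own illustration; the function name and the regular-expression approach are assumptions, not the authors' implementation):

```python
import re

def renumber_clusters(labels):
    """Rewrite cluster ids in a parenthetical label sequence so that they
    ascend from 0 in order of first appearance (left to right), preserving links."""
    mapping = {}

    def new_id(match):
        old = match.group(0)
        if old not in mapping:
            mapping[old] = str(len(mapping))
        return mapping[old]

    return [re.sub(r"\d+", new_id, label) for label in labels]

# A target sequence that used arbitrary cluster numbers:
print(renumber_clusters(["(7", "7)", "(3)", "-", "(7", "(12)|7)"]))
# -> ['(0', '0)', '(1)', '-', '(0', '(2)|0)']
```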
{
"text": "We choose the architecture of Transformer, as it gives good results for many sequence to sequence tasks. Although keeping source and target sequences of the same length helps the model to create the outputs of the correct length, this creates the problem of huge vocabularies in source and target, which makes training the model harder, and more memory consuming.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model",
"sec_num": "3.2"
},
{
"text": "To solve this issue, we use fixed vocabularies on source and target sequences. On source, we use BPE (Bojanowski et al., 2017) to segment words in subword units, with which we get a small closed vocabulary of 16K tokens. On target, we divide the labels of coreference resolution which contains more than one coreference relation within it, so that we avoid conplex labels, as (8)|122)|68)|128), which are hard to learn correctly: (8) | 122) | 68) | 128). Doing this, we decrease the size of the target vocabulary significantly (1.7K).",
"cite_spans": [
{
"start": 101,
"end": 126,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model",
"sec_num": "3.2"
},
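The target-side label division can be illustrated with a short sketch (again our own, not the released code), where each compound label is split on "|" and the "|" separator becomes its own target token:

```python
def split_compound_labels(labels):
    """Divide target labels that pack several coreference relations (joined by '|')
    into separate target tokens, keeping '|' itself as a token."""
    out = []
    for label in labels:
        for k, part in enumerate(label.split("|")):
            if k > 0:
                out.append("|")
            out.append(part)
    return out

print(split_compound_labels(["(0", "0)", "(8)|122)|68)|128)"]))
# -> ['(0', '0)', '(8)', '|', '122)', '|', '68)', '|', '128)']
```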
{
"text": "Src: Even the small@@ est person can change the course of history . Trg: As we can see in the example above, the aligment that we got previously is gone, so the model will have to learn to align source and target tokens, which a Transformer should do easily, as seen in tasks such as machine translation with this architecture. Furthermore, with those changes the source and target vocabularies sizes decrease a lot, making easier to understand the text and produce correct target tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model",
"sec_num": "3.2"
},
{
"text": "(0 0) (1 (2) | 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model",
"sec_num": "3.2"
},
{
"text": "We do not use any pretrained word embeddings or LMs, or any other linguistic, distance or speaker features. We have choosen fairseq implementation of the Transformer with standard hyperparameters. We set the max length of the source and target sequences at 1024. As coreference resolution is a document level task, it might happen that the document that we want to process has more than 1024 tokens in source or target after applying BPE and labels division. To handle that, a model with longer sequences should be trained (increasing significantly memory requirements), or a windowing strategy could be used. But we do not try any of this here, to keep computational costs low 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer Model",
"sec_num": "3.2"
},
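As an illustration of such a setup (a sketch under assumptions: the paper only says "standard hyperparameters", so the recipe below is a common fairseq transformer configuration rather than the authors' exact one, and the data-bin path is hypothetical):

```python
import subprocess

# Assumed layout: data-bin/ holds the binarized src/trg pairs produced by
# fairseq-preprocess from the BPE'd source text and the split target labels.
train_cmd = [
    "fairseq-train", "data-bin",
    "--arch", "transformer",
    "--max-source-positions", "1024",
    "--max-target-positions", "1024",
    "--optimizer", "adam", "--adam-betas", "(0.9, 0.98)",
    "--lr", "5e-4", "--lr-scheduler", "inverse_sqrt", "--warmup-updates", "4000",
    "--criterion", "label_smoothed_cross_entropy", "--label-smoothing", "0.1",
    "--dropout", "0.3",
    "--max-tokens", "4096",
    "--save-dir", "checkpoints/text2cor",
]
subprocess.run(train_cmd, check=True)
```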
{
"text": "We tested our approach on the ARRAU corpus (Uryupina et al., 2020) , an English dataset which includes singletons. They had been ignored due to the division on mention detection and clustering tasks, and the specific corpora made for the second one. We train our Transformer model just to carry out both tasks at once. We used all coreference relations of the dataset. The corpus has 350K words, and its already divided on train, dev and test subsets.",
"cite_spans": [
{
"start": 43,
"end": 66,
"text": "(Uryupina et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.3"
},
{
"text": "As we do not add any pretrained word embeddings or any LMs to the model, the ARRAU corpus is not big enough to learn the task of language understanding in the encoder part and it has a limited vocabulary in the training. Thus, we used an auxiliary corpus for the training. We chose PreCo corpus, which is an English coreference corpus of over 10M words, which also includes singletons (Chen et al., 2018) . Both datasets were converted to the mentioned two column format from their respective enriched annotations.",
"cite_spans": [
{
"start": 385,
"end": 404,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.3"
},
{
"text": "We used data augmentation to increase the amount of training instances. For this purpose, we took all the combinations of consecutive sentences for the training. Given the document S A \u2212 S Z , where S is a sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.4"
},
{
"text": "S A , S A -S B , ..., S A -S B -S C -...-S Z ; S B , S B -S C , ... S B -S C -S D -...-S Z ; ...; S Y , S Y -S Z ; S Z .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.4"
},
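A compact sketch of this augmentation (our own illustration with assumed names; it also applies the cluster renumbering of Section 3.1 to every augmented instance, which is what produces the different target numbers discussed below):

```python
import re

def renumber(labels):
    """Rewrite cluster ids so they ascend from 0 in order of first appearance."""
    seen = {}
    return [re.sub(r"\d+", lambda m: seen.setdefault(m.group(), str(len(seen))), lab)
            for lab in labels]

def augment(sentences):
    """Yield every span of consecutive sentences of a document as one training instance."""
    for i in range(len(sentences)):
        for j in range(i, len(sentences)):
            span = [pair for sent in sentences[i:j + 1] for pair in sent]
            src = [tok for tok, _ in span]
            trg = renumber([lab for _, lab in span])
            yield src, trg

# A document as lists of (token, parenthetical label) pairs (toy example):
doc = [
    [("You", "(7)"), ("love", "-"), ("cats", "(3)"), (".", "-")],
    [("I", "(9)"), ("love", "-"), ("cats", "(3)"), (".", "-")],
]
for src, trg in augment(doc):
    print(" ".join(src), "->", " ".join(trg))
# "cats" is cluster 1 in every instance here, but e.g. "I" gets id 2 in the
# two-sentence instance and id 0 when its sentence is used alone.
```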
{
"text": "With this technique, we do not improve much the dataset for source sequences, as it would be the same sentences repeated in different lengths. However, the repeated parts of the sequences in the source, would have their coreference relations represented by different numbers in the target sequences: : Training sequences after data augmentation, and its effect on the target cluster numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S A -S B -S C Src: You love cats . I love cats . My dog hates cats . S A -S B -S C Trg: (0) (1) (2) (1) (3 | (2) 3) (1) S B -S C Src: I love cats . My dog hates cats . S B -S C Trg: (0) (1) (2 | (0) 2) (1) S C Src: My dog hates cats . S C Trg: (0 | (1) 0)",
"eq_num": "(2)"
}
],
"section": "Data Augmentation",
"sec_num": "3.4"
},
{
"text": "Furthermore, having sequences of a single sentence in the training, makes the beginning of the learning process easier. Later, the model will be able to learn to resolve coreference for whole documents at once.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "3.4"
},
{
"text": "Once we get the output prediction sequences, we need to post-process a bit the output with the 3 following processes. First, we correct the unclosed (or unopened) patenthesis or mentions, deleting them. Then, we group the different coreference relations referring to the same token again (just removing the space between each of the | in the output). Finally, we correct the length of the output sequence, removing tokens, or adding extra \" \" tokens at the end until it matches the length of the source text. We can see the changes made to the predicted sequence at post-procesing in the following example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing",
"sec_num": "3.5"
},
{
"text": "Even the small@@ est person can change the course of history . Trg: Table 7 : Example of the post-procesing applied to the predicted sequences.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Src:",
"sec_num": null
},
{
"text": "(0 0) (1 (2) | 1) Pred: (0 0) (1 (2 (3) | 1) Post: (0 0) (1 (3)|1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Src:",
"sec_num": null
},
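A rough sketch of these post-processing steps (our own reconstruction from the description in Section 3.5; the regrouping is applied before the bracket repair here for convenience, and the "-" padding token is an assumption):

```python
def balance(labels):
    """Delete mention brackets that are never closed or never opened."""
    keep = [[p for p in lab.split("|") if p not in ("", "-")] for lab in labels]
    opens = []                                   # ("(id", token index) still waiting to close
    for i, parts in enumerate(keep):
        for p in list(parts):
            if p.startswith("(") and not p.endswith(")"):
                opens.append((p, i))
            elif p.endswith(")") and not p.startswith("("):
                cid = "(" + p[:-1]
                hit = next((k for k in range(len(opens) - 1, -1, -1) if opens[k][0] == cid), None)
                if hit is None:
                    parts.remove(p)              # closing with no opening: drop it
                else:
                    opens.pop(hit)
    for p, i in opens:                           # openings that never close: drop them
        keep[i].remove(p)
    return ["|".join(parts) if parts else "-" for parts in keep]

def regroup(tokens):
    """Merge label pieces that the model produced around a separate '|' token."""
    out, glue = [], False
    for tok in tokens:
        if tok == "|" and out:
            out[-1] += "|"
            glue = True
        elif glue and out:
            out[-1] += tok
            glue = False
        else:
            out.append(tok)
    return out

def postprocess(pred, src_len, pad="-"):
    """Apply both corrections and force the output to the source length."""
    fixed = balance(regroup(pred))
    return (fixed + [pad] * src_len)[:src_len]

# The Table 7 prediction: '(2' is never closed, so it is removed.
pred = ["(0", "0)", "(1", "(2", "(3)", "|", "1)"]
print(postprocess(pred, src_len=12))
# -> ['(0', '0)', '(1', '-', '(3)|1)', '-', '-', '-', '-', '-', '-', '-']
```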
{
"text": "1 We trained the model on a single Nvidia Rtx 2080Ti GPU (11GB) for 24h.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Src:",
"sec_num": null
},
{
"text": "For the evaluation of our new sequence to sequence approach and the transformer model we built, we use the coreference official scorer (Pradhan et al., 2014) to get the results of the most used metrics on the task on the ARRAU testing split. We obtain 77.2 F1 at mention detection (MD), 64.9 F1 at MUC, 66.5 F1 at B 3 , 65.3 F1 at CEAF e and 65.6 F1 on the CoNLL metric. They are quite good results for a simple approach which does not use any external information as pretrained word embeddings or LMs, or any linguistic, distance or speaker features other than the auxiliary dataset we used, which just added the amount of raw text and its coreferential relations we had. Our model is able to detect most of the mentions, including singletons, and it does cluster correctly correferential mentions to a certain extent, including those that are at a very long distance 2 . Table 8 : Our F1 results in comparison with previous best results on the ARRAU dataset.",
"cite_spans": [
{
"start": 135,
"end": 157,
"text": "(Pradhan et al., 2014)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 873,
"end": 880,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
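As a quick arithmetic check (our own note), the reported CoNLL score is the unweighted average of the MUC, B3 and CEAFe F1 values:

```python
# CoNLL F1 is the average of the MUC, B3 and CEAFe F1 scores reported above.
muc, b3, ceafe = 64.9, 66.5, 65.3
print(round((muc + b3 + ceafe) / 3, 1))  # -> 65.6
```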
{
"text": "The best results on the ARRAU dataset are those presented at Yu et al. (2020) . Results obtained in this work are not completely comparable with our work, as we do not process documents longer than 1024 tokens (\u223c800 words, keeping 72% of the documents), while they only test their system with the RST subset of the test set. However, we include the comparison in table 8, to put our results into context, and as we can see, we are not able to match their results.",
"cite_spans": [
{
"start": 61,
"end": 77,
"text": "Yu et al. (2020)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "All in all, in this work we present a novel approach, as far as we know, the first time where coreference resolution has been learned as a simple sequence to sequence task, using just a Transformer, an architecture that rules the NLP field. We got 65.6 F1 CoNLL on the ARRAU corpus, and despite not getting the best results on the dataset, we proved that a Transformer is enough to learn the task, from raw text, without any features or pre-trained word-embeddings or LMs. The results obtained are quite good, as this approach have room for improvements at architecture level, hyperparameter tuning, and the integration of pretrained LMs. This approach may help at unifing the coreference resolution with other NLP models, where this task could be used at pretraining sequence to sequence LMs (T5). Our code and model are available at: https://github.com/gorka96/text2cor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "There are many aspects of this approach worth to continue researching. To begin with, we limited the maximum length of the sequences to 1024 tokens for simplicity, nevertheless, to be able to process longer documents, we will need to train Transformer models with longer maximum positions. To handle the increment in memory and computational costs, architectures that do not use full attention as reformer (Kitaev et al., 2020) or longformer (Beltagy et al., 2020 ) could be considered. Moreover, we would like to verify that this method is as universal as we said here, trying datasets without singletons, lowresourced languages, and multilingual or cross-lingual settings. Finally, using this approach to train a sequence to sequence language model like T5, would be interesting.",
"cite_spans": [
{
"start": 406,
"end": 427,
"text": "(Kitaev et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 442,
"end": 463,
"text": "(Beltagy et al., 2020",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "5"
},
{
"text": "Sample of the output: https://github.com/gorka96/text2cor/blob/main/pred_example.txt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was partially supported by the Department of Industry of the Basque Government (Deep-Text project, KK-2020/00088) and by the European Commission (LINGUATEC project, EFA227/16). We thank the three anonymous reviewers whose comments and suggestions contributed to improve this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Are word embedding and dialogue act class-based features useful for coreference resolution in dialogue",
"authors": [
{
"first": "Samarth",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Joe",
"middle": [
"Cheri"
],
"last": "Ross",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Harshawardhan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wabgaonkar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of PACLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samarth Agrawal, Aditya Joshi, Joe Cheri Ross, Pushpak Bhattacharyya, and Harshawardhan M Wabgaonkar. 2017. Are word embedding and dialogue act class-based features useful for coreference resolution in dialogue. In Proceedings of PACLING.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Anaphora resolution in dialogue systems for south asian languages",
"authors": [
{
"first": "Vinay",
"middle": [],
"last": "Annam",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Koditala",
"suffix": ""
},
{
"first": "Radhika",
"middle": [],
"last": "Mamidi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.09994"
]
},
"num": null,
"urls": [],
"raw_text": "Vinay Annam, Nikhil Koditala, and Radhika Mamidi. 2019. Anaphora resolution in dialogue systems for south asian languages. arXiv preprint arXiv:1911.09994.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Longformer: The long-document transformer",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.05150"
]
},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Language models are few-shot learners",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Tom B Brown",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Mann",
"suffix": ""
},
{
"first": "Melanie",
"middle": [],
"last": "Ryder",
"suffix": ""
},
{
"first": "Jared",
"middle": [],
"last": "Subbiah",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Dhariwal",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Neelakantan",
"suffix": ""
},
{
"first": "Girish",
"middle": [],
"last": "Shyam",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Sastry",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Askell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.14165"
]
},
"num": null,
"urls": [],
"raw_text": "Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Nee- lakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Preco: A large-scale dataset in preschool vocabulary for coreference resolution",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhenhua",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Yuille",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Rong",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "172--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong Chen, Zhenhua Fan, Hao Lu, Alan Yuille, and Shu Rong. 2018. Preco: A large-scale dataset in preschool vocabulary for coreference resolution. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 172-181.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "What does bert look at? an analysis of bert's attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "276--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does bert look at? an analysis of bert's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpret- ing Neural Networks for NLP, pages 276-286.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Exploring Spanish corpora for Portuguese coreference resolution",
"authors": [
{
"first": "Andr\u00e9 Ferreira",
"middle": [],
"last": "Cruz",
"suffix": ""
},
{
"first": "Gil",
"middle": [],
"last": "Rocha",
"suffix": ""
},
{
"first": "Henrique",
"middle": [
"Lopes"
],
"last": "Cardoso",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS)",
"volume": "",
"issue": "",
"pages": "290--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 Ferreira Cruz, Gil Rocha, and Henrique Lopes Cardoso. 2018. Exploring Spanish corpora for Portuguese coreference resolution. In 2018 Fifth International Conference on Social Networks Analysis, Management and Security (SNAMS), pages 290-295. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "End-to-end deep reinforcement learning based coreference resolution",
"authors": [
{
"first": "Hongliang",
"middle": [],
"last": "Fei",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dingcheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "660--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongliang Fei, Xu Li, Dingcheng Li, and Ping Li. 2019. End-to-end deep reinforcement learning based corefer- ence resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 660-665.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural coreference resolution with limited lexical context and explicit mention detection for oral french",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Grobol",
"suffix": ""
}
],
"year": 2019,
"venue": "Second Workshop on Computational Models of Reference, Anaphora and Coreference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Grobol. 2019. Neural coreference resolution with limited lexical context and explicit mention detection for oral french. In Second Workshop on Computational Models of Reference, Anaphora and Coreference, page 8.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Coreference resolution using neural mcdm and fuzzy weighting technique",
"authors": [
{
"first": "Samira",
"middle": [],
"last": "Hourali",
"suffix": ""
},
{
"first": "Morteza",
"middle": [],
"last": "Zahedi",
"suffix": ""
},
{
"first": "Mansour",
"middle": [],
"last": "Fateh",
"suffix": ""
}
],
"year": 2020,
"venue": "International Journal of Computational Intelligence Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samira Hourali, Morteza Zahedi, and Mansour Fateh. 2020. Coreference resolution using neural mcdm and fuzzy weighting technique. International Journal of Computational Intelligence Systems.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bert for coreference resolution: Baselines and analysis",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5807--5812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Omer Levy, Luke Zettlemoyer, and Daniel S Weld. 2019. Bert for coreference resolution: Baselines and analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5807- 5812.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Spanbert: Improving pre-training by representing and predicting spans",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Weld",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "64--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Coreference resolution with entity equalization",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Kantor",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "673--677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Kantor and Amir Globerson. 2019. Coreference resolution with entity equalization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 673-677.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Reformer: The efficient transformer",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Kitaev",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Anselm",
"middle": [],
"last": "Levskaya",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2001.04451"
]
},
"num": null,
"urls": [],
"raw_text": "Nikita Kitaev, \u0141ukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Three-step coreference-based summarizer for polish news texts",
"authors": [
{
"first": "Mateusz",
"middle": [],
"last": "Kope\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Poznan Studies in Contemporary Linguistics",
"volume": "55",
"issue": "",
"pages": "397--443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mateusz Kope\u0107. 2019. Three-step coreference-based summarizer for polish news texts. Poznan Studies in Con- temporary Linguistics, 55(2):397-443.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A feature based approach for sentiment analysis using svm and coreference resolution",
"authors": [
{
"first": "",
"middle": [],
"last": "Hari Krishna",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Rahamathulla",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Akbar",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Conference on Inventive Communication and Computational Technologies (ICICCT)",
"volume": "",
"issue": "",
"pages": "397--399",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Hari Krishna, K Rahamathulla, and Ali Akbar. 2017. A feature based approach for sentiment analysis using svm and coreference resolution. In 2017 International Conference on Inventive Communication and Computational Technologies (ICICCT), pages 397-399. IEEE.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Neural cross-lingual coreference resolution and its application to entity linking",
"authors": [
{
"first": "Gourab",
"middle": [],
"last": "Kundu",
"suffix": ""
},
{
"first": "Avi",
"middle": [],
"last": "Sil",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "395--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gourab Kundu, Avi Sil, Radu Florian, and Wael Hamza. 2018. Neural cross-lingual coreference resolution and its application to entity linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 395-400.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "End-to-end Neural Coreference Resolution",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "188--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end Neural Coreference Resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Higher-order coreference resolution with coarse-to-fine inference",
"authors": [
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "687--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 687-692.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Deep neural networks for coreference resolution for Polish",
"authors": [
{
"first": "Bart\u0142omiej",
"middle": [],
"last": "Nito\u0144",
"suffix": ""
},
{
"first": "Pawe\u0142",
"middle": [],
"last": "Morawiecki",
"suffix": ""
},
{
"first": "Maciej",
"middle": [],
"last": "Ogrodniczuk",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)",
"volume": "",
"issue": "",
"pages": "395--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bart\u0142omiej Nito\u0144, Pawe\u0142 Morawiecki, and Maciej Ogrodniczuk. 2018. Deep neural networks for coreference resolution for Polish. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), pages 395-400.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Context-aware neural machine translation with coreference information",
"authors": [
{
"first": "Takumi",
"middle": [],
"last": "Ohtani",
"suffix": ""
},
{
"first": "Hidetaka",
"middle": [],
"last": "Kamigaito",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Workshop on Discourse in Machine Translation",
"volume": "",
"issue": "",
"pages": "45--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takumi Ohtani, Hidetaka Kamigaito, Masaaki Nagata, and Manabu Okumura. 2019. Context-aware neural ma- chine translation with coreference information. In Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019), pages 45-50.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proc. of NAACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "OntoNotes: A Unified Relational Semantic Representation",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Mitch",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the International Conference on Semantic Computing, (ICSC '07)",
"volume": "",
"issue": "",
"pages": "517--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Eduard Hovy, Mitch Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2007. OntoNotes: A Unified Relational Semantic Representation. In Proceedings of the International Conference on Semantic Computing, (ICSC '07), pages 517-526, Washington, DC, USA. IEEE Computer Society.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "CoNLL-2011 Shared Task: Modeling Unrestricted Coreference in OntoNotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, CONLL Shared Task '11",
"volume": "",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 Shared Task: Modeling Unrestricted Coreference in OntoNotes. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, CONLL Shared Task '11, pages 1-27, Portland, Oregon.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2012,
"venue": "Joint Conference on EMNLP and CoNLL -Shared Task",
"volume": "",
"issue": "",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Scoring Coreference Partitions of Predicted Mentions: A Reference Implementation",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Sameer Pradhan",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "30--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring Coreference Partitions of Predicted Mentions: A Reference Implementation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 30-35, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Coreference resolution using semantic features and fully connected neural network in the persian language",
"authors": [
{
"first": "Hossein",
"middle": [],
"last": "Sahlani",
"suffix": ""
},
{
"first": "Maryam",
"middle": [],
"last": "Hourali",
"suffix": ""
},
{
"first": "Behrouz",
"middle": [],
"last": "Minaei-Bidgoli",
"suffix": ""
}
],
"year": 2020,
"venue": "International Journal of Computational Intelligence Systems",
"volume": "13",
"issue": "1",
"pages": "1002--1013",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hossein Sahlani, Maryam Hourali, and Behrouz Minaei-Bidgoli. 2020. Coreference resolution using semantic features and fully connected neural network in the persian language. International Journal of Computational Intelligence Systems, 13(1):1002-1013.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Deep neural networks ensemble with word vector representation models to resolve coreference resolution in russian",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sboev",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rybka",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gryaznov",
"suffix": ""
}
],
"year": 2020,
"venue": "Advanced Technologies in Robotics and Intelligent Systems",
"volume": "",
"issue": "",
"pages": "35--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Sboev, R Rybka, and A Gryaznov. 2020. Deep neural networks ensemble with word vector representation models to resolve coreference resolution in russian. In Advanced Technologies in Robotics and Intelligent Systems, pages 35-44. Springer.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Entity-centric joint modeling of japanese coreference resolution and predicate argument structure analysis",
"authors": [
{
"first": "Tomohide",
"middle": [],
"last": "Shibata",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "579--589",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomohide Shibata and Sadao Kurohashi. 2018. Entity-centric joint modeling of japanese coreference resolution and predicate argument structure analysis. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 579-589.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Coreference applications to summarization",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Mijail",
"middle": [],
"last": "Kabadjov",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2016,
"venue": "Anaphora Resolution",
"volume": "",
"issue": "",
"pages": "433--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Steinberger, Mijail Kabadjov, and Massimo Poesio. 2016. Coreference applications to summarization. In Anaphora Resolution, pages 433-456. Springer.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Bert rediscovers the classical nlp pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4593--4601",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. Bert rediscovers the classical nlp pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593-4601.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Deep cross-lingual coreference resolution for lessresourced languages: The case of basque",
"authors": [
{
"first": "Gorka",
"middle": [],
"last": "Urbizu",
"suffix": ""
},
{
"first": "Ander",
"middle": [],
"last": "Soraluze",
"suffix": ""
},
{
"first": "Olatz",
"middle": [],
"last": "Arregi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Computational Models of Reference, Anaphora and Coreference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gorka Urbizu, Ander Soraluze, and Olatz Arregi. 2019. Deep cross-lingual coreference resolution for less- resourced languages: The case of basque. In Proceedings of the 2nd Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2019), co-located with NAACL 2019.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Annotating a broad range of anaphoric phenomena, in a variety of genres: the arrau corpus",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Antonella",
"middle": [],
"last": "Bristot",
"suffix": ""
},
{
"first": "Federica",
"middle": [],
"last": "Cavicchio",
"suffix": ""
},
{
"first": "Francesca",
"middle": [],
"last": "Delogu",
"suffix": ""
},
{
"first": "Kepa",
"middle": [
"J"
],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2020,
"venue": "Natural Language Engineering",
"volume": "26",
"issue": "1",
"pages": "95--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Uryupina, Ron Artstein, Antonella Bristot, Federica Cavicchio, Francesca Delogu, Kepa J Rodriguez, and Massimo Poesio. 2020. Annotating a broad range of anaphoric phenomena, in a variety of genres: the arrau corpus. Natural Language Engineering, 26(1):95-128.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Using coreference links to improve spanish-to-english machine translation",
"authors": [
{
"first": "Lesly",
"middle": [],
"last": "Miculicich Werlen",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Coreference Resolution Beyond OntoNotes",
"volume": "",
"issue": "",
"pages": "30--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lesly Miculicich Werlen and Andrei Popescu-Belis. 2017. Using coreference links to improve spanish-to-english machine translation. In Proceedings of the 2nd Workshop on Coreference Resolution Beyond OntoNotes (COR- BON 2017), pages 30-40.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Corefqa: Coreference resolution as query-based span prediction",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6953--6963",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, and Jiwei Li. 2020. Corefqa: Coreference resolution as query-based span prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6953-6963.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A cluster ranking model for full anaphora resolution",
"authors": [
{
"first": "Juntao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Uma",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juntao Yu, Alexandra Uma, and Massimo Poesio. 2020. A cluster ranking model for full anaphora resolution. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 11-20.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Lingke: a fine-grained multi-turn chatbot for customer service",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhuosheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jiangtong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yafang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "108--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengfei Zhu, Zhuosheng Zhang, Jiangtong Li, Yafang Huang, and Hai Zhao. 2018. Lingke: a fine-grained multi-turn chatbot for customer service. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 108-112.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "al., 2017) 68.8 73.0(Fei et al., 2019) 73.8(Kantor and Globerson, 2019) 76.6 77.1(Joshi et al., 2020) 79.6 (Hourali et al., 2020) 80.0(Wu et al., 2020) 83.1",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF1": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null
},
"TABREF2": {
"num": null,
"content": "<table><tr><td colspan=\"2\">Source: You love me</td></tr><tr><td>Target: (0)</td><td>(1)</td></tr></table>",
"type_str": "table",
"text": "Two column annotation.",
"html": null
},
"TABREF3": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Sequence to sequence task.",
"html": null
},
"TABREF4": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Example of source and target sequences.",
"html": null
},
"TABREF5": {
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null
}
}
}
}