{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:40:28.361472Z"
},
"title": "Multi-lingual Discourse Segmentation and Connective Identification: MELODI at DISRPT2021",
"authors": [
{
"first": "Morteza",
"middle": [],
"last": "Ezzabady",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "ANITI"
}
},
"email": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "ANITI"
}
},
"email": ""
},
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CNRS",
"location": {
"region": "ANITI"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an approach for discourse segmentation and discourse connective identification, both at the sentence and document level, within the DISRPT 2021 shared task, a multilingual and multi-formalism evaluation campaign. Building on the most successful architecture from the similar 2019 shared task, we leverage datasets in the same or similar languages to augment training data, and we improve on the best systems from the previous campaign on 3 out of 4 subtasks, with a mean improvement of 0.85% over all 16 datasets. Within the DISRPT 2021 campaign, the system ranks 3rd overall, very close to the 2nd system, but with a significant gap with respect to the best system, which uses a rich set of additional features. The system is nonetheless the best on languages that benefited from cross-lingual training on sentence-internal segmentation (German and Spanish).",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an approach for discourse segmentation and discourse connective identification, both at the sentence and document level, within the DISRPT 2021 shared task, a multilingual and multi-formalism evaluation campaign. Building on the most successful architecture from the similar 2019 shared task, we leverage datasets in the same or similar languages to augment training data, and we improve on the best systems from the previous campaign on 3 out of 4 subtasks, with a mean improvement of 0.85% over all 16 datasets. Within the DISRPT 2021 campaign, the system ranks 3rd overall, very close to the 2nd system, but with a significant gap with respect to the best system, which uses a rich set of additional features. The system is nonetheless the best on languages that benefited from cross-lingual training on sentence-internal segmentation (German and Spanish).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Discourse segmentation, the separation of a text or conversation into the elementary units that make up the arguments of its rhetorical structure, has long been a neglected step in discourse analysis. It has been considered easy and generally assumed as given in discourse parsing studies, where the focus is to predict the rhetorical structure of a document, a labelled relational structure whose properties depend on the theoretical framework considered: Rhetorical Structure Theory (RST, Mann and Thompson, 1988), Segmented Discourse Representation Theory (SDRT, Asher and Lascarides, 2003), or the Penn Discourse Treebank (PDTB, Prasad et al., 2008).",
"cite_spans": [
{
"start": 480,
"end": 510,
"text": "(RST, Mann and Thompson, 1988)",
"ref_id": null
},
{
"start": 562,
"end": 589,
"text": "Asher and Lascarides, 2003)",
"ref_id": "BIBREF2"
},
{
"start": 623,
"end": 650,
"text": "(PDTB, Prasad et al., 2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, this important step has generated more interest recently, as illustrated by the 2019 shared task at the Discourse Relation Parsing and Treebanking (DISRPT) workshop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This campaign made several existing corpora in different languages available in a common format, expressing the task as a sequence tagging problem where tokens are to be classified as beginning a segment or not, or, in the case of the PDTB corpora, as being part of a discourse connective signalling a relation between textual arguments. Segmentation in itself has also shown a lot of potential as an auxiliary task in machine translation (Chen et al., 2020) and summarization (Xu et al., 2020), independently of discourse parsing.",
"cite_spans": [
{
"start": 432,
"end": 451,
"text": "(Chen et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 470,
"end": 487,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The DISRPT 2021 shared task reproduces the same setting as DISRPT 2019, with some additional data and minor modifications of the original datasets: segmentation is task 1 and connective identification is task 2. It also adds, as task 3, the prediction of relations between segments, assuming those are known.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The following shows examples from the English datasets, illustrating task 1 and task 2 respectively, with the intended units to recover marked between brackets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 [Three seats currently are vacant] [and three others are likely to be filled within a few years] (...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 [But] [in the end] his resignation as Chancellor of the Exchequer may be a good thing (...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the first case, the tokens \"Three\" and \"and\" would be marked as segment beginnings. In the second case, where the target connectives are \"but\" and \"in the end\", the beginning tokens \"but\" and \"in\" would be marked \"B\" (begin), and \"the\" and \"end\" would be marked \"I\" (inside). In both examples, all other tokens would be marked \"out\". Since sentences are almost always discourse units in existing frameworks, segmentation can be seen as two sub-problems: detecting sentences, and detecting intra-sentence segment boundaries. To reflect this, the 2019 shared task introduced two sub-tasks for segmentation and connective identification: either sentence-level segmentation, with sentence boundaries given (gold when annotated, otherwise produced by a sentence splitter) and informed by syntactic parses of the sentences, or document-level segmentation without any of that information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the exception of the systems presented at DISRPT 2019 (Bourgonje and Sch\u00e4fer, 2019; Yu et al., 2019; Muller et al., 2019), existing work on segmentation has always assumed gold sentences, e.g. (Wang et al., 2018; Lukasik et al., 2020).",
"cite_spans": [
{
"start": 56,
"end": 85,
"text": "(Bourgonje and Sch\u00e4fer, 2019;",
"ref_id": "BIBREF3"
},
{
"start": 86,
"end": 102,
"text": "Yu et al., 2019;",
"ref_id": "BIBREF38"
},
{
"start": 103,
"end": 123,
"text": "Muller et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 192,
"end": 211,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 212,
"end": 233,
"text": "Lukasik et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One interesting aspect of this task is the availability of comparable data in different languages. This has been leveraged in the past for segmentation (Braud et al., 2017b), but not in the 2019 campaign, where the best system relied on fine-tuning a contextual language model for each language separately, albeit using the same multilingual embedding model (Muller et al., 2019).",
"cite_spans": [
{
"start": 154,
"end": 175,
"text": "(Braud et al., 2017b)",
"ref_id": "BIBREF5"
},
{
"start": 366,
"end": 387,
"text": "(Muller et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Here we propose to build on the previous best DISRPT system and exploit the availability of multiple corpora for the same language, or for the same family of languages (Romance, Germanic), to augment the training of dedicated models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Combining this approach with a few adjustments to the base model, we manage to improve on many datasets compared to the previous best DISRPT systems, with a mean difference in F1 score of 0.46% and 1.24% on segment boundary detection at the sentence and document level respectively, and 3.07% on connective detection for sentences (we did not improve results at the document level for discourse connectives).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Within the DISRPT 2021 campaign, the system ranks 3rd overall, very close to the 2nd system, but with a significant gap with respect to the best system, which uses a rich set of additional features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our code and instructions to reproduce the experiments are available online. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Discourse segmentation appeared as an NLP task with the creation of the first annotated RST documents in English, and was primarily rule-based (Marcu, 2000). Since then, the literature on discourse parsing has generally assumed that elementary discourse units (discourse segments) were given, with only a handful of exceptions (Soricut and Marcu, 2003; Fisher and Roark, 2007; Tofiloski et al., 2009; Hernault et al., 2010; Joty et al., 2015), until more recent neural-based work (Wang et al., 2018; Lukasik et al., 2020), still at the sentence level, and always on English or Mandarin. Only the work of Braud et al. (2017b,c) has considered more varied languages, after the creation of a few different datasets in the past ten years (see the Data section below).",
"cite_spans": [
{
"start": 143,
"end": 156,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF22"
},
{
"start": 323,
"end": 348,
"text": "(Soricut and Marcu, 2003;",
"ref_id": "BIBREF30"
},
{
"start": 349,
"end": 372,
"text": "Fisher and Roark, 2007;",
"ref_id": "BIBREF15"
},
{
"start": 373,
"end": 396,
"text": "Tofiloski et al., 2009;",
"ref_id": "BIBREF34"
},
{
"start": 397,
"end": 419,
"text": "Hernault et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 420,
"end": 438,
"text": "Joty et al., 2015)",
"ref_id": "BIBREF19"
},
{
"start": 477,
"end": 496,
"text": "(Wang et al., 2018;",
"ref_id": "BIBREF36"
},
{
"start": 497,
"end": 518,
"text": "Lukasik et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 602,
"end": 625,
"text": "(Braud et al., 2017b,c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The DISRPT 2019 workshop introduced a more general evaluation framework for discourse segmentation, with a shared task considering multilingual data and segmentation at both the sentence and document level. The best system (Muller et al., 2019) at both granularities used a sequential tagging model fine-tuned on contextual embeddings.",
"cite_spans": [
{
"start": 225,
"end": 246,
"text": "(Muller et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Multi-lingual discourse parsing is also becoming more popular, see for instance (Braud et al., 2017a; Chen et al., 2020), in which it is seen as a form of multi-task learning, but this was not applied to discourse segmentation. In other NLP subfields, leveraging the availability of corpora in different languages for the same task is an active area of research, with different strategies for combining tasks and languages, using meta-learning and complex sampling strategies (Nooralahzadeh et al., 2020; Tarunesh et al., 2021). A simpler approach that inspired us here, due to (Dehouck and Denis, 2019), is to use the relations between close languages to guide the training process on a task: a generic model is trained on groups of languages, further refined with models by subgroups, and finally fine-tuned on individual languages.",
"cite_spans": [
{
"start": 80,
"end": 101,
"text": "(Braud et al., 2017a;",
"ref_id": "BIBREF4"
},
{
"start": 102,
"end": 120,
"text": "Chen et al., 2020)",
"ref_id": "BIBREF10"
},
{
"start": 482,
"end": 510,
"text": "(Nooralahzadeh et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 511,
"end": 533,
"text": "Tarunesh et al., 2021)",
"ref_id": null
},
{
"start": 585,
"end": 610,
"text": "(Dehouck and Denis, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The 2021 shared task provides 16 corpora annotated either with discourse boundaries (13) or, in the case of the PDTB corpora, with discourse connectives (3), with the RST Farsi corpus as a surprise dataset. This covers 11 different languages, mostly Indo-European, with a majority of European languages: 3 Romance (Spanish, French, Portuguese), 3 Germanic (English, German, Dutch), and Russian, the non-Indo-European ones being Turkish, Basque, and Mandarin. Some of the datasets depend on licences for the underlying text corpus and are not freely available. We had licences for all of them except the Mandarin corpus (zho.pdtb.cdtb), which was provided by the organizers for the evaluation of the task. Except for Farsi, all the datasets were present in the DISRPT 2019 shared task, but the Russian [Table 1 caption: Descriptions of all corpora, according to the underlying theoretical framework. The tasks consist in finding connectives in the PDTB datasets, or the elementary discourse units (segments) in the RST and SDRT datasets. For each corpus, the table lists the number of documents in each split, the number of sentences and annotated units (connective tokens or segment boundaries) in the training set, and whether the gold sentences were manually annotated or given by a parser.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "dataset has been extended since, and some corpora without gold syntax annotations have been reparsed with Stanza or spaCy to provide morpho-syntactic information for the sentence-internal subtasks. All tasks are cast as sequence tagging and annotated as such: for segmentation, each token is marked as being a segment boundary or not; for connectives, which can span multiple tokens, the annotation follows the BIO convention, with the three labels Begin/Inside/Out for each token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Datasets are not homogeneous, as they were annotated along different principles based on three competing theoretical frameworks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 Rhetorical Structure Theory (Mann and Thompson, 1988), which assumes a linear segmentation of documents into discourse units (no overlaps), which are then related within constituent tree structures. This is followed in the majority of the corpora (11).",
"cite_spans": [
{
"start": 30,
"end": 55,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 Segmented Discourse Representation Theory (Asher and Lascarides, 2003), which allows for embedded segments; these were linearized here for homogeneity of the task: a segment embedded in another one was re-annotated as forming three segments. This is the case for two corpora.",
"cite_spans": [
{
"start": 44,
"end": 72,
"text": "(Asher and Lascarides, 2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "\u2022 Penn Discourse Treebank (Prasad et al., 2008), which annotates discourse connectives and their arguments in a discourse relation. This gives rise to a different annotation scheme, as noted above, since the task is only to locate the connective. Table 1 presents the size of all corpora, separated by theoretical framework and expressed in number of documents, number of sentences, and number of discourse units (segments or connectives). Note that the corpora vary greatly in size and annotation. One dataset is annotated on chat conversations (STAC), while all the others are on written text, mostly news or encyclopedic.",
"cite_spans": [
{
"start": 26,
"end": 47,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 244,
"end": 251,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Experimental results cannot be compared to previous multi-lingual segmentation efforts (Braud et al., 2017b), because some of the corpora have been revised (GUM, RRT) or are not taken in their entirety (CSTN), and some have been added (TDB, PRSTC). They should however be quite close to the DISRPT 2019 evaluation, as only the Russian and GUM corpora have been extended (and there is one additional dataset).",
"cite_spans": [
{
"start": 87,
"end": 108,
"text": "(Braud et al., 2017b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "More details about all datasets can be found in the following publications: English RST-DT (Carlson et al., 2001), PDTB (Prasad et al., 2008), SDRT-STAC (Asher et al., 2016) and GUM (Zeldes, 2016), Spanish RST (2) (da Cunha et al., 2011; Cao et al., 2018), Mandarin Chinese (Zhou et al., 2014; Cao et al., 2018), German RST (Stede and Neumann, 2014), French SDRT-Annodis (Afantenos et al., 2012), Basque RST (Iruskieta et al., 2013), Portuguese RST (Cardoso et al., 2011), Russian RST (Pisarevskaya et al., 2017), Turkish PDTB (Zeyrek et al., 2013), Dutch RST (Redeker et al., 2012) and Persian RST (Shahmohammadi et al., 2021).",
"cite_spans": [
{
"start": 90,
"end": 111,
"text": "(Carlson et al., 2001",
"ref_id": "BIBREF9"
},
{
"start": 120,
"end": 141,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF26"
},
{
"start": 154,
"end": 174,
"text": "(Asher et al., 2016)",
"ref_id": "BIBREF1"
},
{
"start": 183,
"end": 197,
"text": "(Zeldes, 2016)",
"ref_id": "BIBREF39"
},
{
"start": 274,
"end": 293,
"text": "(Zhou et al., 2014;",
"ref_id": "BIBREF42"
},
{
"start": 294,
"end": 311,
"text": "Cao et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 325,
"end": 350,
"text": "(Stede and Neumann, 2014)",
"ref_id": "BIBREF31"
},
{
"start": 373,
"end": 397,
"text": "(Afantenos et al., 2012)",
"ref_id": null
},
{
"start": 411,
"end": 435,
"text": "(Iruskieta et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 489,
"end": 516,
"text": "(Pisarevskaya et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 532,
"end": 552,
"text": "(Zeyrek et al., 2013",
"ref_id": "BIBREF41"
},
{
"start": 565,
"end": 587,
"text": "(Redeker et al., 2012)",
"ref_id": "BIBREF28"
},
{
"start": 604,
"end": 632,
"text": "(Shahmohammadi et al., 2021)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "In this paper we want to leverage combinations of multiple datasets for training, not only with corpora for the same language and task, but also with languages from the same families.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "We started from the architecture that showed the best results on almost all languages and configurations at DISRPT 2019, namely (Muller et al., 2019), which is built around BERT (Devlin et al., 2019), a contextual language model that is easy to fine-tune on sequence tagging problems. The original architecture combined BERT contextual embeddings with the output of CNN filters over the characters of each word piece, which were then fed to a single-layer BiLSTM for the final prediction. The model is initialized with the multilingual BERT model, then fine-tuned on each corpus separately as a sequence tagging task. The original implementation used the AllenNLP library (Gardner et al., 2017), and so does ours.",
"cite_spans": [
{
"start": 128,
"end": 149,
"text": "(Muller et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 179,
"end": 200,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 670,
"end": 692,
"text": "(Gardner et al., 2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Base architecture",
"sec_num": "4.1"
},
{
"text": "Since BERT has a limit on the number of word pieces it can take as input, a preprocessing step is needed for document-level segmentation. In (Muller et al., 2019), the CoreNLP library was used to predict sentence boundaries and exploit this information; we used the more recent Stanza library by the same team for that purpose (Qi et al., 2020).",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Muller et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 341,
"end": 358,
"text": "(Qi et al., 2020)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Base architecture",
"sec_num": "4.1"
},
{
"text": "We explored potential improvements to that architecture, swapping in the multi-lingual pretrained language model XLM (Conneau and Lample, 2019), or adding another layer to the BiLSTM stage. The final configuration was chosen based on preliminary experiments on some of the datasets, evaluated on their respective development sets.",
"cite_spans": [
{
"start": 115,
"end": 141,
"text": "(Conneau and Lample, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Base architecture",
"sec_num": "4.1"
},
{
"text": "These experiments showed that XLM did not help, but the extra LSTM layer could. Changes to other hyperparameters did not improve these preliminary results, so we kept them as in the original model. Details of the parameters can be found in the declarative config file of ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base architecture",
"sec_num": "4.1"
},
{
"text": "Since the shared task involves multiple datasets for the same language (2 for Spanish RST, 2 for English RST), we assumed it would be beneficial to combine them for training. Datasets in the same language are not necessarily consistent in their annotation, but we hypothesize that they have enough commonalities to help training. We also took inspiration from work on multi-lingual syntactic parsing, where a lot of corpora follow the same formalism and where past work has tried to use commonalities between different languages, particularly the approach of (Dehouck and Denis, 2019) in which the phylogenetic tree of languages guides the training process: a generic model is trained on groups of languages, further refined in models by subgroups, and finally fine-tuned on individual languages. The DISRPT datasets are not numerous enough to provide a complex tree of languages, but we can still take advantage of the presence of languages that are relatively close: Romance languages (3 languages and 4 datasets for segmentation) and Germanic languages (3 languages and 5 datasets for segmentation). [Table 3 caption: Connective identification results on the development sets, for both sentence (conll) and document (doc) input.]",
"cite_spans": [],
"ref_spans": [
{
"start": 1096,
"end": 1103,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset grouping",
"sec_num": "4.2"
},
{
"text": "The modification of ToNy, the best system from DISRPT 2019, gives us our base system, on which we build with multi-corpus training in a second stage. We report the results of these systems on segmentation in Table 2 and on discourse connective identification in Table 3, with precision, recall and F1 score on the detection of segment boundary tokens and discourse connectives. A first comparison with respect to the best 2019 subsystem for each of the 4 subtasks is given in Table 6: that means ToNy for segmentation (both intra-sentential and plain) and for discourse connectives (plain), and GumDrop for discourse connectives (conll input). We can see that, on average over all datasets, the base system gains +0.5, mostly due to its improvements on plain document segmentation (connective detection only involves 3 datasets). We left the 2021 surprise dataset out of that evaluation, since we have no comparison point for it. Note that results on this new dataset are good and consistent with the other corpora. Lower results are obtained on the smaller datasets, for obvious reasons (Spanish SCTB, Mandarin SCTB, and to a lesser extent Basque and French). Models trained without sentence information perform a little worse, as expected, with -1.6% on average, again with wider gaps for small corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 220,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 270,
"end": 277,
"text": "Table 3",
"ref_id": null
},
{
"start": 484,
"end": 491,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Base model",
"sec_num": "5.1"
},
{
"text": "We do not show the breakdown by dataset for the comparison with DISRPT 2019, but there is a lot of variance in the results, with differences ranging from -4.5 (French Annodis) to +6.85 (Mandarin SCTB), and not necessarily only on small corpora. [Table 4 caption: Intra-sentential results on the Romance development datasets, with different training setups: SPA means the grouping of both Spanish datasets for training, SPO the grouping of Spanish and Portuguese data, and ROM the further addition of French to the group. FT means a model fine-tuning the SPO model. Lines with \"self\" are copied from the basic evaluation (training on the dataset only) for comparison. Bold indicates the best F1 result per dataset on its development set; the corresponding model was kept for the final evaluation.]",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 247,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Base model",
"sec_num": "5.1"
},
{
"text": "As shown above, a lot of the smaller datasets have lower results than the larger ones, which is to be expected. We present here the results of applying the strategy described in Section 4.2. We tried it on two groups of languages: Romance languages and Germanic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-dataset training",
"sec_num": "5.2"
},
{
"text": "For Romance languages, since there are two Spanish RST corpora, we grouped them to train a more generic \"Spanish\" model; we then trained a model with all Spanish and Portuguese data, and finally a generic Romance model by also including French. We then used those models for predictions on the respective development parts of the datasets. We did something similar with Germanic languages, grouping all English datasets into one, joining the Dutch and German datasets into another, and finally training a generic Germanic model on all of them. Following a procedure similar to what was done in (Dehouck and Denis, 2019), we also fine-tuned some of these models on the individual datasets before using them for prediction. Lack of time during the campaign prevented us from trying all such combinations on all datasets, but we tested this on all Spanish and English datasets, respectively fine-tuning the global Spanish-Portuguese model on the Spanish and Portuguese datasets (since it showed a good compromise on the dev sets) and the global English model on all English datasets. Due to time constraints, we tested this only on one type of input, the sentence level (conll files).",
"cite_spans": [
{
"start": 585,
"end": 610,
"text": "(Dehouck and Denis, 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-dataset training",
"sec_num": "5.2"
},
{
"text": "Results are presented in Tables 4 and 5. For the final evaluation, we kept for each dataset the model that performed best on that dataset's development set.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 39,
"text": "Tables 4 and 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Multi-dataset training",
"sec_num": "5.2"
},
{
"text": "For the final evaluation of the campaign, every team provided their code and instructions for reproducing the experiments, and one member of the organizing team fully reproduced the experiments of one model they were not involved with (two of the four teams included organization members, including for the present system).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final evaluation",
"sec_num": "5.3"
},
{
"text": "We report the official scores on the test sets in Table 7 for segmentation and Table 8 for connective detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final evaluation",
"sec_num": "5.3"
},
{
"text": "Overall, our system is ranked 3rd out of 4, with results very close to the 2nd-ranked system (Segformers): only 0.15% difference on average for treebanked data segmentation, and 0.45% on plain data segmentation. The gap with the first-ranked system (DiscoDisco) is 0.7% on average on treebanked segmentation and 1.28% on plain text segmentation. Our system performed less well on connective detection, especially with respect to models taking dependencies between labels into account (such as a CRF in the case of DiscoDisco): about 4.5% less than Segformers and 6% less than DiscoDisco, mostly due to lower results on the Mandarin dataset. 3 It is to be noted that we achieved our best results relative to the other systems on datasets used in cross-training with similar languages (see Sections 4.2 and 5.2), for which we observed on the development data that it had an impact: the German and Spanish corpora, in the case of treebanked data (conll), since we did not have time to try this strategy on plain documents. We have the best scores on these languages, but note that cross-training cannot really explain the good results on German, since results there are surprisingly similar between treebanked and plain data. [Table 6 caption: Mean comparison of our base system and our improved system with respect to the best DISRPT 2019 system for each subtask, and the mean over all 16 datasets. This is the only evaluation we made on the test set prior to the official evaluation. Note that grouped training was tested only on conll segmentation, so the other scores are copied from the base system.]",
"cite_spans": [],
"ref_spans": [
{
"start": 1006,
"end": 1013,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Final evaluation",
"sec_num": "5.3"
},
{
"text": "We presented an approach for discourse segmentation and discourse connective identification, both at the sentence and document level, in a multi-lingual and multi-formalism context. Building on the successful architecture from the 2019 DISRPT shared task, we leveraged datasets in the same or similar languages to augment training data, and improved on the best systems from the previous campaign on 3 of the 4 sub-tasks. While below the best system, which uses a rich set of features on top of a similar architecture, we still manage to obtain the best scores on some of the languages where we experimented with cross-lingual training: German and Spanish for sentence-internal segmentation. Due to time constraints, we could not fully explore all the potentially useful language combinations and fine-tunings on specific datasets that could help improve on the tasks and give insights into how different languages help each other in addressing the discourse segmentation problem. Further progress on multi-lingual embeddings, or alignments of different embeddings, could be a source of future investigations, as well as more elaborate multi-lingual training procedures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Note that authors Philippe Muller and Chlo\u00e9 Braud were part of the organization of the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://gitlab.irit.fr/melodi/ andiamo/discoursesegmentation/discut",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Full results for all systems are not shown for space constraint reasons, but are displayed on the Shared task website at https://sites.google.com/georgetown. edu/disrpt2021/results and are summarized in the introductory paper to the Shared Task proceedings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by the ANR (ANR-19-PI3A-0004) through the AI Interdisci-plinary Institute, ANITI, as a part of France's \"Investing for the Future -PIA3\" program, and through the project SLANT (ANR-19-CE23-0022) and Quantum (ANR-19-CE23-0025).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Marianne Vergez-Couret, and Laure Vieu. 2012. An empirical resource for discovering cognitive principles of discourse organisation: the annodis corpus",
"authors": [
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Myriam",
"middle": [],
"last": "Bras",
"suffix": ""
},
{
"first": "C\u00e9cile",
"middle": [],
"last": "Fabre",
"suffix": ""
},
{
"first": "Lydia-Mai",
"middle": [],
"last": "Ho-Dac",
"suffix": ""
},
{
"first": "Anne",
"middle": [
"Le"
],
"last": "Draoulec",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Marie-Paule",
"middle": [],
"last": "Pery-Woodley",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
},
{
"first": "Josette",
"middle": [],
"last": "Rebeyrolles",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Tanguy",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stergos Afantenos, Nicholas Asher, Farah Benamara, Myriam Bras, C\u00e9cile Fabre, Lydia-Mai Ho-Dac, Anne Le Draoulec, Philippe Muller, Marie-Paule Pery-Woodley, Laurent Pr\u00e9vot, Josette Rebeyrolles, Ludovic Tanguy, Marianne Vergez-Couret, and Laure Vieu. 2012. An empirical resource for discov- ering cognitive principles of discourse organisation: the annodis corpus. In Proceedings of LREC.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discourse structure and dialogue acts in multiparty dialogue: the stac corpus",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Hunter",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Morey",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher, Julie Hunter, Mathieu Morey, Farah Benamara, and Stergos Afantenos. 2016. Discourse structure and dialogue acts in multiparty dialogue: the stac corpus. In Proceedings of LREC.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Logics of Conversation",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher and Alex Lascarides. 2003. Logics of Conversation. Cambridge University Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multilingual and cross-genre discourse unit segmentation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Bourgonje",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Sch\u00e4fer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "105--114",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2714"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Bourgonje and Robin Sch\u00e4fer. 2019. Multi- lingual and cross-genre discourse unit segmentation. In Proceedings of the Workshop on Discourse Rela- tion Parsing and Treebanking 2019, pages 105-114, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cross-lingual RST discourse parsing",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "292--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Braud, Maximin Coavoux, and Anders S\u00f8gaard. 2017a. Cross-lingual RST discourse parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 292-304, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cross-lingual and cross-domain discourse segmentation of entire documents",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "237--243",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2037"
]
},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Braud, Oph\u00e9lie Lacroix, and Anders S\u00f8gaard. 2017b. Cross-lingual and cross-domain discourse segmentation of entire documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 237-243, Vancouver, Canada. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Does syntax help discourse segmentation? not so much",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2432--2442",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1258"
]
},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Braud, Oph\u00e9lie Lacroix, and Anders S\u00f8gaard. 2017c. Does syntax help discourse segmentation? not so much. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2432-2442, Copenhagen, Denmark. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The RST spanish-chinese treebank",
"authors": [
{
"first": "Shuyuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Iria Da Cunha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Iruskieta",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of LAW-MWE-CxG",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shuyuan Cao, Iria da Cunha, and Mikel Iruskieta. 2018. The RST spanish-chinese treebank. In Proceedings of LAW-MWE-CxG.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "CSTNews -a discourse-annotated corpus for single and multi-document summarization of news texts in Brazilian Portuguese",
"authors": [
{
"first": "C",
"middle": [
"F"
],
"last": "Paula",
"suffix": ""
},
{
"first": "Erick",
"middle": [
"G"
],
"last": "Cardoso",
"suffix": ""
},
{
"first": "Mar\u00eda Luc\u00eda Castro",
"middle": [],
"last": "Maziero",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jorge",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Eloize",
"suffix": ""
},
{
"first": "Ariani",
"middle": [],
"last": "Seno",
"suffix": ""
},
{
"first": "Lucia",
"middle": [
"Helena"
],
"last": "Di Felippo",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Machado Rino",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Das Gracas Volpe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nunes",
"suffix": ""
},
{
"first": "A",
"middle": [
"S"
],
"last": "Thiago",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pardo",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 3rd RST Brazilian Meeting",
"volume": "",
"issue": "",
"pages": "88--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula C.F. Cardoso, Erick G. Maziero, Mar\u00eda Luc\u00eda Castro Jorge, Eloize R.M. Seno, Ariani Di Fe- lippo, Lucia Helena Machado Rino, Maria das Gra- cas Volpe Nunes, and Thiago A. S. Pardo. 2011. CSTNews -a discourse-annotated corpus for single and multi-document summarization of news texts in Brazilian Portuguese. In Proceedings of the 3rd RST Brazilian Meeting, pages 88-105.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Second SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged cor- pus in the framework of Rhetorical Structure Theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Modeling discourse structure for document-level neural machine translation",
"authors": [
{
"first": "Junxuan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiarui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chulun",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jianwei",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the First Workshop on Automatic Simultaneous Translation",
"volume": "",
"issue": "",
"pages": "30--36",
"other_ids": {
"DOI": [
"10.18653/v1/2020.autosimtrans-1.5"
]
},
"num": null,
"urls": [],
"raw_text": "Junxuan Chen, Xiang Li, Jiarui Zhang, Chulun Zhou, Jianwei Cui, Bin Wang, and Jinsong Su. 2020. Mod- eling discourse structure for document-level neural machine translation. In Proceedings of the First Workshop on Automatic Simultaneous Translation, pages 30-36, Seattle, Washington. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7057--7067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems 32: An- nual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057-7067.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "On the development of the RST Spanish Treebank",
"authors": [
{
"first": "Juan-Manuel",
"middle": [],
"last": "Iria Da Cunha",
"suffix": ""
},
{
"first": "Gerardo",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sierra",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifth Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iria da Cunha, Juan-Manuel Torres-Moreno, and Ger- ardo Sierra. 2011. On the development of the RST Spanish Treebank. In Proceedings of the Fifth Lin- guistic Annotation Workshop, LAW.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Phylogenic multi-lingual dependency parsing",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Dehouck",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Denis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "192--203",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1017"
]
},
"num": null,
"urls": [],
"raw_text": "Mathieu Dehouck and Pascal Denis. 2019. Phylo- genic multi-lingual dependency parsing. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 192-203, Minneapo- lis, Minnesota. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The utility of parse-derived features for automatic discourse segmentation",
"authors": [
{
"first": "Seeger",
"middle": [],
"last": "Fisher",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "488--495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seeger Fisher and Brian Roark. 2007. The utility of parse-derived features for automatic discourse seg- mentation. In Proceedings of the 45th Annual Meet- ing of the Association of Computational Linguistics, pages 488-495, Prague, Czech Republic. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Allennlp: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. Allennlp: A deep semantic natural language processing platform.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "HILDA: A discourse parser using support vector machine classification",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Prendinger",
"suffix": ""
},
{
"first": "David",
"middle": [
"A"
],
"last": "",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Dialogue Discourse",
"volume": "1",
"issue": "3",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Hernault, Helmut Prendinger, David A. duVerle, and Mitsuru Ishizuka. 2010. HILDA: A discourse parser using support vector machine classification. Dialogue Discourse, 1(3):1-33.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The RST Basque Treebank: an online search interface to check rhetorical relations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Iruskieta",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mar\u00eda",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "Aranzabe",
"suffix": ""
},
{
"first": "Itziar",
"middle": [],
"last": "Diaz De Ilarraza",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Gonzalez-Dios",
"suffix": ""
},
{
"first": "Oier",
"middle": [],
"last": "Lersundi",
"suffix": ""
},
{
"first": "Calle",
"middle": [],
"last": "Lopez De La",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 4th Workshop RST and Discourse Studies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Iruskieta, Mar\u00eda J. Aranzabe, Arantza Diaz de Ilarraza, Itziar Gonzalez-Dios, Mikel Lersundi, and Oier Lopez de la Calle. 2013. The RST Basque Tree- bank: an online search interface to check rhetorical relations. In Proceedings of the 4th Workshop RST and Discourse Studies.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "CODRA: A novel discriminative framework for rhetorical analysis",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "41",
"issue": "3",
"pages": "385--435",
"other_ids": {
"DOI": [
"10.1162/COLI_a_00226"
]
},
"num": null,
"urls": [],
"raw_text": "Shafiq Joty, Giuseppe Carenini, and Raymond T. Ng. 2015. CODRA: A novel discriminative framework for rhetorical analysis. Computational Linguistics, 41(3):385-435.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Text segmentation by cross segment attention",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Lukasik",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Dadachev",
"suffix": ""
},
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Gon\u00e7alo",
"middle": [],
"last": "Sim\u00f5es",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4707--4716",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.380"
]
},
"num": null,
"urls": [],
"raw_text": "Michal Lukasik, Boris Dadachev, Kishore Papineni, and Gon\u00e7alo Sim\u00f5es. 2020. Text segmentation by cross segment attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 4707-4716, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Rhetorical Structure Theory: Toward a functional theory of text organization. Text",
"authors": [
{
"first": "C",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Mann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "8",
"issue": "",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8:243-281.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The rhetorical parsing of unrestricted texts: a surface-based approach",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "3",
"pages": "395--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu. 2000. The rhetorical parsing of unre- stricted texts: a surface-based approach. Computa- tional Linguistics, 26(3):395-448.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "ToNy: Contextual embeddings for accurate multilingual discourse segmentation of full documents",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Morey",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2715"
]
},
"num": null,
"urls": [],
"raw_text": "Philippe Muller, Chlo\u00e9 Braud, and Mathieu Morey. 2019. ToNy: Contextual embeddings for accurate multilingual discourse segmentation of full docu- ments. In Proceedings of the Workshop on Dis- course Relation Parsing and Treebanking 2019, pages 115-124, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Zero-shot cross-lingual transfer with meta learning",
"authors": [
{
"first": "Farhad",
"middle": [],
"last": "Nooralahzadeh",
"suffix": ""
},
{
"first": "Giannis",
"middle": [],
"last": "Bekoulis",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Augenstein",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4547--4562",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.368"
]
},
"num": null,
"urls": [],
"raw_text": "Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, and Isabelle Augenstein. 2020. Zero-shot cross-lingual transfer with meta learning. In Pro- ceedings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 4547-4562, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Towards building a discourseannotated corpus of russian",
"authors": [
{
"first": "Dina",
"middle": [],
"last": "Pisarevskaya",
"suffix": ""
},
{
"first": "Margarita",
"middle": [],
"last": "Ananyeva",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Kobozeva",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nasedkin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nikiforova",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Pavlova",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shelepov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Computational Linguistics and Intellectual Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dina Pisarevskaya, Margarita Ananyeva, Maria Kobozeva, A Nasedkin, S Nikiforova, I Pavlova, and A Shelepov. 2017. Towards building a discourse- annotated corpus of russian. In Proceedings of the International Conference on Computational Lin- guistics and Intellectual Technologies \"Dialogue\".",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The Penn Discourse Treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "The Sixth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "2961--2968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In The Sixth International Conference on Language Resources and Evaluation, pages 2961 -2968, Mar- rakech, Morocco. ELRA.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Stanza: A Python natural language processing toolkit for many human languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuhui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Bolton",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics: System Demonstrations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Multilayer discourse annotation of a Dutch text corpus",
"authors": [
{
"first": "Gisela",
"middle": [],
"last": "Redeker",
"suffix": ""
},
{
"first": "Ildik\u00f3",
"middle": [],
"last": "Berzl\u00e1novich",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gisela Redeker, Ildik\u00f3 Berzl\u00e1novich, Nynke van der Vliet, Gosse Bouma, and Markus Egg. 2012. Multi- layer discourse annotation of a Dutch text corpus. In Proceedings of LREC.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Persian rhetorical structure theory",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Shahmohammadi",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Veisi",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Darzi",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Shahmohammadi, Hadi Veisi, and Ali Darzi. 2021. Persian rhetorical structure theory. CoRR, abs/2106.13833.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sentence level discourse parsing using syntactic and lexical information",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "228--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical infor- mation. In Proceedings of the 2003 Human Lan- guage Technology Conference of the North Ameri- can Chapter of the Association for Computational Linguistics, pages 228-235.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Potsdam Commentary Corpus 2.0: Annotation for discourse research",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Stede and Arne Neumann. 2014. Potsdam Commentary Corpus 2.0: Annotation for discourse research. In Proceedings of LREC.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Meta-learning for effective multi-task and multilingual modelling",
"authors": [],
"year": null,
"venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
"volume": "",
"issue": "",
"pages": "3600--3612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meta-learning for effective multi-task and multilin- gual modelling. In Proceedings of the 16th Con- ference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3600-3612, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A syntactic and lexical-based discourse segmenter",
"authors": [
{
"first": "Milan",
"middle": [],
"last": "Tofiloski",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Brooke",
"suffix": ""
},
{
"first": "Maite",
"middle": [],
"last": "Taboada",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milan Tofiloski, Julian Brooke, and Maite Taboada. 2009. A syntactic and lexical-based discourse seg- menter. In Proceedings of the ACL-IJCNLP 2009",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Conference Short Papers",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "77--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference Short Papers, pages 77-80, Suntec, Sin- gapore. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Toward fast and accurate neural discourse segmentation",
"authors": [
{
"first": "Yizhong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jingfeng",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "962--967",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1116"
]
},
"num": null,
"urls": [],
"raw_text": "Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmen- tation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 962-967, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Discourse-aware neural extractive text summarization",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5021--5031",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.451"
]
},
"num": null,
"urls": [],
"raw_text": "Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text sum- marization. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5021-5031, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Gum-Drop at the DISRPT2019 shared task: A model stacking approach to discourse unit segmentation and connective detection",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yilun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Siyao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Mackenzie",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Zeldes",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019",
"volume": "",
"issue": "",
"pages": "133--143",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2717"
]
},
"num": null,
"urls": [],
"raw_text": "Yue Yu, Yilun Zhu, Yang Liu, Yan Liu, Siyao Peng, Mackenzie Gong, and Amir Zeldes. 2019. Gum- Drop at the DISRPT2019 shared task: A model stacking approach to discourse unit segmentation and connective detection. In Proceedings of the Workshop on Discourse Relation Parsing and Tree- banking 2019, pages 133-143, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The GUM corpus: Creating multilayer resources in the classroom",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zeldes",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amir Zeldes. 2016. The GUM corpus: Creating multi- layer resources in the classroom. In Proceedings of LREC.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "The DIS-RPT 2019 shared task on elementary discourse unit segmentation and connective detection",
"authors": [
{
"first": "Amir",
"middle": [],
"last": "Zeldes",
"suffix": ""
},
{
"first": "Debopam",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Erick",
"middle": [
"Galani"
],
"last": "Maziero",
"suffix": ""
},
{
"first": "Juliano",
"middle": [],
"last": "Antonio",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Iruskieta",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2713"
]
},
"num": null,
"urls": [],
"raw_text": "Amir Zeldes, Debopam Das, Erick Galani Maziero, Ju- liano Antonio, and Mikel Iruskieta. 2019. The DIS- RPT 2019 shared task on elementary discourse unit segmentation and connective detection. In Proceed- ings of the Workshop on Discourse Relation Parsing and Treebanking 2019, pages 97-104, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Turkish discourse bank: Porting a discourse annotation style to a morphologically rich language",
"authors": [
{
"first": "Deniz",
"middle": [],
"last": "Zeyrek",
"suffix": ""
},
{
"first": "I\u015f\u0131n",
"middle": [],
"last": "Demir\u015fahin",
"suffix": ""
},
{
"first": "Ay\u0131\u015f\u0131\u011f\u0131 B.",
"middle": [],
"last": "Sevdik-\u00c7all\u0131",
"suffix": ""
},
{
"first": "Ruket",
"middle": [],
"last": "\u00c7ak\u0131c\u0131",
"suffix": ""
}
],
"year": 2013,
"venue": "Dialogue and Discourse",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deniz Zeyrek, Demirsahin Is\u0131n, A. Sevdik-\u00c7all\u0131, and Ruket \u00c7ak\u0131c\u0131. 2013. Turkish discourse bank: Port- ing a discourse annotation style to a morphologically rich language. Dialogue and Discourse.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Chinese discourse treebank 0.5 LDC2014T21. Web Download. Philadelphia: Linguistic Data Consortium",
"authors": [
{
"first": "Yuping",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jill",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuping Zhou, Jill Lu, Jennifer Zhang, and Nian- wen Xue. 2014. Chinese discourse treebank 0.5 LDC2014T21. Web Download. Philadelphia: Lin- guistic Data Consortium.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "Segmentation results on the development sets, for both sentence (conll) and document (doc) levels.",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF7": {
"text": "Intra sentential results on germanic development datasets, with different training setups: GD means the grouping of German and Dutch for training, GER the grouping of English, German and Dutch, and ENG the grouping of all 3 English datasets. FT means a model fine-tuning the global ENG model on the specific corpus training set. Lines with \"self\" are just copied from the basic evaluation (training on the dataset only) for comparison. In bold are indicated the best F1 results per dataset on their development set.",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF9": {
"text": "96.60 95.63 eng.rst.gum 92.76 89.54 91.13 eng.rst.rstdt 93.39 94.50 93.94 eng.sdrt.stac 85.30 87.01 86.14 eus.rst.ert 91.45 83.78 87.45 fas.rst.prstc 93.59 89.40 91.45 fra.sdrt.annodis 89.90 86.41 88.12 nld.rst.nldt 94.35 93.79 94.07 por.rst.cstn 93.36 91.83 92.59 rus.rst.rrt 83.60 84.01 83.80 spa.rst.rststb 92.19 89.78 90.97 spa.rst.sctb 78.65 89.88 83.89 zho.rst.sctb 68.11 75.00 71.39 mean 88.56 88.58 88.51",
"content": "<table><tr><td>treebanked</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>deu.rst.pcc</td><td colspan=\"3\">98.91 92.52 95.61</td></tr><tr><td>eng.rst.gum</td><td colspan=\"3\">93.27 93.65 93.46</td></tr><tr><td>eng.rst.rstdt</td><td colspan=\"3\">96.16 95.99 96.08</td></tr><tr><td>eng.sdrt.stac</td><td colspan=\"3\">97.41 92.37 94.82</td></tr><tr><td>eus.rst.ert</td><td colspan=\"3\">90.04 83.11 86.44</td></tr><tr><td>fas.rst.prstc</td><td colspan=\"3\">93.54 90.75 92.12</td></tr><tr><td colspan=\"4\">fra.sdrt.annodis 87.26 88.67 87.96</td></tr><tr><td>nld.rst.nldt</td><td colspan=\"3\">94.15 95.27 94.71</td></tr><tr><td>por.rst.cstn</td><td colspan=\"3\">90.31 94.44 92.33</td></tr><tr><td>rus.rst.rrt</td><td colspan=\"3\">88.19 81.10 84.50</td></tr><tr><td>spa.rst.rststb</td><td colspan=\"3\">92.04 93.04 92.54</td></tr><tr><td>spa.rst.sctb</td><td colspan=\"3\">85.39 90.48 87.86</td></tr><tr><td>zho.rst.sctb</td><td colspan=\"3\">92.48 73.21 81.73</td></tr><tr><td>mean</td><td colspan=\"3\">92.24 89.58 90.78</td></tr><tr><td>plain</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>deu.rst.pcc</td><td>94.67</td><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF10": {
"text": "Final official segmentation results on the test set, reproduced by the organizers. In bold, F1 scores for which our system has the best performance of the shared task.",
"content": "<table><tr><td>treebanked</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>PDTB</td><td colspan=\"3\">93.32 88.67 90.94</td></tr><tr><td>TDB</td><td colspan=\"3\">90.55 86.93 88.70</td></tr><tr><td>CDTB</td><td colspan=\"3\">84.43 66.03 74.10</td></tr><tr><td>mean</td><td colspan=\"3\">89.43 80.54 84.58</td></tr><tr><td>plain</td><td/><td/><td/></tr><tr><td>PDTB</td><td colspan=\"3\">88.84 92.09 90.43</td></tr><tr><td>TDB</td><td colspan=\"3\">90.12 88.10 89.10</td></tr><tr><td>CDTB</td><td colspan=\"3\">77.40 72.44 74.83</td></tr><tr><td>mean</td><td colspan=\"3\">85.45 84.21 84.79</td></tr></table>",
"html": null,
"type_str": "table",
"num": null
},
"TABREF11": {
"text": "Final official connective detection results on the test set, reproduced by the organizers.",
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null
}
}
}
}