{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:27:19.536790Z"
},
"title": "DMRST: A Joint Framework for Document-Level Multilingual RST Discourse Segmentation and Parsing",
"authors": [
{
"first": "Zhengyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Ke",
"middle": [],
"last": "Shi",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Nancy",
"middle": [
"F"
],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text discourse parsing weighs importantly in understanding information flow and argumentative structure in natural language, making it beneficial for downstream tasks. While previous work significantly improves the performance of RST discourse parsing, they are not readily applicable to practical use cases: (1) EDU segmentation is not integrated into most existing tree parsing frameworks, thus it is not straightforward to apply such models on newly-coming data. (2) Most parsers cannot be used in multilingual scenarios, because they are developed only in English. (3) Parsers trained from single-domain treebanks do not generalize well on out-of-domain inputs. In this work, we propose a document-level multilingual RST discourse parsing framework, which conducts EDU segmentation and discourse tree parsing jointly. Moreover, we propose a cross-translation augmentation strategy to enable the framework to support multilingual parsing and improve its domain generality. Experimental results show that our model achieves state-of-the-art performance on document-level multilingual RST parsing in all sub-tasks. \u2020 Equal Contribution. *Corresponding Author. e1[ The European Community's consumer price index rose a provisional 0.6% in September from August ] e2[ and was up 5.3% from September 1988, ] e3[ according to Eurostat, the EC's statistical agency. ] e4[ The month-to-month rise in the index was the largest since April, ] e5[ Eurostat said. ]",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Text discourse parsing weighs importantly in understanding information flow and argumentative structure in natural language, making it beneficial for downstream tasks. While previous work significantly improves the performance of RST discourse parsing, they are not readily applicable to practical use cases: (1) EDU segmentation is not integrated into most existing tree parsing frameworks, thus it is not straightforward to apply such models on newly-coming data. (2) Most parsers cannot be used in multilingual scenarios, because they are developed only in English. (3) Parsers trained from single-domain treebanks do not generalize well on out-of-domain inputs. In this work, we propose a document-level multilingual RST discourse parsing framework, which conducts EDU segmentation and discourse tree parsing jointly. Moreover, we propose a cross-translation augmentation strategy to enable the framework to support multilingual parsing and improve its domain generality. Experimental results show that our model achieves state-of-the-art performance on document-level multilingual RST parsing in all sub-tasks. \u2020 Equal Contribution. *Corresponding Author. e1[ The European Community's consumer price index rose a provisional 0.6% in September from August ] e2[ and was up 5.3% from September 1988, ] e3[ according to Eurostat, the EC's statistical agency. ] e4[ The month-to-month rise in the index was the largest since April, ] e5[ Eurostat said. ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is one of the predominant theories for discourse analysis, where a document is represented by a constituency tree with discourserelated annotation. As illustrated in Figure 1 , the paragraph is split to segments named Elementary Discourse Units (EDUs), as the leaf nodes of the tree, and they are further connected by rhetorical relations (e.g., Elaboration, Attribution) to form larger text spans until the entire document is included. The spans are further categorized to Nucleus (the core part) or Satellite (the subordinate part) based on their relative importance in the rhetorical relations. Thus, document-level RST discourse parsing consists of four sub-tasks: EDU segmentation, tree structure construction, nuclearity determination, and relation classification. Since discourse parsing provides structural information of the narrative flow, downstream natural language processing applications, such as reading comprehension (Gao et al., 2020) , sentiment analysis (Bhatia et al., 2015) , and text summarization (Liu and Chen, 2019) , can benefit from incorporating semantic-related information.",
"cite_spans": [
{
"start": 34,
"end": 59,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF28"
},
{
"start": 993,
"end": 1011,
"text": "(Gao et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 1033,
"end": 1054,
"text": "(Bhatia et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 1080,
"end": 1100,
"text": "(Liu and Chen, 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 226,
"end": 234,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
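To make the tree representation above concrete, the following is a minimal sketch of an RST constituency node as a Python data structure; the class and field names are our own illustration, not an artifact of the paper.

```python
# Illustrative data structure for an RST constituency tree: leaves hold EDUs,
# internal nodes carry a nuclearity pattern and a rhetorical relation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RSTNode:
    left: Optional["RSTNode"] = None   # left child span
    right: Optional["RSTNode"] = None  # right child span
    nuclearity: str = ""               # e.g. "NS", "SN", "NN" (internal nodes)
    relation: str = ""                 # e.g. "Elaboration", "Attribution"
    edu: str = ""                      # leaf nodes hold one EDU's text

# A two-EDU fragment in the style of Figure 1: "Eurostat said." attributes the claim.
tree = RSTNode(
    left=RSTNode(edu="The month-to-month rise in the index was the largest since April,"),
    right=RSTNode(edu="Eurostat said."),
    nuclearity="NS",
    relation="Attribution",
)
```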
{
"text": "RST discourse parsing has been an active research area, especially since neural approaches and large-scale pre-trained language models were introduced. On the test set of the English RST benchmark (Carlson et al., 2002) , the performance of automatic parsing is approaching that of human annotators. However, compared with other offthe-shelf text processing applications like machine translation, RST parsers are still not readily applicable to massive and diverse samples due to the following challenges: (1) Most parsers take EDU segmentation as a pre-requisite data preparation step, and only conduct evaluations on samples with gold EDU segmentation. Thus it is not straightforward to utilize them to parse raw documents.",
"cite_spans": [
{
"start": 197,
"end": 219,
"text": "(Carlson et al., 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Parsers are primarily optimized and evaluated in English, and are not applicable on multilingual scenarios/tasks. Human annotation under the RST scheme is labor-intensive and requires specialized linguistic knowledge, resulting in a shortage of training data especially in low resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(3) Data sparsity also leads to limited generalization capabilities in terms of topic domain and language variety, as the monolingual discourse treebanks usually concentrate on a specific domain. For instance, the English RST corpus is comprised of Wall Street Journal news articles, thus its parser might not perform well on scientific articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, to tackle the aforementioned challenges, we propose a joint framework for documentlevel multilingual RST discourse analysis. To achieve parsing from scratch, we enhance a topdown discourse parsing model with joint learning of EDU segmentation. Since the well-annotated RST treebanks in different languages share the same underlying linguistic theory, data-driven approaches can benefit from joint learning on multilingual RST resources (Braud et al., 2017a ). Inspired by the success of mixed multilingual training , we further propose a cross-translation data augmentation strategy to improve RST parsing in both language and domain coverage.",
"cite_spans": [
{
"start": 451,
"end": 471,
"text": "(Braud et al., 2017a",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct extensive experiments on RST treebanks from six languages: English, Spanish, Basque, German, Dutch, and Portuguese. Experimental results show that our framework achieves state-of-the-art performance in different languages and on all sub-tasks. We further investigate the model's zero-shot generalization capability, by assessing its performance via language-level cross validation. Additionally, the proposed framework can be readily extended to other languages with existing treebanks. The pre-trained model is built as an off-the-shelf application, and can be applied in an end-to-end manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "RST Discourse Parsing Discourse structures describe the organization of documents/sentences in terms of rhetorical/discourse relations. The Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) and the Penn Discourse TreeBank (PDTB) (Prasad et al., 2008) are the two most prominent theories of discourse analysis, where they are at doc-ument level and sentence level respectively. The structure-aware document analysis has shown to be useful for downstream natural language processing tasks, such as sentiment analysis (Bhatia et al., 2015) and reading comprehension (Gao et al., 2020) . Many studies focused on developing automatic computational solutions for discourse parsing. Statistical approaches utilized various linguistic characteristics such as N -gram and lexical features, syntactic and organizational features (Sagae, 2009; Hernault et al., 2010; Li et al., 2014; Heilman and Sagae, 2015) , and had obtained substantial improvement on the English RST-DT benchmark (Carlson et al., 2002) . Neural networks have been making inroads into discourse analysis frameworks, such as attention-based hierarchical encoding (Li et al., 2016) and integrating neural-based syntactic features into a transition-based parser (Yu et al., 2018) . explored encoderdecoder neural architectures on sentence-level discourse analysis, with a top-down parsing procedure. Recently, pre-trained language models were introduced to document-level discourse parsing, and boosted the overall performance .",
"cite_spans": [
{
"start": 174,
"end": 199,
"text": "(Mann and Thompson, 1988)",
"ref_id": "BIBREF28"
},
{
"start": 239,
"end": 260,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF36"
},
{
"start": 525,
"end": 546,
"text": "(Bhatia et al., 2015)",
"ref_id": "BIBREF1"
},
{
"start": 573,
"end": 591,
"text": "(Gao et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 829,
"end": 842,
"text": "(Sagae, 2009;",
"ref_id": "BIBREF38"
},
{
"start": 843,
"end": 865,
"text": "Hernault et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 866,
"end": 882,
"text": "Li et al., 2014;",
"ref_id": "BIBREF23"
},
{
"start": 883,
"end": 907,
"text": "Heilman and Sagae, 2015)",
"ref_id": "BIBREF16"
},
{
"start": 983,
"end": 1005,
"text": "(Carlson et al., 2002)",
"ref_id": "BIBREF6"
},
{
"start": 1131,
"end": 1148,
"text": "(Li et al., 2016)",
"ref_id": "BIBREF22"
},
{
"start": 1228,
"end": 1245,
"text": "(Yu et al., 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Multilingual Parsing Aside from the English treebank, datasets in other languages have also been introduced and studied, such as German (Stede and Neumann, 2014) , Dutch (Redeker et al., 2012) , and Basque (Iruskieta et al., 2013) . The main challenge of multilingual discourse parsing is the sparsity of annotated data. Braud et al. (2017a) conducted a harmonization of discourse treebanks across annotations in different languages, and Iruskieta and Braud (2019) used multilingual word embeddings to train systems on under-resourced languages. Recently, proposed a multilingual RST parser by utilizing cross-lingual language model and EDU segment-level translation, obtaining substantial performance gains.",
"cite_spans": [
{
"start": 136,
"end": 161,
"text": "(Stede and Neumann, 2014)",
"ref_id": "BIBREF40"
},
{
"start": 170,
"end": 192,
"text": "(Redeker et al., 2012)",
"ref_id": "BIBREF37"
},
{
"start": 206,
"end": 230,
"text": "(Iruskieta et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 321,
"end": 341,
"text": "Braud et al. (2017a)",
"ref_id": "BIBREF2"
},
{
"start": 438,
"end": 464,
"text": "Iruskieta and Braud (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "EDU Segmentation EDU segmentation identifies the minimal text spans to be linked by discourse relations. It is the first step in building discourse parsers, and often studied as a separated task in discourse analysis. Existing segmenters on the English discourse corpus achieve sentencelevel results with 95% F1 scores (Li et al., 2018) , while document-level segmentation is more challenging. Muller et al. (2019) proposed a discourse segmenter that supports multiple languages and schemes. Recently, taking segmentation as a se- Figure 2 : The architecture of the proposed joint document-level neural parser. A segmenter is first utilized to predict the EDU breaks, and a hierarchical encoder is used to generate the EDU representations. Then, the pointernetwork-based decoder and the relation classifier predict the tree structure, nuclearity, and rhetorical relations. t, e and h denote input tokens, encoded EDU representations, and decoded hidden states. The stack S is maintained by the decoder to track top-down depth-first span splitting. With each splitting pointer k, sub-spans e i:k and e k+1:j are fed to a classifier \u03a6 for nuclearity and relation determination.",
"cite_spans": [
{
"start": 319,
"end": 336,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 394,
"end": 414,
"text": "Muller et al. (2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 531,
"end": 539,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "quence labeling task was shown to be effective in reaching strong segmentation results. Fusing syntactic features to language models was also introduced (Desai et al., 2020) . In this work, to the best of our knowledge, we are the first to build a joint framework for document-level multilingual RST discourse analysis that supports parsing from scratch, and can be potentially extended to any language by text-level transformation.",
"cite_spans": [
{
"start": 153,
"end": 173,
"text": "(Desai et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this section, we elaborate on the proposed joint multilingual RST discourse parsing framework. We first integrate EDU segmentation into a topdown Transformer-based neural parser, and show how to leverage dynamic loss weights to control the balance of each sub-task. We then propose cross-translation augmentation to improve the multilingual and domain generalization capability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "The neural model consists of an EDU segmenter, a hierarchical encoder, a span splitting decoder for tree construction, and a classifier for nuclearity/relation determination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based Neural Parser",
"sec_num": "3.1"
},
{
"text": "The EDU segmentation aims to split a document into continuous units and is usually formulated to detect the span breaks. In this work, we conduct it as a sequence labeling task (Muller et al., 2019; Devlin et al., 2019) . Given a document containing n tokens, an embedding layer is employed to generate the token-level representations T = {t 1 , ..., t n }, in particular, a pre-trained language backbone is used to leverage the resourceful prior knowledge. Instead of detecting the beginning of each EDU as in previous work (Muller et al., 2019 ), here we propose to predict both EDU boundaries via tokenlevel classification. In detail, a linear layer is used to predict the type of each token in one EDU span, i.e., at the begin/intermediate/end position. 1 For extensive comparison, we also implement another segmenter by using a pointer mechanism (Vinyals et al., 2015) . Results in Table 3 show that the tokenlevel classification approach consistently produces better performance.",
"cite_spans": [
{
"start": 177,
"end": 198,
"text": "(Muller et al., 2019;",
"ref_id": "BIBREF31"
},
{
"start": 199,
"end": 219,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 525,
"end": 545,
"text": "(Muller et al., 2019",
"ref_id": "BIBREF31"
},
{
"start": 851,
"end": 873,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [
{
"start": 887,
"end": 894,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "EDU Segmentation",
"sec_num": "3.1.1"
},
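As a concrete illustration of the token-level segmenter described above, here is a minimal PyTorch sketch assuming a Hugging Face XLM-R backbone; the class name and the three-way begin/intermediate/end label scheme follow the text, but the code itself is our rendering, not the released implementation.

```python
# Token-level EDU segmenter: a linear layer over backbone states classifies
# each token as begin / intermediate / end of an EDU span.
import torch.nn as nn
from transformers import AutoModel

class EDUSegmenter(nn.Module):
    def __init__(self, backbone_name="xlm-roberta-base", num_labels=3):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone_name)
        self.classifier = nn.Linear(self.backbone.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # (batch, seq_len, 3) logits per token
```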
{
"text": "To obtain EDU representations with both local and global views, spans are hierarchically modeled from token and EDU-level to document-level. For the document containing n tokens, the initial EDU-level representations are calculated by averaging the token embeddings t i:j of each EDU, where i, j are its boundary indices. Then they are fed into a Bidirectional-GRU (Cho et al., 2014) to capture context-aware representations at the document level. Boundary information has been shown to be effective in previous discourse parsing studies , thus we also incorporate boundary embeddings from both ends of each EDU to implicitly exploit the syntactic features such as partof-speech (POS) and sentential information. Then, the ensemble representations are fed to a linear layer, and we obtain the final contextualized EDU representations E = {e 1 , ..., e m }, where m is the total number of EDUs.",
"cite_spans": [
{
"start": 365,
"end": 383,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Encoding",
"sec_num": "3.1.2"
},
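A sketch of the hierarchical encoding step follows: average-pooled token embeddings per EDU are concatenated with the two boundary embeddings, contextualized by a bidirectional GRU, and projected to the final EDU representations. The concatenation-based fusion and all dimensions are assumptions for illustration.

```python
# Hierarchical EDU encoder: token-level -> EDU-level -> document-level.
import torch
import torch.nn as nn

class HierarchicalEDUEncoder(nn.Module):
    def __init__(self, token_dim=768, hidden_dim=384):
        super().__init__()
        # input = [mean-pooled span; left boundary token; right boundary token]
        self.gru = nn.GRU(token_dim * 3, hidden_dim,
                          batch_first=True, bidirectional=True)
        self.proj = nn.Linear(hidden_dim * 2, hidden_dim)

    def forward(self, token_emb, edu_spans):
        # token_emb: (seq_len, token_dim); edu_spans: list of (i, j) boundary indices
        edus = []
        for i, j in edu_spans:
            avg = token_emb[i:j + 1].mean(dim=0)                       # EDU content
            edus.append(torch.cat([avg, token_emb[i], token_emb[j]]))  # + boundaries
        edus = torch.stack(edus).unsqueeze(0)    # (1, m, token_dim * 3)
        ctx, _ = self.gru(edus)                  # document-level context
        return self.proj(ctx).squeeze(0)         # E = {e_1, ..., e_m}
```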
{
"text": "The constituency parsing process is to analyze the input by breaking down it into sub-spans also known as constituents. In previous studies , with a generic constituency-based decoding framework, the discourse parsing results of depth-first and breadthfirst manner are similar. Here the decoder builds the tree structure in a top-down depth-first manner. Starting from splitting a span with the entire document, a pointer network iteratively decides the delimitation point to divide a span into two subspans, until it reaches the leaf nodes with only one EDU. As the parsing example illustrated in Figure 2 , a stack S is maintained to ensure the parsing is conducted under the top-down depth-first manner, and it is initialized with the span containing all EDUs e 1:m . At each decoding step, the span e i:j at the head of S is popped to the pointer network to decide the split point k based on the attention mechanism (Bahdanau et al., 2015) .",
"cite_spans": [
{
"start": 921,
"end": 944,
"text": "(Bahdanau et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 598,
"end": 607,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Tree Structure Construction",
"sec_num": "3.1.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s t,u = \u03c3(h t , e u ) for u = i...j (1) a t = softmax(s t ) = exp(s t,u ) j u=i exp(s t,u )",
"eq_num": "(2)"
}
],
"section": "Tree Structure Construction",
"sec_num": "3.1.3"
},
{
"text": "where \u03c3(x, y) is the dot product used as the attention scoring function. The span e i:j is split into two sub-spans e i:k and e k+1:j . The sub-spans that need further processing are pushed to the top of the stack S to maintain depth-first manner. The decoder iteratively parses the spans until S is empty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Structure Construction",
"sec_num": "3.1.3"
},
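The stack-based decoding loop can be summarized as follows; `score_split` stands in for the pointer network of Eqs. 1-2, and the greedy argmax over split points is an assumption of this sketch.

```python
# Top-down depth-first span splitting with an explicit stack S.
import torch

def parse_spans(num_edus, score_split):
    """score_split(i, j) -> tensor of (j - i) scores over split points k in [i, j-1].
    Returns the list of (i, k, j) splits in depth-first order."""
    stack = [(0, num_edus - 1)]   # initialized with the span of all EDUs e_{1:m}
    splits = []
    while stack:
        i, j = stack.pop()        # pop the span at the head of S
        if i == j:                # leaf node: a single EDU, nothing to split
            continue
        k = i + int(torch.argmax(score_split(i, j)))  # attention-selected split
        splits.append((i, k, j))                      # sub-spans e_{i:k}, e_{k+1:j}
        stack.append((k + 1, j))  # push right first so the left sub-span
        stack.append((i, k))      # is processed next (depth-first order)
    return splits
```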
{
"text": "At each decoding step, a bi-affine classifier is employed to predict the nuclearity and rhetorical relations of two sub-spans e i:k and e k+1:j split by the pointer network. More specifically, the nuclearity labels Nucleus (N) and Satellite (S) are attached together with rhetorical relation labels (e.g., NS-Evaluation, NN-Background). In particular, the EDU representations are first fed to a dense layer with Exponential Linear Unit (ELU) activation for latent feature transformation, and then a bi-affine layer (Dozat and Manning, 2017) with softmax activation is adopted to predict the nuclearity and rhetorical relations.",
"cite_spans": [
{
"start": 515,
"end": 540,
"text": "(Dozat and Manning, 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Nuclearity and Relation Classification",
"sec_num": "3.1.4"
},
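A minimal sketch of the bi-affine nuclearity/relation classifier described above: ELU-activated projections of the two sub-span representations feed a bilinear scoring layer. The dimensions and the label count are illustrative assumptions.

```python
# Bi-affine classifier over the two sub-spans produced by each split.
import torch.nn as nn

class BiAffineClassifier(nn.Module):
    def __init__(self, in_dim=384, mlp_dim=128, num_labels=42):
        super().__init__()
        # num_labels covers the attached nuclearity-relation pairs (e.g. NS-Evaluation)
        self.left = nn.Sequential(nn.Linear(in_dim, mlp_dim), nn.ELU())
        self.right = nn.Sequential(nn.Linear(in_dim, mlp_dim), nn.ELU())
        self.biaffine = nn.Bilinear(mlp_dim, mlp_dim, num_labels)

    def forward(self, span_left, span_right):
        # span_left, span_right: (batch, in_dim) representations of e_{i:k}, e_{k+1:j}
        return self.biaffine(self.left(span_left), self.right(span_right))  # logits
```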
{
"text": "The training objective of our framework is to minimize the sum of the loss L e of document-level EDU segmentation, the loss L s of parsing the correct tree structure, and the loss L l of predicting the corresponding nuclearity and relation labels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Weighted Loss",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Le(\u03b8e) = \u2212 N n=1 logP \u03b8e (yn|X)",
"eq_num": "(3)"
}
],
"section": "Dynamic Weighted Loss",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Ls(\u03b8s) = \u2212 T t=1 logP \u03b8s (yt|y1, ..., yt\u22121, X) (4) L l (\u03b8 l ) = \u2212 M m=1 R r=1 logP \u03b8 l (ym = r|X)",
"eq_num": "(5)"
}
],
"section": "Dynamic Weighted Loss",
"sec_num": "3.2"
},
{
"text": "L total (\u03b8) = \u03bb1Le(\u03b8e) + \u03bb2Ls(\u03b8s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Weighted Loss",
"sec_num": "3.2"
},
{
"text": "+ \u03bb3L l (\u03b8 l ) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Weighted Loss",
"sec_num": "3.2"
},
{
"text": "where X is the given document, \u03b8 e , \u03b8 s and \u03b8 l are the parameters of the EDU segmenter, the tree structure decoder, and the nuclearity-relation classifier, respectively. N and T are the total token number and span number. y 1 , ..., y t\u22121 denote the sub-trees that have been generated in the previous steps. M is the number of spans with at least two EDUs, and R is the total number of pre-defined nuclearityrelation labels. To find the balance of training multiple objectives, we adopt the adaptive weighting to dynamically control the weights of multiple tasks. Specifically, each task k is weighted by \u03bb k , where \u03bb k is calculated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Weighted Loss",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w k (i \u2212 1) = L k (i \u2212 1) L k (i \u2212 2) (7) \u03bb k (i) = K \u2022 exp(w k (i \u2212 1)/T emp) j exp(wj(i \u2212 1)/T emp)",
"eq_num": "(8)"
}
],
"section": "Dynamic Weighted Loss",
"sec_num": "3.2"
},
{
"text": "where i is the training iterations, K is the task number, and T emp represents the temperature value that smooths the loss from re-weighting. In our experimental settings, adopting dynamic weighted loss brought about relative 2.5% improvement on all sub-tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Weighted Loss",
"sec_num": "3.2"
},
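Eqs. 7-8 translate directly into a small helper; the per-task loss bookkeeping across iterations is an assumption of this sketch, and the temperature value shown is arbitrary.

```python
# Adaptive task weighting: weights follow the ratio of each task's last two
# losses (Eq. 7), softened by a temperature in a softmax scaled by K (Eq. 8).
import math

def task_weights(losses_prev, losses_prev2, temp=2.0):
    """losses_prev, losses_prev2: per-task losses at iterations i-1 and i-2."""
    K = len(losses_prev)
    w = [l1 / l2 for l1, l2 in zip(losses_prev, losses_prev2)]  # Eq. 7
    exps = [math.exp(wk / temp) for wk in w]
    denom = sum(exps)
    return [K * e / denom for e in exps]                        # Eq. 8, sums to K

# A task whose loss is falling faster gets a smaller weight:
print(task_weights([0.8, 1.0, 1.2], [1.0, 1.0, 1.2]))
```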
{
"text": "Data augmentation is an effective approach to tackle the drawbacks of low resource training by creating additional data from existing samples. For instance, back translation, a popular data augmentation method, is widely applied to tasks like machine translation (Edunov et al., 2018) . Since the wellannotated RST treebanks in different languages share the same underlying linguistic theory, datadriven approaches can benefit from joint learning on multilingual RST resources. In previous work, uniformed the multilingual task to a monolingual one by translating all discourse tree samples at the EDU level to English. In this paper, we propose a cross-translation data augmentation strategy. 2 The method with single direction translation converts all samples to one language in both the training and the inference stage (see Figure 3(a) ). This approach cannot exploit the capability of multilingual language backbones. It also increases the test time due to additional computation for translation. In contrast, cross-translation Table 2 : The collected RST discourse treebanks from 6 languages. We use the split of train, developmental and test set, as well as the data pre-processing following (Braud et al., 2017a) .",
"cite_spans": [
{
"start": 263,
"end": 284,
"text": "(Edunov et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 694,
"end": 695,
"text": "2",
"ref_id": null
},
{
"start": 1199,
"end": 1220,
"text": "(Braud et al., 2017a)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 828,
"end": 839,
"text": "Figure 3(a)",
"ref_id": "FIGREF1"
},
{
"start": 1033,
"end": 1040,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross Translation Augmentation",
"sec_num": "3.3"
},
{
"text": "will convert samples from one language to other languages, to produce multilingual training data (see Figure 3(b) ). Thus the model is able to process multilingual input during inference. As shown in Table 1 , adopting segment-level translation retains the original EDU segmentation as the source text, thus the converted sample in a target language will share the same discourse tree structure and nuclearity/relation labels. We postulate that this text-level transformation will bridge the gaps among different languages. Moreover, since different RST treebanks use articles from different domains , we speculate that adopting cross-translation can also increase domain coverage in the monolingual space, and further improve the model's overall generalization ability.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 113,
"text": "Figure 3(b)",
"ref_id": "FIGREF1"
},
{
"start": 200,
"end": 207,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Cross Translation Augmentation",
"sec_num": "3.3"
},
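The augmentation can be sketched as follows. `translate` is a placeholder for an external MT engine (the paper used Google's neural machine translation service, see footnote 2); the sample schema is our own illustration. Because translation operates EDU by EDU, the tree structure and nuclearity/relation labels carry over unchanged.

```python
# Cross-translation augmentation: translate each sample, EDU by EDU, into
# every other language, reusing the gold tree and labels as-is.
LANGS = ["en", "es", "eu", "de", "nl", "pt"]

def translate(text, src, tgt):
    raise NotImplementedError("plug in an MT engine, e.g. Google Cloud Translate")

def cross_translate(sample):
    """sample: dict with 'lang', 'edus' (list of EDU strings), 'tree' (splits/labels)."""
    augmented = [sample]
    for tgt in LANGS:
        if tgt == sample["lang"]:
            continue
        augmented.append({
            "lang": tgt,
            # segment-level translation keeps EDU boundaries aligned
            "edus": [translate(e, sample["lang"], tgt) for e in sample["edus"]],
            "tree": sample["tree"],   # structure and labels are unchanged
        })
    return augmented
```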
{
"text": "In this section, we elaborate on experiment settings of the multilingual RST segmentation and parsing task, compare our proposed framework with previous models, and conduct result analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "We constructed a multilingual data collection by merging RST treebanks from 6 languages: English (En) (Carlson et al., 2002) , Brazilian Portuguese (Pt) 3 (Cardoso et al., 2011; Pardo and Nunes, 2004; Collovini et al., 2007; Pardo and Seno, 2005) Table 4 : Document-level multilingual RST parsing comparison of baseline models and our framework. Sp., Nu., and Rel. denote span splitting, nuclearity determination, and relation classification, respectively. Micro F1 scores of RST Parseval (Marcu, 2000) are reported. Here gold EDU segmentation is used for baseline comparison.",
"cite_spans": [
{
"start": 102,
"end": 124,
"text": "(Carlson et al., 2002)",
"ref_id": "BIBREF6"
},
{
"start": 155,
"end": 177,
"text": "(Cardoso et al., 2011;",
"ref_id": "BIBREF4"
},
{
"start": 178,
"end": 200,
"text": "Pardo and Nunes, 2004;",
"ref_id": "BIBREF34"
},
{
"start": 201,
"end": 224,
"text": "Collovini et al., 2007;",
"ref_id": "BIBREF8"
},
{
"start": 225,
"end": 246,
"text": "Pardo and Seno, 2005)",
"ref_id": "BIBREF33"
},
{
"start": 489,
"end": 502,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multilingual Dataset",
"sec_num": "4.1"
},
{
"text": "2. We conducted label harmonization (Braud et al., 2017a) to uniform rhetorical definitions among different treebanks. The discourse trees were transformed into a binary format. Unlinked EUDs were removed. Following previous work, we reorganized the discourse relations to 18 categories, and attached the nuclearity labels (i.e., Nucleus-Satellite (NS), Satellite-Nucleus (SN), and Nucleus-Nucleus (NN)) to the relation labels (e.g., Elaboration, Attribution). For each language, we randomly extracted a set of samples for validation. The original training size was 1.1k, and became 6.7k with cross-translation augmentation. The sub-word tokenizer of the 'XLM-RoBERTa-base' (Conneau et al., 2020) is used for input pre-processing.",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "(Braud et al., 2017a)",
"ref_id": "BIBREF2"
},
{
"start": 674,
"end": 696,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Dataset",
"sec_num": "4.1"
},
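As an illustration of how nuclearity is attached to the relation labels, a toy construction follows; the relation names shown are a small subset of the 18 harmonized categories.

```python
# Building the attached nuclearity-relation label set described above.
NUCLEARITIES = ["NS", "SN", "NN"]
RELATIONS = ["Elaboration", "Attribution", "Background"]  # subset for illustration

ATTACHED_LABELS = [f"{n}-{r}" for n in NUCLEARITIES for r in RELATIONS]
print(ATTACHED_LABELS[:3])  # ['NS-Elaboration', 'NS-Attribution', 'NS-Background']
```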
{
"text": "For EDU segmentation evaluation, micro-averaged F1 score of token-level segment break classification as in (Muller et al., 2019) was used. For tree parsing evaluation, we applied the standard microaveraged F1 scores on Span (Sp.), Nuclearity-Satellite (Nu.), and Rhetorical Relation (Rel.), where Span describes the accuracy of tree structure construction, Nuclearity-Satellite and Rhetorical Relation assesses the ability to categorize the nuclearity and the discourse relations, respectively.",
"cite_spans": [
{
"start": 107,
"end": 128,
"text": "(Muller et al., 2019)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "We also adopted Full to evaluate the overall performance considering both Nuclearity-Satellite and Relation together with Span as in (Morey et al., 2017) . Following previous studies, we adopted the same 18 relations defined in (Carlson and Marcu, 2001) . We reported the tree parsing scores in two metrics: the Original Parseval (Morey et al., 2017) and the RST Parseval (Marcu, 2000) for ease of comparison with previous studies.",
"cite_spans": [
{
"start": 133,
"end": 153,
"text": "(Morey et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 228,
"end": 253,
"text": "(Carlson and Marcu, 2001)",
"ref_id": "BIBREF5"
},
{
"start": 330,
"end": 350,
"text": "(Morey et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 372,
"end": 385,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "The proposed framework was implemented with PyTorch (Paszke et al., 2019) and Hugging Face (Wolf et al., 2019) . We used 'XLM-RoBERTa-base' (Conneau et al., 2020) as the language backbone, and fine-tuned its last 8 layers during training. Documents were processed with the sub-word tokenization scheme. The dropout rate of the language backbone was set to 0.2 and that of the rest layers was 0.5. AdamW (Kingma and Ba, 2015) optimization algorithm was used, with the initial learning rate of 2e-5 and a linear scheduler (decay ratio=0.9). Batch size was set to 12. We trained each model for 15 epochs, and selected the best checkpoints on the validation set for evaluation. For each round of evaluation, we repeated the training 5 times with different random seeds and averaged their scores. The Table 5 : Multilingual parsing performance comparison of using gold and predicted EDU segmentation. Sp., Nu., Rel. and Seg. denote span splitting, nuclearity classification, relation determination, and segmentation, respectively. Micro F1 scores of RST Parseval (Marcu, 2000) and Original Parseval (Morey et al., 2017) are reported. Scores from the proposed framework are in bold for better readability.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 91,
"end": 110,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF42"
},
{
"start": 140,
"end": 162,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 1058,
"end": 1071,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF29"
},
{
"start": 1094,
"end": 1114,
"text": "(Morey et al., 2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 796,
"end": 803,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Configuration",
"sec_num": "4.3"
},
{
"text": "total trainable parameter size was 91M, where 56M parameters were from fine-tuning 'XLM-RoBERTabase'. All experiments were run on a single Tesla A100 GPU with 40GB memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Configuration",
"sec_num": "4.3"
},
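The optimizer setup above can be sketched as follows; note that the paper specifies a linear scheduler with decay ratio 0.9, which we approximate here with a per-epoch exponential decay as an assumption.

```python
# Optimizer and scheduler per the reported hyper-parameters (AdamW, lr 2e-5).
import torch

def build_optimizer(model):
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    # approximation: decay the learning rate by 0.9 per epoch
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    return optimizer, scheduler
```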
{
"text": "EDU segmentation is the first step of discourse analysis from scratch, and its accuracy is important for the follow-up parsing steps. Thus in this section, we evaluate the performance of our boundary detection segmenter, and compare it with state-of-theart document-level multilingual EDU segmenters (Braud et al., 2017b; Muller et al., 2019) . Additionally, we implemented our model with a pointer mechanism (Vinyals et al., 2015; Li et al., 2018) as a control study. From the results shown in Table 3 , our segmenter outperforms baselines significantly in all languages. This potentially results from adopting the stronger contextualized language backbone (Conneau et al., 2020) . Moreover, conducting EDU segmentation in a sequence labeling manner is more computationally efficient, and achieves higher scores than the pointer-based approach, which is consistent with the observation from a recent sentence-level study (Desai et al., 2020) .",
"cite_spans": [
{
"start": 300,
"end": 321,
"text": "(Braud et al., 2017b;",
"ref_id": "BIBREF3"
},
{
"start": 322,
"end": 342,
"text": "Muller et al., 2019)",
"ref_id": "BIBREF31"
},
{
"start": 409,
"end": 431,
"text": "(Vinyals et al., 2015;",
"ref_id": "BIBREF41"
},
{
"start": 432,
"end": 448,
"text": "Li et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 658,
"end": 680,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 922,
"end": 942,
"text": "(Desai et al., 2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 495,
"end": 502,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "EDU Segmentation Results",
"sec_num": "4.4"
},
{
"text": "We compare the proposed framework with several strong RST parsing baselines: Yu et al. (2018) Model Sp. Nu.",
"cite_spans": [
{
"start": 77,
"end": 93,
"text": "Yu et al. (2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Parsing Results",
"sec_num": "4.5"
},
{
"text": "Rel. Full (Zhang et al., 2020) 62.3 50.1 40.7 39.6 (Nguyen et al., 2021) 68 proposed a transition-based neural parser, obtaining competitive results in English. Iruskieta and Braud (2019) introduced a multilingual parser for 3 languages (English, Portuguese, and Spanish). proposed a multilingual parser that utilized cross-lingual representation (Cross Rep.), and adopted segment-level translation (Segment Trans.), and produced state-of-theart results on 6 languages. Aside from the proposed model (DMRST), we added an ablation study on the cross-translation strategy (DMRST w/o Cross Trans.). In this section, we use the gold EDU segmentation during the inference stage for a fair comparison to the baselines.",
"cite_spans": [
{
"start": 10,
"end": 30,
"text": "(Zhang et al., 2020)",
"ref_id": "BIBREF44"
},
{
"start": 51,
"end": 72,
"text": "(Nguyen et al., 2021)",
"ref_id": "BIBREF32"
},
{
"start": 161,
"end": 187,
"text": "Iruskieta and Braud (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Parsing Results",
"sec_num": "4.5"
},
{
"text": "From the results shown in Table 4 : (1) Adopting multilingual pre-trained language backbone significantly boosts the RST parsing performance. 2The multilingual model obtains further improvement with the cross-translation augmentation in all sub-tasks and languages. (3) All sub-tasks are improved substantially compared to previous mul- Table 7 : Zero-shot performance comparison of models w/ and w/o cross-translation strategy. Sp., Nu., Rel. and Seg. denote span splitting, nuclearity classification, relation determination, and segmentation, respectively. Micro F1 scores of RST Parseval (Marcu, 2000) and Original Parseval (Morey et al., 2017) are reported.",
"cite_spans": [
{
"start": 591,
"end": 604,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF29"
},
{
"start": 627,
"end": 647,
"text": "(Morey et al., 2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 4",
"ref_id": null
},
{
"start": 337,
"end": 344,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multilingual Parsing Results",
"sec_num": "4.5"
},
{
"text": "tilingual baselines (Braud et al., 2017a; . Moreover, our model also outperforms the state-of-the-art English RST parsers (see Table 6 ), demonstrating that fusing multilingual resources is beneficial for monolingual tasks.",
"cite_spans": [
{
"start": 20,
"end": 41,
"text": "(Braud et al., 2017a;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Multilingual Parsing Results",
"sec_num": "4.5"
},
{
"text": "In most previous work on RST parsing, EDU segmentation is regarded as a separate data preprocessing step, and the test samples with gold segmentation are used for evaluation. However, in practical cases, gold EDU segmentation is unavailable. Thus in this section, we assess the proposed framework with the predicted segmentation, simulating the real-world scenario. We compare our model DMRST to the model without cross-translation augmentation (DMRST w/o Cross Trans.). Aside from the common metric RST Parseval (Marcu, 2000) used in many prior studies, we also report test results on the Original Parseval (Morey et al., 2017) . From the results shown in Table 5 , we observe that: (1) EDU segmentation performance of the two models are similar. This is likely because using lexical and syntactic information is sufficient to obtain a reasonable result. (2) For both metrics, our framework achieves overall better performance in all sub-tasks and languages, especially in the lower resource languages like Basque and Dutch. (3) Since the tree structure and nuclearity/relation classification are calculated on the EDU segments, their accuracy are affected significantly by the incorrect segment predictions. For instance, when gold segmentation is provided, DMRST outperforms DMRST w/o Cross Trans. at all fronts. However, the former produces slightly lower scores than the latter in Portuguese, due to its suboptimal segmentation accuracy (92.8 vs. 93.7) . This also emphasizes the importance of EDU segmentation in a successful end-to-end RST parsing system.",
"cite_spans": [
{
"start": 513,
"end": 526,
"text": "(Marcu, 2000)",
"ref_id": "BIBREF29"
},
{
"start": 608,
"end": 628,
"text": "(Morey et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 1442,
"end": 1457,
"text": "(92.8 vs. 93.7)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 657,
"end": 664,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parsing from Scratch",
"sec_num": "4.6"
},
{
"text": "Incorporating discourse information is beneficial to various downstream NLP tasks, but only a small number of languages possess RST treebanks. Such treebanks have limited annotated samples, and it is difficult to extend their sample size due to annotation complexity. To examine if our proposed multilingual framework can be adopted to languages without any monolingual annotated sample (e.g., Italian, Polish), we conducted a zero-shot analysis via language-level cross validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis on Zero-Shot Generalization",
"sec_num": "5"
},
{
"text": "In each round, we select one language as the target language, and RST treebanks from the remaining 5 languages are used to train the multilingual parser. We then evaluate it on the test set from the target language. For example, we assume that a small set of Portuguese articles is to be parsed, and we only have training samples from the other 5 languages (i.e., En, Es, De, Nl, and Eu). Then zero-shot inference is conducted on Portuguese. As shown in Table 7 , compared with full training (see Table 5 ), all the zero-shot evaluation scores drop significantly, especially on English, since the English corpus is the most resourceful and wellannotated RST treebank. Aside from English, the other 5 languages result in acceptable performance for zero-shot inference. With the cross-translation augmentation, the proposed multilingual discourse parser achieves higher scores, this is because (1) the text transformation helps language-level generalization, and (2) the mixed data have a larger domain coverage. For example, combining samples from Basque (science articles) with English (finance news) makes model perform better on Portuguese (science and news articles). This also suggests that the multilingual parser can be extended to other languages via cross-translation augmentation from existing treebanks of 6 languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 454,
"end": 461,
"text": "Table 7",
"ref_id": null
},
{
"start": 497,
"end": 504,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis on Zero-Shot Generalization",
"sec_num": "5"
},
{
"text": "In this work, we proposed a joint framework for document-level multilingual RST discourse parsing, which supports EDU segmentation as well as discourse tree parsing. Experimental results showed that the proposed framework achieves stateof-the-art performance on document-level multilingual discourse parsing on six languages in all aspects. We also demonstrated its inference capability when limited training data is available, and it can be readily extended to other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "For the EDU that only contains one token, its begin and end position are the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The neural machine translation engine from Google is used: https://cloud.google.com/translate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The Portuguese RST dataset consists of 140 samples fromCST-News (Cardoso et al., 2011), 100 samples from Cor-pusTCC (Pardo and Nunes, 2004), 50 samples from Summ-it(Collovini et al., 2007), and 40 samples from Rhetalho(Pardo and Seno, 2005).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by funding from the Institute for Infocomm Research (I2R) under A*STAR ARES, Singapore. We thank Ai Ti Aw for the insightful discussions and Chlo\u00e9 Braud for sharing linguistic resources. We also thank the anonymous reviewers for their precious feedback to help improve and extend this piece of work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Better document-level sentiment analysis from RST discourse parsing",
"authors": [
{
"first": "Parminder",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2212--2218",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1263"
]
},
"num": null,
"urls": [],
"raw_text": "Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein. 2015. Better document-level sentiment analysis from RST discourse parsing. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 2212-2218, Lis- bon, Portugal. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Cross-lingual RST discourse parsing",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Maximin",
"middle": [],
"last": "Coavoux",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "292--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Braud, Maximin Coavoux, and Anders S\u00f8gaard. 2017a. Cross-lingual RST discourse parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 292-304, Valencia, Spain. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cross-lingual and cross-domain discourse segmentation of entire documents",
"authors": [
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "237--243",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2037"
]
},
"num": null,
"urls": [],
"raw_text": "Chlo\u00e9 Braud, Oph\u00e9lie Lacroix, and Anders S\u00f8gaard. 2017b. Cross-lingual and cross-domain discourse segmentation of entire documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 237-243, Vancouver, Canada. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cstnewsa discourse-annotated corpus for single and multidocument summarization of news texts in Brazilian Portuguese",
"authors": [
{
"first": "C",
"middle": [
"F"
],
"last": "Paula",
"suffix": ""
},
{
"first": "Erick",
"middle": [
"G"
],
"last": "Cardoso",
"suffix": ""
},
{
"first": "Mara",
"middle": [
"Luca"
],
"last": "Maziero",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Castro Jorge",
"suffix": ""
},
{
"first": "M",
"middle": [
"R"
],
"last": "Eloize",
"suffix": ""
},
{
"first": "Ariani",
"middle": [],
"last": "Seno",
"suffix": ""
},
{
"first": "Lucia",
"middle": [
"Helena"
],
"last": "Di Felippo",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Machado Rino",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Das Gracas Volpe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nunes",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thiago",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pardo",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 3rd RST Brazilian Meeting",
"volume": "",
"issue": "",
"pages": "88--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paula CF Cardoso, Erick G Maziero, Mara Luca Cas- tro Jorge, Eloize MR Seno, Ariani Di Felippo, Lu- cia Helena Machado Rino, Maria das Gracas Volpe Nunes, and Thiago AS Pardo. 2011. Cstnews- a discourse-annotated corpus for single and multi- document summarization of news texts in Brazilian Portuguese. In Proceedings of the 3rd RST Brazilian Meeting, pages 88-105.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discourse tagging reference manual",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "54",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson and Daniel Marcu. 2001. Discourse tag- ging reference manual. ISI Technical Report ISI-TR- 545, 54:56.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "RST discourse treebank. Linguistic Data Consortium",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Mary Ellen Okurowski, and Daniel Marcu. 2002. RST discourse treebank. Linguistic Data Consortium, University of Pennsylvania.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1179"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 1724- 1734, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Summ-it: Um corpus anotado com informa\u00e7 oes discursivas visandoa sumariza\u00e7 ao autom\u00e1tica",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "Collovini",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Thiago",
"suffix": ""
},
{
"first": "Juliana",
"middle": [
"Thiesen"
],
"last": "Carbonel",
"suffix": ""
},
{
"first": "Jorge",
"middle": [
"C\u00e9sar"
],
"last": "Fuchs",
"suffix": ""
},
{
"first": "L\u00facia",
"middle": [],
"last": "Coelho",
"suffix": ""
},
{
"first": "Renata",
"middle": [],
"last": "Rino",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vieira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of TIL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra Collovini, Thiago I Carbonel, Juliana Thiesen Fuchs, Jorge C\u00e9sar Coelho, L\u00facia Rino, and Renata Vieira. 2007. Summ-it: Um corpus anotado com informa\u00e7 oes discursivas visandoa sumariza\u00e7 ao au- tom\u00e1tica. Proceedings of TIL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the Association for Computational Linguistics. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On the development of the RST Spanish treebank",
"authors": [
{
"first": "Juan-Manuel",
"middle": [],
"last": "Iria Da Cunha",
"suffix": ""
},
{
"first": "Gerardo",
"middle": [],
"last": "Torres-Moreno",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sierra",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th Linguistic Annotation Workshop",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iria Da Cunha, Juan-Manuel Torres-Moreno, and Ger- ardo Sierra. 2011. On the development of the RST Spanish treebank. In Proceedings of the 5th Linguis- tic Annotation Workshop, pages 1-10.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Joint learning of syntactic features helps discourse segmentation",
"authors": [
{
"first": "Takshak",
"middle": [],
"last": "Desai",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Parag Pravin Dakle",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "1073--1080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takshak Desai, Parag Pravin Dakle, and Dan Moldovan. 2020. Joint learning of syntactic features helps discourse segmentation. In Proceedings of The 12th Language Resources and Evaluation Con- ference, pages 1073-1080.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "5th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency pars- ing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Discern: Discourse-aware entailment reasoning network for conversational machine reading",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chien-Sheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Steven",
"middle": [
"C",
"H"
],
"last": "Hoi",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Lyu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Gao, Chien-Sheng Wu, Jingjing Li, Shafiq Joty, Steven CH Hoi, Caiming Xiong, Irwin King, and Michael R Lyu. 2020. Discern: Discourse-aware entailment reasoning network for conversational ma- chine reading. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Fast rhetorical structure theory discourse parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1505.02425"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Heilman and Kenji Sagae. 2015. Fast rhetor- ical structure theory discourse parsing. arXiv preprint arXiv:1505.02425.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hilda: A discourse parser using support vector machine classification",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Prendinger",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Dialogue & Discourse",
"volume": "1",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Hernault, Helmut Prendinger, Mitsuru Ishizuka, et al. 2010. Hilda: A discourse parser using sup- port vector machine classification. Dialogue & Dis- course, 1(3).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The RST Basque treebank: an online search interface to check rhetorical relations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Iruskieta",
"suffix": ""
},
{
"first": "Mar\u00eda",
"middle": [
"J"
],
"last": "Aranzabe",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "Diaz de Ilarraza",
"suffix": ""
},
{
"first": "Itziar",
"middle": [],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Lersundi",
"suffix": ""
},
{
"first": "Oier",
"middle": [],
"last": "Lopez de Lacalle",
"suffix": ""
}
],
"year": 2013,
"venue": "4th workshop RST and discourse studies",
"volume": "",
"issue": "",
"pages": "40--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Iruskieta, Mar\u0131a J Aranzabe, Arantza Diaz de Ilarraza, Itziar Gonzalez, Mikel Lersundi, and Oier Lopez de Lacalle. 2013. The RST Basque tree- bank: an online search interface to check rhetorical relations. In 4th workshop RST and discourse stud- ies, pages 40-49.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "EusDisParser: improving an under-resourced discourse parser with cross-lingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Iruskieta",
"suffix": ""
},
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019",
"volume": "",
"issue": "",
"pages": "62--71",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2709"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Iruskieta and Chlo\u00e9 Braud. 2019. EusDisParser: improving an under-resourced discourse parser with cross-lingual data. In Proceedings of the Work- shop on Discourse Relation Parsing and Treebank- ing 2019, pages 62-71, Minneapolis, MN. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 3rd International Conference for Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference for Learning Representations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Segbot: A generic neural text segmentation model with pointer network",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Li, Aixin Sun, and Shafiq Joty. 2018. Segbot: A generic neural text segmentation model with pointer network. In Proceedings of the 27th International Joint Conference on Artificial Intelligence and the 23rd European Conference on Artificial Intelligence, IJCAI-ECAI-2018, Stockholm, Sweden.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Discourse parsing with attention-based hierarchical neural networks",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tianshi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Baobao",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "362--371",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Tianshi Li, and Baobao Chang. 2016. Discourse parsing with attention-based hierarchical neural net- works. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 362-371.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Text-level discourse dependency parsing",
"authors": [
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "25--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014. Text-level discourse dependency parsing. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 25-35.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A unified linear-time framework for sentence-level discourse parsing",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Prathyusha",
"middle": [],
"last": "Jwalapuram",
"suffix": ""
},
{
"first": "M",
"middle": [
"Saiful"
],
"last": "Bari",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4190--4200",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, and M Saiful Bari. 2019. A unified linear-time frame- work for sentence-level discourse parsing. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4190- 4200, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "End-to-end multi-task learning with attention",
"authors": [
{
"first": "Shikun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Johns",
"suffix": ""
},
{
"first": "Andrew J",
"middle": [],
"last": "Davison",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1871--1880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shikun Liu, Edward Johns, and Andrew J Davison. 2019. End-to-end multi-task learning with attention. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 1871- 1880.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Exploiting discourse-level segmentation for extractive summarization",
"authors": [
{
"first": "Zhengyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5415"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyuan Liu and Nancy Chen. 2019. Exploiting discourse-level segmentation for extractive summa- rization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 116-121, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multilingual neural RST discourse parsing",
"authors": [
{
"first": "Zhengyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6730--6738",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.591"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyuan Liu, Ke Shi, and Nancy Chen. 2020. Mul- tilingual neural RST discourse parsing. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 6730-6738, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Rhetorical structure theory: Toward a functional theory of text organization. Text-interdisciplinary Journal for the Study of Discourse",
"authors": [
{
"first": "William",
"middle": [
"C"
],
"last": "Mann",
"suffix": ""
},
{
"first": "Sandra",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "8",
"issue": "",
"pages": "243--281",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional the- ory of text organization. Text-interdisciplinary Jour- nal for the Study of Discourse, 8(3):243-281.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The rhetorical parsing of unrestricted texts: A surface-based approach",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational linguistics",
"volume": "26",
"issue": "3",
"pages": "395--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu. 2000. The rhetorical parsing of unre- stricted texts: A surface-based approach. Computa- tional linguistics, 26(3):395-448.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "How much progress have we made on RST discourse parsing? a replication study of recent results on the RST-DT",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Morey",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1319--1324",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1136"
]
},
"num": null,
"urls": [],
"raw_text": "Mathieu Morey, Philippe Muller, and Nicholas Asher. 2017. How much progress have we made on RST discourse parsing? a replication study of recent re- sults on the RST-DT. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1319-1324, Copenhagen, Denmark. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "ToNy: Contextual embeddings for accurate multilingual discourse segmentation of full documents",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Chlo\u00e9",
"middle": [],
"last": "Braud",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Morey",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Workshop on Discourse Relation Parsing and Treebanking",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {
"DOI": [
"10.18653/v1/W19-2715"
]
},
"num": null,
"urls": [],
"raw_text": "Philippe Muller, Chlo\u00e9 Braud, and Mathieu Morey. 2019. ToNy: Contextual embeddings for accurate multilingual discourse segmentation of full docu- ments. In Proceedings of the Workshop on Dis- course Relation Parsing and Treebanking 2019, pages 115-124, Minneapolis, MN. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "RST parsing from scratch",
"authors": [
{
"first": "Thanh-Tung",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Xuan-Phi",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1613--1625",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.128"
]
},
"num": null,
"urls": [],
"raw_text": "Thanh-Tung Nguyen, Xuan-Phi Nguyen, Shafiq Joty, and Xiaoli Li. 2021. RST parsing from scratch. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1613-1625, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Rhetalho: um corpus de refer\u00eancia anotado retoricamente. Anais do V Encontro de Corpora",
"authors": [
{
"first": "Thiago",
"middle": [
"Alexandre",
"Salgueiro"
],
"last": "Pardo",
"suffix": ""
},
{
"first": "Eloize",
"middle": [
"Rossi",
"Marques"
],
"last": "Seno",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "24--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thiago Alexandre Salgueiro Pardo and Eloize Rossi Marques Seno. 2005. Rhetalho: um corpus de refer\u00eancia anotado retoricamente. Anais do V Encontro de Corpora, pages 24-25.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Rela\u00e7\u00f5es ret\u00f3ricas e seus marcadores superficiais: An\u00e1lise de um corpus de textos cient\u00edficos em portugu\u00eas do brasil",
"authors": [
{
"first": "Thiago",
"middle": [
"A",
"S"
],
"last": "Pardo",
"suffix": ""
},
{
"first": "Maria",
"middle": [
"das",
"Gra\u00e7as",
"Volpe"
],
"last": "Nunes",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thiago AS Pardo and Maria das Gra\u00e7as Volpe Nunes. 2004. Rela\u00e7\u00f5es ret\u00f3ricas e seus marcadores superfi- ciais: An\u00e1lise de um corpus de textos cient\u00edficos em portugu\u00eas do brasil. Relat\u00f3rio T\u00e9cnico NILC.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Ad- vances in Neural Information Processing Systems, pages 8024-8035.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The Penn Discourse TreeBank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind Joshi, and Bon- nie Webber. 2008. The Penn Discourse TreeBank 2.0. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Lan- guage Resources Association (ELRA).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Multi-layer discourse annotation of a Dutch text corpus",
"authors": [
{
"first": "Gisela",
"middle": [],
"last": "Redeker",
"suffix": ""
},
{
"first": "Ildik\u00f3",
"middle": [],
"last": "Berzl\u00e1novich",
"suffix": ""
},
{
"first": "Nynke",
"middle": [],
"last": "Van Der Vliet",
"suffix": ""
},
{
"first": "Gosse",
"middle": [],
"last": "Bouma",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Egg",
"suffix": ""
}
],
"year": 2012,
"venue": "age",
"volume": "1",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gisela Redeker, Ildik\u00f3 Berzl\u00e1novich, Nynke Van Der Vliet, Gosse Bouma, and Markus Egg. 2012. Multi-layer discourse annotation of a Dutch text cor- pus. age, 1:2.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Analysis of discourse structure with syntactic dependencies and data-driven shiftreduce parsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 11th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "81--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae. 2009. Analysis of discourse structure with syntactic dependencies and data-driven shift- reduce parsing. In Proceedings of the 11th Inter- national Conference on Parsing Technologies, pages 81-84. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "An end-to-end document-level neural discourse parser exploiting multi-granularity representations",
"authors": [
{
"first": "Ke",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Zhengyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nancy",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.11169"
]
},
"num": null,
"urls": [],
"raw_text": "Ke Shi, Zhengyuan Liu, and Nancy F Chen. 2020. An end-to-end document-level neural discourse parser exploiting multi-granularity representations. arXiv preprint arXiv:2012.11169.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Potsdam commentary corpus 2.0: Annotation for discourse research",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Stede",
"suffix": ""
},
{
"first": "Arne",
"middle": [],
"last": "Neumann",
"suffix": ""
}
],
"year": 2014,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "925--929",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Stede and Arne Neumann. 2014. Potsdam commentary corpus 2.0: Annotation for discourse research. In LREC, pages 925-929.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in neural in- formation processing systems, pages 2692-2700.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Transition-based neural RST parsing with implicit syntax features",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "559--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Yu, Meishan Zhang, and Guohong Fu. 2018. Transition-based neural RST parsing with implicit syntax features. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 559-570.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A top-down neural architecture towards text-level parsing of discourse rhetorical structure",
"authors": [
{
"first": "Longyin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Peifeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6386--6395",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.569"
]
},
"num": null,
"urls": [],
"raw_text": "Longyin Zhang, Yuqing Xing, Fang Kong, Peifeng Li, and Guodong Zhou. 2020. A top-down neural archi- tecture towards text-level parsing of discourse rhetor- ical structure. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 6386-6395, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "One constituency tree with RST discourse annotation. e i , N and S denote elementary discourse units, nucleus, and satellite, respectively. Nuclearity and discourse relations are labeled on each span pair.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Overview of single direction translation (a) and cross-translation strategy (b",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": ""
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>English (En)</td><td/><td/><td/></tr><tr><td>-English RST-DT</td><td>309</td><td>38</td><td>38</td></tr><tr><td>-English GUM-DT</td><td>78</td><td>18</td><td>18</td></tr><tr><td>Portuguese (Pt)</td><td>256</td><td>38</td><td>38</td></tr><tr><td>Spanish (Es)</td><td>203</td><td>32</td><td>32</td></tr><tr><td>German (De)</td><td>142</td><td>17</td><td>17</td></tr><tr><td>Dutch (Nl)</td><td>56</td><td>12</td><td>12</td></tr><tr><td>Basque (Eu)</td><td>84</td><td>28</td><td>28</td></tr></table>",
"text": "TreebankLang. Train No. Dev No. Test No."
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td>English (En)</td><td/><td colspan=\"3\">Portuguese (Pt)</td><td/><td>Spanish (Es)</td><td/></tr><tr><td>Model</td><td>Sp.</td><td>Nu.</td><td>Rel.</td><td>Sp.</td><td>Nu.</td><td>Rel.</td><td>Sp.</td><td>Nu.</td><td>Rel.</td></tr><tr><td>Yu et al. (2018)</td><td>85.5</td><td>73.1</td><td>60.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Iruskieta and Braud (2019)</td><td>80.9</td><td>65.5</td><td>52.1</td><td>79.7</td><td>62.8</td><td>47.8</td><td>85.4</td><td>65.0</td><td>45.8</td></tr><tr><td>Cross Rep. (Liu et al., 2020)</td><td>87.5</td><td>74.7</td><td>63.0</td><td>86.3</td><td>71.7</td><td>60.0</td><td>86.2</td><td>71.1</td><td>54.4</td></tr><tr><td>Segment Trans. (Liu et al., 2020)</td><td>87.8</td><td>75.4</td><td>63.5</td><td>86.5</td><td>72.0</td><td>60.3</td><td>87.9</td><td>71.4</td><td>56.1</td></tr><tr><td>DMRST w/o Cross Trans.</td><td>87.9</td><td>75.3</td><td>64.0</td><td>86.5</td><td>73.3</td><td>61.5</td><td>88.2</td><td>73.7</td><td>60.3</td></tr><tr><td>DMRST (Our Framework)</td><td>88.2</td><td>76.2</td><td>64.7</td><td>87.0</td><td>74.3</td><td>62.1</td><td>88.7</td><td>75.7</td><td>63.4</td></tr><tr><td/><td/><td>German (De)</td><td/><td/><td>Dutch (Nl)</td><td/><td/><td>Basque (Eu)</td><td/></tr><tr><td>Model</td><td>Sp.</td><td>Nu.</td><td>Rel.</td><td>Sp.</td><td>Nu.</td><td>Rel.</td><td>Sp.</td><td>Nu.</td><td>Rel.</td></tr><tr><td>Cross Rep. (Liu et al., 2020)</td><td>83.6</td><td>62.2</td><td>45.1</td><td>85.9</td><td>64.5</td><td>49.4</td><td>85.1</td><td>65.8</td><td>47.7</td></tr><tr><td>Segment Trans. (Liu et al., 2020)</td><td>82.3</td><td>58.9</td><td>41.0</td><td>84.6</td><td>62.7</td><td>47.2</td><td>84.4</td><td>65.5</td><td>47.3</td></tr><tr><td>DMRST w/o Cross Trans.</td><td>83.1</td><td>62.2</td><td>45.9</td><td>85.5</td><td>64.4</td><td>50.6</td><td>80.2</td><td>59.8</td><td>42.1</td></tr><tr><td>DMRST (Our Framework)</td><td>84.3</td><td>64.1</td><td>47.3</td><td>85.6</td><td>66.3</td><td>52.3</td><td>85.1</td><td>67.2</td><td>48.3</td></tr></table>",
"text": "Document-level multilingual EDU Segmentation performance on 6 languages. Micro F1 scores are reported as in(Muller et al., 2019)."
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Performance comparison on the English RST treebank with predicted EDU segmentation."
},
"TABREF8": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">English (En)</td><td/><td colspan=\"2\">Portuguese (Pt)</td><td/><td colspan=\"2\">Spanish (Es)</td></tr><tr><td>Model</td><td>Sp.</td><td>Nu.</td><td>Rel. Seg.</td><td>Sp.</td><td>Nu.</td><td>Rel. Seg.</td><td>Sp.</td><td>Nu.</td><td colspan=\"2\">Rel. Seg.</td></tr><tr><td colspan=\"2\">Original Parseval (Morey et al., 2017)</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>DMRST w/o RST Parseval (Marcu, 2000)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>DMRST w/o Cross Trans.</td><td colspan=\"10\">57.8 40.7 27.0 78.4 60.4 44.4 31.8 80.9 58.1 42.8 28.3 76.6</td></tr><tr><td>DMRST (Our Framework)</td><td colspan=\"10\">63.4 46.5 30.2 82.7 64.5 50.0 37.7 83.7 65.2 49.3 34.3 82.2</td></tr><tr><td/><td/><td colspan=\"2\">German (De)</td><td/><td colspan=\"2\">Dutch (Nl)</td><td/><td colspan=\"2\">Basque (Eu)</td></tr><tr><td>Model</td><td>Sp.</td><td>Nu.</td><td>Rel. Seg.</td><td>Sp.</td><td>Nu.</td><td>Rel. Seg.</td><td>Sp.</td><td>Nu.</td><td colspan=\"2\">Rel. Seg.</td></tr><tr><td colspan=\"2\">Original Parseval (Morey et al., 2017)</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>DMRST w/o Cross Trans.</td><td colspan=\"8\">43.8 29.3 21.7 87.6 51.8 35.3 27.2 89.0 30.7 17.7</td><td>8.5</td><td>80.5</td></tr><tr><td>DMRST (Our Framework)</td><td colspan=\"10\">49.0 30.7 22.8 88.2 56.5 36.0 27.1 91.0 41.0 30.1 21.3 79.1</td></tr><tr><td>RST Parseval (Marcu, 2000)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>DMRST w/o Cross Trans.</td><td colspan=\"10\">66.1 45.4 30.1 87.6 70.6 50.6 36.4 89.0 55.5 32.5 16.8 80.5</td></tr><tr><td>DMRST (Our Framework)</td><td colspan=\"10\">68.9 46.2 30.3 88.2 73.9 52.3 36.1 91.0 60.3 43.3 28.3 79.1</td></tr></table>",
"text": "Cross Trans. 36.9 26.2 17.8 78.4 39.2 29.5 23.1 80.9 40.0 33.0 26.4 76.6 DMRST (Our Framework) 43.9 30.8 23.3 82.7 44.7 35.8 28.9 83.7 48.1 36.8 29.5 82.2"
}
}
}
}