{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:27:21.273472Z"
},
"title": "Improving Multi-Party Dialogue Discourse Parsing via Domain Integration",
"authors": [
{
"first": "Zhengyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Nancy",
"middle": [
"F"
],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "While multi-party conversations are often less structured than monologues and documents, they are implicitly organized by semantic level correlations across the interactive turns, and dialogue discourse analysis can be applied to predict the dependency structure and relations between the elementary discourse units, and provide feature-rich structural information for downstream tasks. However, the existing corpora with dialogue discourse annotation are collected from specific domains with limited sample sizes, rendering the performance of data-driven approaches poor on incoming dialogues without any domain adaptation. In this paper, we first introduce a Transformerbased parser, and assess its cross-domain performance. We next adopt three methods to gain domain integration from both data and language modeling perspectives to improve the generalization capability. Empirical results show that the neural parser can benefit from our proposed methods, and performs better on cross-domain dialogue samples.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "While multi-party conversations are often less structured than monologues and documents, they are implicitly organized by semantic level correlations across the interactive turns, and dialogue discourse analysis can be applied to predict the dependency structure and relations between the elementary discourse units, and provide feature-rich structural information for downstream tasks. However, the existing corpora with dialogue discourse annotation are collected from specific domains with limited sample sizes, rendering the performance of data-driven approaches poor on incoming dialogues without any domain adaptation. In this paper, we first introduce a Transformerbased parser, and assess its cross-domain performance. We next adopt three methods to gain domain integration from both data and language modeling perspectives to improve the generalization capability. Empirical results show that the neural parser can benefit from our proposed methods, and performs better on cross-domain dialogue samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text-level discourse parsing is to convert a piece of text into a structured format, by identifying the links and relations between Elementary Discourse Units (EDUs). Incorporating discourse information is proved beneficial for various natural language processing tasks such as machine comprehension (Narasimhan and Barzilay, 2015) and summarization (Xu et al., 2020) . Since discourse parsing is involved in capturing and comprehending various semantic and pragmatic phenomena as well as understanding the structural discourse properties, it is quite challenging for machines to conduct automatic processing. There are a series of studies that provide theories and data for developing computational solutions, such as the Penn Discourse Treebank (PDTB) (Prasad et al., 2008) with sentence-level annotation, and the Rhetorical Figure 1 : A multi-party dialogue example (Shi and Huang, 2019) with discourse link and relation annotation in the STAC Corpus (Asher et al., 2016) . \"Ack.\" is short for relation \"Acknowledgement\", \"QAP.\" for \"Question-Answer-Pair\", and \"Q-Elab.\" for \"Question-Elaboration\". The links in red form a non-projective structure (McDonald et al., 2005) .",
"cite_spans": [
{
"start": 300,
"end": 331,
"text": "(Narasimhan and Barzilay, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 350,
"end": 367,
"text": "(Xu et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 754,
"end": 775,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF17"
},
{
"start": 869,
"end": 890,
"text": "(Shi and Huang, 2019)",
"ref_id": "BIBREF18"
},
{
"start": 954,
"end": 974,
"text": "(Asher et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 1151,
"end": 1174,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 827,
"end": 835,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Structure Theory (RST) (Carlson et al., 2002) with document-level annotation. In RST treebanks, each processed passage is in a hierarchical constituencybased tree structure, and adjacent EDUs are merged to form larger spans 1 recursively (Li et al., 2014a) .",
"cite_spans": [
{
"start": 23,
"end": 45,
"text": "(Carlson et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 238,
"end": 256,
"text": "(Li et al., 2014a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, the Segmented Discourse Representation Theory (SDRT) is proposed for multi-party dialogue discourse parsing (Asher and Lascarides, 2005; Asher et al., 2016) , which is different from RST whose annotations are on documents. Additionally, SDRT-based annotations contain nonprojective links. For example, as shown in Figure 1 , a discourse structure will become non-projective when it is impossible to draw the relations on the same side without crossing (McDonald et al., 2005) . In this case, the constituency-based structure is not applicable. As a result, the SDRT proposed to transform dialogue discourse trees to a dependencybased structure, where EDUs are directly linked to their precedents without forming upper-level spans.",
"cite_spans": [
{
"start": 118,
"end": 146,
"text": "(Asher and Lascarides, 2005;",
"ref_id": "BIBREF1"
},
{
"start": 147,
"end": 166,
"text": "Asher et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 463,
"end": 486,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 324,
"end": 333,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since manual parsing is labor-intensive and timeconsuming, automatic discourse analysis under the Figure 2 : A multi-party dialogue example with its discourse annotation from the Molweni (Li et al., 2020) . SDRT theory raises research interest (Badene et al., 2019) . Previous models show reasonable results on benchmark treebanks (Shi and Huang, 2019) , and utilizing structural information benefits follow-up applications such as dialogue summarization (Feng et al., 2020) . However, domain generality is less studied yet important in practical use cases. Existing treebanks only contain limited training data (as shown in Table 1 ) and limited domain coverage. An SDRT parser trained on strategic game conversations (Asher et al., 2016) may not perform well on technical discussions (Li et al., 2020) , and the suboptimal parsing could further affect downstream task performance. Moreover, due to the annotation complexity, the labeled samples from various domains are not readily available for transfer learning (Yu et al., 2019) . In this paper, we evaluate and improve the crossdomain generality of neural dialogue discourse parsing: (1) we conduct a statistical analysis on existing dialogue discourse treebanks, and figure out the possible factors resulting in the gap across multiple domains from a data perspective; (2) we introduce a Transformer-based neural model for the dependency-based discourse parsing; (3) we propose three methods for better sharing the effective features across dialogue domains: utilizing prior language knowledge, cross-domain pre-training, and vocabulary refinement. Experimental results on STAC (Asher et al., 2016) and Molweni (Li et al., 2020) show that the parsing performance of single-domain training drops significantly on the out-of-domain samples, and it can be improved by our proposed methods.",
"cite_spans": [
{
"start": 187,
"end": 204,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 244,
"end": 265,
"text": "(Badene et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 331,
"end": 352,
"text": "(Shi and Huang, 2019)",
"ref_id": "BIBREF18"
},
{
"start": 455,
"end": 474,
"text": "(Feng et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 719,
"end": 739,
"text": "(Asher et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 786,
"end": 803,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 1016,
"end": 1033,
"text": "(Yu et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 1635,
"end": 1655,
"text": "(Asher et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 1668,
"end": 1685,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 2",
"ref_id": null
},
{
"start": 625,
"end": 632,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we conduct a statistical analysis of three text-level discourse treebanks for data-related factors that potentially affect model generality. RST Discourse Treebank (RST-DT) (Carlson et al., 2002) is the first corpus for text-level document parsing, and contains articles from the Wall Street Journal (WSJ). While it is not in the dialogue domain, we include it for an extensive comparison. STAC (Asher et al., 2016) is the first corpus for multi-party dialogue discourse parsing, and built on 1.2k strategic conversations where participants take discussion during playing an online game.",
"cite_spans": [
{
"start": 190,
"end": 212,
"text": "(Carlson et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 412,
"end": 432,
"text": "(Asher et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora Analysis",
"sec_num": "2"
},
{
"text": "Molweni (Li et al., 2020) follows the same annotation scheme as STAC, and the data (12k samples) are collected from an online forum, where people discuss technical topics about the Ubuntu system. Figure 2 . Numbers in brackets denote the order of link prediction, which is in a sequential manner. This produces a dependency structure.",
"cite_spans": [
{
"start": 8,
"end": 25,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpora Analysis",
"sec_num": "2"
},
{
"text": "The data statistics are summarized in Table 1 . 1Compared with Molweni, the RST-DT and STAC have much smaller sample sizes. (2) Samples from RST-DT have a larger EDU number than STAC and Molweni, resulting in deeper parsed tree structures. The tree depth is one of the major factors that affect parsing complexity. (3) Interestingly, while the word number of Molweni is two times larger than that of STAC, no significant difference in their average EDU numbers, resulting in a similar parsing complexity from a depth perspective. (4) The lexical distributions of STAC and Molweni are significantly different sharing a small portion of common vocabulary (Figure 3 ), as they focus on different conversation scenarios (Game vs. Ubuntu). (5) Despite the domain distinction between STAC and Molweni, their relation distributions are similar, except that frequencies of the relation (Clarification-Question and Comment) are quite different, probably because the online technical forums contain more question-clarification and comments ( Figure 4 ). While STAC and Molweni are annotated under the same SDRT theory, their lexical features and relation distributions are different, which we speculate will influence the domain generality.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 653,
"end": 662,
"text": "(Figure 3",
"ref_id": "FIGREF0"
},
{
"start": 1032,
"end": 1041,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Corpora Analysis",
"sec_num": "2"
},
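To make the comparison above concrete, the sketch below shows one way the word-level vocabulary overlap between two treebanks could be computed; the corpus loading and the toy utterances are hypothetical, and only the counting logic mirrors the analysis reported in Figure 3.

```python
from collections import Counter

def vocabulary(utterances):
    """Collect a word-level vocabulary (with counts) from tokenized EDUs."""
    counts = Counter()
    for edu in utterances:
        counts.update(token.lower() for token in edu.split())
    return counts

def overlap_ratio(vocab_a, vocab_b):
    """Fraction of vocab_a's word types that also appear in vocab_b."""
    shared = set(vocab_a) & set(vocab_b)
    return len(shared) / max(len(vocab_a), 1)

# Toy utterances standing in for STAC (game) and Molweni (Ubuntu) EDUs.
stac_vocab = vocabulary(["anyone want to trade sheep", "i have wood"])
molweni_vocab = vocabulary(["how do i mount the drive", "try sudo mount"])
print(overlap_ratio(stac_vocab, molweni_vocab))
```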
{
"text": "3 Dialogue Discourse Parsing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora Analysis",
"sec_num": "2"
},
{
"text": "Given a dialogue that has been segmented into a sequence of EDUs {u 0 , u 1 , ..., u n } where n is the EDU number, the discourse parser is applied to predict links and the corresponding relation types between the EDUs. The predicted structure constitutes a dependency tree, which is a special type of Directed Acyclic Graph (DAG). As in previous work (Shi and Huang, 2019) , each EDU is only linked to one of their precedent EDUs, and there are no backward links. As shown in Figure 6 , the parsing process can be conducted by a sequential scan of the EDUs. For one EDU u i , the model predicts a dependency link by estimating a probability distribution as P",
"cite_spans": [
{
"start": 352,
"end": 373,
"text": "(Shi and Huang, 2019)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 477,
"end": 485,
"text": "Figure 6",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "(u j |u i , U pair i ) where 0 \u2264 j < i and U pair i = {(u l , u k , r l,k )|0 \u2264 l < k < i}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
{
"text": "is the set of already predicted pairs before the current step i. The model then determines the relation type based on the predicted link P (r i,j |u i , u j ) where j < i and r i,j is in the range of [0, C] (C is the number of relation types). Following Li et al. (2014b), we add a root node as u 0 , and if one EDU is not linked from any preceding nodes, it is pointed to u 0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition",
"sec_num": "3.1"
},
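The sequential formulation above can be summarized in a short sketch. The `score_link` and `classify_relation` arguments are hypothetical placeholders for the model components introduced in Section 3.2; only the left-to-right scan and the attachment of each EDU to a single precedent follow the task definition.

```python
def parse_dialogue(edus, score_link, classify_relation):
    # Index 0 is reserved for the artificial root node u_0.
    nodes = ["<root>"] + list(edus)
    predicted = []  # list of (head j, dependent i, relation) triples
    for i in range(1, len(nodes)):
        # Distribution over all precedent nodes 0..i-1, conditioned on
        # the pairs predicted so far (U^pair_i in the text).
        scores = [score_link(nodes, i, j, predicted) for j in range(i)]
        j = max(range(i), key=lambda k: scores[k])
        relation = classify_relation(nodes, i, j)
        predicted.append((j, i, relation))
    return predicted
```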
{
"text": "In this paper, based on the sequential parsing process (Shi and Huang, 2019), we introduce a Transformer-based model for dialogue discourse parsing (as shown in Figure 5 ), which is comprised of the following components: Hierarchical Encoder. The encoder computes EDU global representations in a hierarchical manner. A Transformer encoder (Vaswani et al., 2017) is used for token-level encoding. 2",
"cite_spans": [
{
"start": 339,
"end": 361,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 161,
"end": 169,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
{
"text": "H token = TransformerEnc([t 0 , t 1 , ..., t m ]) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
{
"text": "where t denotes token, and m denotes token number. For the i-th EDU, its local representation h i edu is obtained by averaging 3 its corresponding tokens hidden states. Then the local EDU representations are fed to a bi-directional GRU component (Chung et al., 2014) for dialogue-level encoding, and we get final representations H with both local and global information.",
"cite_spans": [
{
"start": 246,
"end": 266,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
{
"text": "h i = [GRU F orward h i edu ; GRU Backward h i edu ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
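A minimal PyTorch sketch of the hierarchical encoder described by Equations 1-2 is given below, assuming token embeddings are already available; the layer sizes and the span-based pooling interface are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, d_model=768, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.token_encoder = nn.TransformerEncoder(layer, n_layers)
        self.dialogue_gru = nn.GRU(d_model, d_model // 2,
                                   batch_first=True, bidirectional=True)

    def forward(self, token_embeddings, edu_spans):
        # token_embeddings: (1, num_tokens, d_model); edu_spans: [(start, end)]
        h_token = self.token_encoder(token_embeddings)
        # Average the token states of each EDU to get its local representation.
        h_edu = torch.stack([h_token[0, s:e].mean(dim=0) for s, e in edu_spans])
        # A bi-directional GRU over the EDU sequence adds dialogue-level context.
        h, _ = self.dialogue_gru(h_edu.unsqueeze(0))
        return h.squeeze(0)  # (num_edus, d_model)
```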
{
"text": "Link Prediction. An attentive pointer network (Vinyals et al., 2015) is used for the link prediction. For the i-th EDU, we compute a list of attentive scores with a linear layer between the current node and each candidate h i where j < i. Then scores are normalized by softmax function to a distribution over the previous EDUs, and we obtain the linked EDU with the largest pointing probability.",
"cite_spans": [
{
"start": 46,
"end": 68,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s i,j = Linear([h i ; h j ])",
"eq_num": "(3)"
}
],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a i,j = exp(s i,j ) i j=0 exp(s i,j )",
"eq_num": "(4)"
}
],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
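The link-prediction head of Equations 3-4 can be sketched as follows; dimensions are illustrative, and the module assumes the EDU representations produced by the hierarchical encoder above.

```python
import torch
import torch.nn as nn

class LinkPointer(nn.Module):
    def __init__(self, d_model=768):
        super().__init__()
        # Linear scorer over the concatenation [h_i; h_j] (Eq. 3).
        self.scorer = nn.Linear(2 * d_model, 1)

    def forward(self, h, i):
        # h: (num_edus, d_model); i: index of the EDU being attached.
        candidates = h[:i]                    # precedent EDUs u_0 .. u_{i-1}
        current = h[i].expand_as(candidates)  # repeat h_i for each candidate
        s = self.scorer(torch.cat([current, candidates], dim=-1)).squeeze(-1)
        return torch.softmax(s, dim=-1)       # pointing distribution a_{i,j} (Eq. 4)
```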
{
"text": "Relation Classification. Given one linked pair is h i and h j , we concatenate and feed them to a relation classifier (a linear component):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r i,j = Linear([h i ; h j ])",
"eq_num": "(5)"
}
],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
{
"text": "then the output is a probability over the 17 predefined discourse relations. For link and relation prediction, the negative log-likelihood is adopted for the loss function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-Based Discourse Parser",
"sec_num": "3.2"
},
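Below is a minimal sketch of the relation classifier (Equation 5) together with the negative log-likelihood objective mentioned above; the way the two loss terms are summed is an assumption, since the text only states that negative log-likelihood is used for both predictions.

```python
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    def __init__(self, d_model=768, num_relations=17):
        super().__init__()
        self.classifier = nn.Linear(2 * d_model, num_relations)

    def forward(self, h_i, h_j):
        # Concatenate the two linked EDU representations and classify (Eq. 5).
        return self.classifier(torch.cat([h_i, h_j], dim=-1))

def parsing_loss(link_logprobs, gold_link, relation_logits, gold_relation):
    # Negative log-likelihood of the gold link (link_logprobs are assumed to be
    # log-probabilities over precedents), plus cross-entropy for the relation,
    # which is equivalent to NLL over a softmax.
    link_nll = -link_logprobs[gold_link]
    rel_nll = nn.functional.cross_entropy(relation_logits.unsqueeze(0),
                                          torch.tensor([gold_relation]))
    return link_nll + rel_nll
```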
{
"text": "Based on the corpora analysis in Section 2, to improve the domain-level generality, we investigate three methods to encourage the neural model to utilize the shared linguistic features from different dialogue domains. Utilizing Language Backbone. Large-scale pretrained language models provide feature-rich contextualized representations (Devlin et al., 2019) . In previous work, utilizing prior knowledge can boost the performance in parsing tasks, and also shows some but still limited generalization capability at domain and language level . Here, we select the 'RoBERTa-base' model (Liu et al., 2019) as the language backbone. Cross-Domain Pre-training. Following Gururangan et al. (2020), we conduct the masked language modeling pre-training with the joint data of STAC and Molweni. This can fuse dialogue-related linguistic features to the language backbone, which is not pre-trained on human conversations. Moreover, pre-training with multiple data resources can increase the domain coverage, and this step (parsing annotation is not required) can be conducted before the task-specified learning. Cross-Domain Vocabulary Refinement. STIn Section 2, we observe that the vocabulary overlap between STAC and Molweni is limited (see Figure 3 ). Dialogues in Molweni contain a certain amount of technical-related words, whereas STAC contains more game-related words. As the model may overfit corpus-specified lexical features, a vocabulary refinement is adopted by filtering out words that are in lower frequency (< 20 occurrence) and not shared by the two datasets.",
"cite_spans": [
{
"start": 338,
"end": 359,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF5"
},
{
"start": 586,
"end": 604,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 1236,
"end": 1245,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Cross-Domain Integration",
"sec_num": "4"
},
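As one illustration of the third method, the sketch below filters the vocabulary according to the rule stated above (words occurring fewer than 20 times and not shared by STAC and Molweni are mapped to an unknown token); the corpus token lists are assumed to be available, and the helper names are hypothetical.

```python
from collections import Counter

UNK = "<unk>"

def build_refined_vocab(stac_tokens, molweni_tokens, min_freq=20):
    # Keep a word if it is frequent enough OR shared by both corpora;
    # everything else is filtered out (replaced by UNK at refinement time).
    stac_counts, molweni_counts = Counter(stac_tokens), Counter(molweni_tokens)
    shared = set(stac_counts) & set(molweni_counts)
    keep = {w for w, c in (stac_counts + molweni_counts).items()
            if c >= min_freq or w in shared}
    return keep

def refine(tokens, keep):
    return [tok if tok in keep else UNK for tok in tokens]
```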
{
"text": "5 Experimental Result and Analysis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Domain Integration",
"sec_num": "4"
},
{
"text": "The proposed models were implemented using Py-Torch (Paszke et al., 2019) and Hugging Face 4 . Learning rate was set at 2e-5, and the AdamW (Loshchilov and Hutter, 2019) optimizer was ap- plied. We trained each model for 20 epochs, and selected the best checkpoints based on evaluation scores. Input dialogue sequences were processed with the sub-word tokenization scheme used in 'RoBERTa-base' (Liu et al., 2019) . At the inference stage, we adopted the microaveraged F1 score as the evaluation metric. Results of different settings are shown in Table 2-4. \"Link\" denotes link prediction, and \"Link+Rel.\" stands for a prediction that the dependency link and relation type are correct at the same time.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF16"
},
{
"start": 140,
"end": 169,
"text": "(Loshchilov and Hutter, 2019)",
"ref_id": "BIBREF13"
},
{
"start": 395,
"end": 413,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Configuration",
"sec_num": "5.1"
},
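A minimal sketch of this training configuration is shown below (AdamW, learning rate 2e-5, 20 epochs, checkpoint selection by validation score); the parser, data loaders, and evaluation function are placeholders rather than the released implementation.

```python
from torch.optim import AdamW

def train(parser, train_loader, dev_loader, evaluate, epochs=20, lr=2e-5):
    optimizer = AdamW(parser.parameters(), lr=lr)
    best_f1, best_state = 0.0, None
    for epoch in range(epochs):
        parser.train()
        for batch in train_loader:
            loss = parser(batch)           # model is assumed to return the parsing loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        f1 = evaluate(parser, dev_loader)  # e.g., micro-averaged link+relation F1
        if f1 > best_f1:                   # keep the best checkpoint
            best_f1, best_state = f1, parser.state_dict()
    return best_state, best_f1
```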
{
"text": "To compare performance between single-domain and joint-domain training, we obtain the upper bound parsing results on the merged data of two dialogue discourse treebanks (STAC and Molweni). As shown in Table 2 , models trained on merged data achieve favorable results on both corpora, and perform slightly better than single-domain training. Moreover, our Transformer-based model with the language backbone outperforms the previous state-of-the-art baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Joint Domain Evaluation",
"sec_num": "5.2"
},
{
"text": "To evaluate the effectiveness of the proposed domain integration methods, we conduct singlecorpus training and cross-corpus evaluation (each treebank represents one dialogue domain).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-Domain Evaluation",
"sec_num": "5.3"
},
{
"text": "For single-corpus training on STAC, as shown in Table 3 , the cross-domain performance on Molweni data of all models drops significantly, especially the relation prediction. Utilizing language back-bone brings substantial improvement. This shows that linguistic features can be shared by samples from different treebanks under the SDRT theory. Adopting cross-domain pre-training and vocabulary refinement further improve the performance, and do not affect the original domain. Combining three methods provides the parser a relative 25.3% improvement on the link+relation F1.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Cross-Domain Evaluation",
"sec_num": "5.3"
},
{
"text": "For single-corpus training on Molweni, as shown in Table 4 , baseline models obtain low link+relation F1 scores (around 18.0) on the STAC corpus. Noteworthy, the performance decrease of STAC(train)->Molweni(test) is smaller than that of Molweni(train)->STAC(test), we speculate that this may stem from a larger linguistic diversity in STAC data. The scores are significantly elevated by adopting language backbone, cross-domain pretraining, and vocabulary refinement, achieving a relative 60.6% improvement on link+relation F1.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 58,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Cross-Domain Evaluation",
"sec_num": "5.3"
},
{
"text": "In this paper, we investigated the domain-level generality of dialogue discourse parsing. Since existing corpora are collected from different conversation scenarios, models with single-domain training cannot perform well in other domains. The statistical analysis and experimental results suggest that domain adaptation or integration is necessary when neural parsers are applied in practical use cases, and utilizing prior language knowledge and adopting cross-domain pre-training can improve their generality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The merged spans are named as complex discourse units (CDUs) in which multiple EDUs and/or CDUs are grouped together to form a single argument to a discourse relation(Asher et al., 2016)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Due to space limitation, refer to(Vaswani et al., 2017) for more details of the Transformer architecture.3 We also adopt first-and-last sum and only-first sum for EDU representation, and the averaging performs best.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by funding from the Institute for Infocomm Research (I2R) under A*STAR ARES, Singapore. We thank Ai Ti Aw for the insightful discussions. We also thank the anonymous reviewers for their precious feedback to help improve and extend this piece of work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Discourse structure and dialogue acts in multiparty dialogue: the stac corpus",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Julie",
"middle": [],
"last": "Hunter",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Morey",
"suffix": ""
},
{
"first": "Benamara",
"middle": [],
"last": "Farah",
"suffix": ""
},
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "2721--2727",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher, Julie Hunter, Mathieu Morey, Bena- mara Farah, and Stergos Afantenos. 2016. Dis- course structure and dialogue acts in multiparty di- alogue: the stac corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2721-2727.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Logics of conversation",
"authors": [
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2005,
"venue": "Studies in natural language processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicholas Asher and A. Lascarides. 2005. Logics of conversation. In Studies in natural language pro- cessing. Cambridge University Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Weak supervision for learning discourse structure",
"authors": [
{
"first": "Sonia",
"middle": [],
"last": "Badene",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Thompson",
"suffix": ""
},
{
"first": "Jean-Pierre",
"middle": [],
"last": "Lorr\u00e9",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2296--2305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sonia Badene, Kate Thompson, Jean-Pierre Lorr\u00e9, and Nicholas Asher. 2019. Weak supervision for learn- ing discourse structure. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2296-2305.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "RST discourse treebank. Linguistic Data Consortium",
"authors": [
{
"first": "Lynn",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ellen"
],
"last": "Okurowski",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lynn Carlson, Mary Ellen Okurowski, and Daniel Marcu. 2002. RST discourse treebank. Linguistic Data Consortium, University of Pennsylvania.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS 2014 Workshop on Deep Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence mod- eling. In NIPS 2014 Workshop on Deep Learning, December 2014.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL2019",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL2019, pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dialogue discourse-aware graph convolutional networks for abstractive meeting summarization",
"authors": [
{
"first": "Xiachong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Xinwei",
"middle": [],
"last": "Geng",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.03502"
]
},
"num": null,
"urls": [],
"raw_text": "Xiachong Feng, Xiaocheng Feng, Bing Qin, Xin- wei Geng, and Ting Liu. 2020. Dialogue discourse-aware graph convolutional networks for abstractive meeting summarization. arXiv preprint arXiv:2012.03502.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Zihao",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Zekun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wenqiang",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2642--2652",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, and Bing Qin. 2020. Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with dis- course structure. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 2642-2652.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Recursive deep models for discourse parsing",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Rumeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2061--2069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Rumeng Li, and Eduard Hovy. 2014a. Recur- sive deep models for discourse parsing. In Proceed- ings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2061-2069.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Text-level discourse dependency parsing",
"authors": [
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "25--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014b. Text-level discourse dependency parsing. In Proceedings of the 52nd Annual Meeting of the Asso- ciation for Computational Linguistics, pages 25-35.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Multilingual neural RST discourse parsing",
"authors": [
{
"first": "Zhengyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Nancy",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6730--6738",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.591"
]
},
"num": null,
"urls": [],
"raw_text": "Zhengyuan Liu, Ke Shi, and Nancy Chen. 2020. Mul- tilingual neural RST discourse parsing. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 6730-6738, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "The International Conference on Learning Representations (ICLR2019",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. The International Con- ference on Learning Representations (ICLR2019).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Ribarov",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of human language technology conference and conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency pars- ing using spanning tree algorithms. In Proceedings of human language technology conference and con- ference on empirical methods in natural language processing, pages 523-530.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Machine comprehension with discourse relations",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1253--1262",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1121"
]
},
"num": null,
"urls": [],
"raw_text": "Karthik Narasimhan and Regina Barzilay. 2015. Ma- chine comprehension with discourse relations. In Proceedings of the 53rd Annual Meeting of the Asso- ciation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing, pages 1253-1262, Beijing, China. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NeurIPS2019",
"volume": "",
"issue": "",
"pages": "8026--8037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In Proceed- ings of NeurIPS2019, pages 8026-8037.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"L"
],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind K Joshi, and Bon- nie L Webber. 2008. The penn discourse treebank 2.0. In LREC. Citeseer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A deep sequential model for discourse parsing on multi-party dialogues",
"authors": [
{
"first": "Zhouxing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "7007--7014",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhouxing Shi and Minlie Huang. 2019. A deep sequen- tial model for discourse parsing on multi-party dia- logues. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7007-7014.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of NeurIPS2017",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS2017, pages 5998-6008.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Discourse-aware neural extractive text summarization",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhe",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5021--5031",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.451"
]
},
"num": null,
"urls": [],
"raw_text": "Jiacheng Xu, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Discourse-aware neural extractive text sum- marization. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 5021-5031, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Transfer learning with dynamic adversarial adaptation network",
"authors": [
{
"first": "Chaohui",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jindong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiqiang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Meiyu",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE International Conference on Data Mining (ICDM)",
"volume": "",
"issue": "",
"pages": "778--786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaohui Yu, Jindong Wang, Yiqiang Chen, and Meiyu Huang. 2019. Transfer learning with dynamic ad- versarial adaptation network. In 2019 IEEE Interna- tional Conference on Data Mining (ICDM), pages 778-786. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Word level vocabulary overlap of three textlevel discourse treebanks. The vocabulary sizes ofRST- DT, STAC, and Molweni are 17824, 3642, and 18936, respectively.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF1": {
"text": "Discourse relation distributions of STAC and Molweni. X axis denotes the label frequency.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF2": {
"text": "Overview of the dependency-based discourse parsing framework.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Illustration of parsing process on the dialogue example shown in",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "F1 scores of link and relation prediction with models trained on the joint data (STAC+Molweni).",
"html": null,
"content": "<table><tr><td>Train on STAC</td><td>Link</td><td>Link+Rel.</td></tr><tr><td colspan=\"3\">Deep Sequential Parser (Shi and Huang, 2019)</td></tr><tr><td>Test on STAC</td><td>73.1</td><td>55.7</td></tr><tr><td>Test on Molweni</td><td>58.6</td><td>26.2</td></tr><tr><td colspan=\"2\">Our Transformer-Based Parser</td><td/></tr><tr><td>Test on STAC</td><td>73.4</td><td>55.5</td></tr><tr><td>Test on Molweni</td><td>57.8</td><td>26.4</td></tr><tr><td colspan=\"2\">+ Utilizing Language Backbone</td><td/></tr><tr><td>Test on STAC</td><td>75.3 [2.5% \u2191]</td><td>56.9 [2.5% \u2191]</td></tr><tr><td>Test on Molweni</td><td>60.7 [5.0% \u2191]</td><td>31.5 [19.3% \u2191]</td></tr><tr><td colspan=\"2\">+ Cross-Domain Pre-training</td><td/></tr><tr><td>Test on STAC</td><td>75.1 [2.3% \u2191]</td><td>57.1 [2.8% \u2191]</td></tr><tr><td>Test on Molweni</td><td>62.1 [7.4% \u2191]</td><td>32.6 [23.4% \u2191]</td></tr><tr><td colspan=\"2\">+ Cross-Domain Vocabulary Refinement</td><td/></tr><tr><td>Test on STAC</td><td>75.3 [2.3% \u2191]</td><td>57.1 [2.8% \u2191]</td></tr><tr><td>Test on Molweni</td><td>63.2 [9.3% \u2191]</td><td>33.1 [25.3% \u2191]</td></tr></table>",
"num": null
},
"TABREF4": {
"type_str": "table",
"text": "Micro-F1 scores of link and relation prediction with models trained on STAC. Values in brackets denote relative increase over the base model.",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF6": {
"type_str": "table",
"text": "Micro-F1 scores of link and relation prediction with models trained on Molweni. Values in brackets denote relative increase over the base model.",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}