{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:31:26.072982Z"
},
"title": "MSAMSum: Towards Benchmarking Multi-lingual Dialogue Summarization",
"authors": [
{
"first": "Xiachong",
"middle": [],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": "[email protected]"
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dialogue summarization helps users capture salient information from various types of dialogues has received much attention recently. However, current works mainly focus on English dialogue summarization, leaving other languages less well explored. Therefore, we present a multilingual dialogue summarization dataset, namely MSAMSum, which covers dialogue-summary pairs in six languages. Specifically, we derive MSAMSum from the standard SAMSum (Gliwa et al., 2019) using sophisticated translation techniques and further employ two methods to ensure the integral translation quality and summary factual consistency. Given the proposed MSAMum, we systematically set up five multilingual settings for this task, including a novel mix-lingual dialogue summarization setting. To illustrate the utility of our dataset, we benchmark various experiments with pre-trained models under different settings and report results in both supervised and zero-shot manners. We also discuss some future works towards this task to motivate future researches 1 .",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Dialogue summarization helps users capture salient information from various types of dialogues has received much attention recently. However, current works mainly focus on English dialogue summarization, leaving other languages less well explored. Therefore, we present a multilingual dialogue summarization dataset, namely MSAMSum, which covers dialogue-summary pairs in six languages. Specifically, we derive MSAMSum from the standard SAMSum (Gliwa et al., 2019) using sophisticated translation techniques and further employ two methods to ensure the integral translation quality and summary factual consistency. Given the proposed MSAMum, we systematically set up five multilingual settings for this task, including a novel mix-lingual dialogue summarization setting. To illustrate the utility of our dataset, we benchmark various experiments with pre-trained models under different settings and report results in both supervised and zero-shot manners. We also discuss some future works towards this task to motivate future researches 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent years have witnessed increasing interest in dialogue summarization (Feng et al., 2021a; Tuggener et al., 2021) . It aims to distill the most important information from various types of dialogues, which can alleviate the problem of communication data overload. Towards this research direction, various datasets have been proposed to promote this task.",
"cite_spans": [
{
"start": 74,
"end": 94,
"text": "(Feng et al., 2021a;",
"ref_id": null
},
{
"start": 95,
"end": 117,
"text": "Tuggener et al., 2021)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The AMI (Carletta et al., 2005) and ICSI (Janin et al., 2003) datasets provide the initial opportunity for meeting summarization. With the advent of data-hungry neural models and pre-trained language models, Gliwa et al. (2019) come up with the first high quality large-scale dialogue summarization dataset, namely SAMSum, which resurges this Figure 1 : A multi-lingual meeting scenario, in which multinational people participate in one meeting concurrently. It is valuable to provide them with summaries in a preferred language.",
"cite_spans": [
{
"start": 8,
"end": 31,
"text": "(Carletta et al., 2005)",
"ref_id": "BIBREF2"
},
{
"start": 41,
"end": 61,
"text": "(Janin et al., 2003)",
"ref_id": "BIBREF14"
},
{
"start": 208,
"end": 227,
"text": "Gliwa et al. (2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 343,
"end": 351,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "task. Then, various datasets are proposed to meet different needs and scenarios (Chen et al., 2021a; Malykh et al., 2020; Rameshkumar and Bailey, 2020; Zhong et al., 2021; Zhu et al., 2021; Chen et al., 2021b; Zhang et al., 2021; Fabbri et al., 2021) . Despite the encouraging progresses achieved, current works overwhelmingly focused on English. Meanwhile, with the help of instantaneous translation systems 2 , a dialogue involving multinational participants becomes more and more common and frequent. Therefore, it is valuable to provide them with dialogue summaries in a preferred language.",
"cite_spans": [
{
"start": 80,
"end": 100,
"text": "(Chen et al., 2021a;",
"ref_id": null
},
{
"start": 101,
"end": 121,
"text": "Malykh et al., 2020;",
"ref_id": "BIBREF20"
},
{
"start": 122,
"end": 151,
"text": "Rameshkumar and Bailey, 2020;",
"ref_id": "BIBREF23"
},
{
"start": 152,
"end": 171,
"text": "Zhong et al., 2021;",
"ref_id": "BIBREF34"
},
{
"start": 172,
"end": 189,
"text": "Zhu et al., 2021;",
"ref_id": "BIBREF35"
},
{
"start": 190,
"end": 209,
"text": "Chen et al., 2021b;",
"ref_id": "BIBREF5"
},
{
"start": 210,
"end": 229,
"text": "Zhang et al., 2021;",
"ref_id": "BIBREF32"
},
{
"start": 230,
"end": 250,
"text": "Fabbri et al., 2021)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To this end, we propose a multi-lingual dialogue summarization task. The practical benefits of this task are twofold: it not only provides rapid access to the salient content, but also enables the dissemination of relevant content across participants of other languages. Intuitively, to achieve this goal, we need to answer two key questions, one is Where do we get data resources for this multi-lingual research? the other is How do we perform various multi-lingual settings?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the first question, we seek for potential available resources that can support our multi-lingual research. Although creating English datasets has proven feasible, the need for dialogues and summary-written experts in different languages makes the collection of multi-lingual datasets highly costing or even intractable. To mitigate this challenge, we devote our efforts to constructing the multi-lingual dataset via sophisticated translation techniques following Zhu et al. (2019) . Firstly, we select SAMSum (Gliwa et al., 2019) as our source English dataset because of its large scale and wide domain coverage. Then, we translate it into five other official languages of the United Nations via high-performance translation API, including Chinese, French, Arabic, Russian and Spanish. Furthermore, We employ two methods: round-trip translation and textual entailment to filter out lowquality translations and ensure the factual consistency at both the dialogue-level and summary-level. Finally, we obtain our MSAMSum dataset as the data resource for this multi-lingual research.",
"cite_spans": [
{
"start": 467,
"end": 484,
"text": "Zhu et al. (2019)",
"ref_id": null
},
{
"start": 513,
"end": 533,
"text": "(Gliwa et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the second question, given the wellconstructed MSAMsum dataset, we set up various settings for our multi-lingual dialogue summarization task, including ONE-TO-ONE, MANY-TO-ONE, ONE-TO-MANY and MANY-TO-MANY. The ONE-TO-ONE setting can be further divided into Mono-lingual and Cross-lingual settings. To further boost the research on multi-lingual dialogue summarization, we creatively propose one new setting, namely MIX-TO-MANY, which takes a mixlingual dialogue as input and produce summaries in different languages. This setting is in line with the real world scenario that multinational participants can use their mother tongue to communicate with each other by means of instantaneous translation systems (depicted in Figure 1 ). To sum up, we set up five settings for the research on the whole scene of multi-lingual dialogue summarization.",
"cite_spans": [],
"ref_spans": [
{
"start": 725,
"end": 733,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To illustrate the utility of our MSAMSum, we conduct extensive experiments under five multilingual settings based on the current multi-lingual pre-trained model mBART-50 (Tang et al., 2020) , and evaluate it in both supervised and zero-shot manners. The results reveal the feasibility of multilingual dialogue summarization task. The case study also shows that the multi-lingual model is able to produce fluent and factual consistency summaries in different languages. We further conclude several future works to prompt future researches.",
"cite_spans": [
{
"start": 170,
"end": 189,
"text": "(Tang et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multi-lingual summarization is a valuable research direction, which can benefit users from various countries (Cao et al., 2020; Wang et al., 2022) . Especially, cross-lingual summarization, which receives a document in a source language and produces a summary in a another language, has attracted lots of research attentions (Wan et al., 2010) . For a long time, pipeline systems combining both machine translation and summarization tools are used to solve this problem (Ouyang et al., 2019) . However, pipeline systems do have their own drawbacks, like error propagation and system latency. Therefore, researchers turn to end-to-end neural methods. Zhu et al. (2019) first propose two cross-lingual summarization datasets using machine translation techniques. Afterwards, various models (Zhu et al., 2020b; Xu et al., 2020; and datasets (Ladhak et al., 2020; Hasan et al., 2021; Varab and Schluter, 2021 ) are proposed for this task. These works have achieved great progresses and have proved the feasibility of end-to-end multi-lingual summarization. In this paper, for the first time, we study the dialogue summarization task under various multi-lingual settings.",
"cite_spans": [
{
"start": 109,
"end": 127,
"text": "(Cao et al., 2020;",
"ref_id": "BIBREF1"
},
{
"start": 128,
"end": 146,
"text": "Wang et al., 2022)",
"ref_id": null
},
{
"start": 325,
"end": 343,
"text": "(Wan et al., 2010)",
"ref_id": "BIBREF28"
},
{
"start": 470,
"end": 491,
"text": "(Ouyang et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 650,
"end": 667,
"text": "Zhu et al. (2019)",
"ref_id": null
},
{
"start": 788,
"end": 807,
"text": "(Zhu et al., 2020b;",
"ref_id": null
},
{
"start": 808,
"end": 824,
"text": "Xu et al., 2020;",
"ref_id": "BIBREF31"
},
{
"start": 838,
"end": 859,
"text": "(Ladhak et al., 2020;",
"ref_id": "BIBREF15"
},
{
"start": 860,
"end": 879,
"text": "Hasan et al., 2021;",
"ref_id": "BIBREF11"
},
{
"start": 880,
"end": 904,
"text": "Varab and Schluter, 2021",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-lingual Summarization",
"sec_num": "2.1"
},
{
"text": "The earlier publicly available meeting datasets AMI (Carletta et al., 2005) and ICSI (Janin et al., 2003) have prompted dialogue summarization for a long time. Recently, the introduction of SAMSum dataset has resurged this direction. Researchers propose various methods to tackle this problem by incorporating auxiliary information, modeling the interaction and dealing with long input sequences (Chen and Yang, 2020; Feng et al., 2021b; Zhu et al., 2020a; Feng et al., 2021c) . Additionally, various valuable datasets are carried out to meet different needs, which further accelerate the development of dialogue summarization (Zhong et al., 2021; Zhu et al., 2021; Zhang et al., 2021) . What is more, Mehnaz et al. (2021) study dialogue summarization under the Hindi-English code-switched setting and get the best performance based on multilingual pre-trained language models. Nonetheless, the current datasets and models are mainly tailored for English, which leave other languages less well explored. To mitigate this challenge, we propose the MSAMSum to study the multi-lingual dialogue summarization task. Figure 2 : Illustration of our data construction process. (a) Given the original English data in the SAMSum (Gliwa et al., 2019) , we translate it into another language (e.g., Chinese). Furthermore, we employ two quality controlling methods: round-trip translation and textual entailment. (c) For the first method, we back-translate the Chinese data into English and (d) calculate the ROUGE score between the original one and the back-translated one. (e) For the second one, we calculate the entailment score between back-translated summary and the original summary. If both scores exceed the pre-defined threshold, the translated dialogue-summary pair is retained.",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Carletta et al., 2005)",
"ref_id": "BIBREF2"
},
{
"start": 85,
"end": 105,
"text": "(Janin et al., 2003)",
"ref_id": "BIBREF14"
},
{
"start": 396,
"end": 417,
"text": "(Chen and Yang, 2020;",
"ref_id": "BIBREF3"
},
{
"start": 418,
"end": 437,
"text": "Feng et al., 2021b;",
"ref_id": "BIBREF8"
},
{
"start": 438,
"end": 456,
"text": "Zhu et al., 2020a;",
"ref_id": null
},
{
"start": 457,
"end": 476,
"text": "Feng et al., 2021c)",
"ref_id": "BIBREF9"
},
{
"start": 627,
"end": 647,
"text": "(Zhong et al., 2021;",
"ref_id": "BIBREF34"
},
{
"start": 648,
"end": 665,
"text": "Zhu et al., 2021;",
"ref_id": "BIBREF35"
},
{
"start": 666,
"end": 685,
"text": "Zhang et al., 2021)",
"ref_id": "BIBREF32"
},
{
"start": 1219,
"end": 1239,
"text": "(Gliwa et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 1111,
"end": 1119,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dialogue Summarization",
"sec_num": "2.2"
},
{
"text": "In this section, we introduce our MSAMSum dataset, including (1) Why we choose SAMSum dataset? (2) How we translate the original SAM-Sum dataset? (3) How we control the translation quality? and (4) Statistics for the newly created MSAMSum dataset. The whole dataset construction process is shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 307,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The MSAMSum Dataset",
"sec_num": "3"
},
{
"text": "Current dialogue summarization datasets are mainly tailored for English (Gliwa et al., 2019; Chen et al., 2021a,b; Zhang et al., 2021) , resulting in existing works not centring on other languages. In order to support our multi-lingual research, we follow Zhu et al. (2019) , which uses state-of-the-art machine translation techniques to construct datasets in different languages. Before launching the translation of the current dataset, we first need to choose a suitable dataset. After carefully comparing several datasets, we finally choose SAMSum (Gliwa et al., 2019) as our source English dataset according to the following two reasons: (1) it is a human-labeled large-scale dataset; (2) it covers a wide range of domains.",
"cite_spans": [
{
"start": 72,
"end": 92,
"text": "(Gliwa et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 93,
"end": 114,
"text": "Chen et al., 2021a,b;",
"ref_id": null
},
{
"start": 115,
"end": 134,
"text": "Zhang et al., 2021)",
"ref_id": "BIBREF32"
},
{
"start": 256,
"end": 273,
"text": "Zhu et al. (2019)",
"ref_id": null
},
{
"start": 551,
"end": 571,
"text": "(Gliwa et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Selection",
"sec_num": "3.1"
},
{
"text": "For each dialogue-summary pair in the selected English SAMSum dataset (shown in Figure 2 (a)), we translate the utterances and the summary to the target language (shown in Figure 2 (b)) via high-performance machine translation service 3 . To make our work more representative and generalized, we choose five other official languages of the United Nations as our translation target languages 4 . Note that for each dialogue, we perform the translation at the utterance-level since machine translation can achieve good results with utterances of moderate length. After this process, we can get dialogue-summary pairs in Chinese (Zh), French (Fr), Arabic (Ar), Russian (Ru), Spanish(ES) and also original English (En).",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 2",
"ref_id": null
},
{
"start": 172,
"end": 180,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Translation",
"sec_num": "3.2"
},
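Below is a minimal sketch of the utterance-level translation step described in Section 3.2. The `translate` function and the data layout are assumptions (any MT API mapping text from a source to a target language would do), not the authors' released code.

```python
# Illustrative sketch of utterance-level translation (assumed interfaces).
TARGET_LANGS = ["zh", "fr", "ar", "ru", "es"]  # plus the original English

def translate(text: str, src: str, tgt: str) -> str:
    """Stand-in for the commercial MT service used in the paper."""
    raise NotImplementedError("plug in your MT service here")

def translate_pair(dialogue: list[str], summary: str, tgt: str) -> dict:
    """Translate one dialogue-summary pair into the target language."""
    # Each utterance is translated separately, since MT works best on
    # moderately long inputs (as noted in Section 3.2).
    translated_utts = [translate(utt, "en", tgt) for utt in dialogue]
    translated_sum = translate(summary, "en", tgt)
    return {"dialogue": translated_utts, "summary": translated_sum, "lang": tgt}
```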
{
"text": "To ensure the data quality, we further leverage two quality controlling methods. First, we employ round-trip translation strategy at both dialogue and summary level to filter out low-quality translations. Second, at the summary level, we use textual entailment strategy to verify factual consistency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Controlling",
"sec_num": "3.3"
},
{
"text": "Round-trip translation is the process of translating a text into another language (forward translation), then translating the result back into the original language (back translation), using MT service. Given the translated dialogue-summary pair in target language (shown in Figure 2 (b)), we back-translate it into the original English version (shown in Figure 2 (c)). Afterward, we follow Zhu et al. (2019) and calculate the ROUGE-1 score (Lin, 2004) between the original dialogue-summary pair and the backtranslated dialogue-summary pair (shown in Figure 2 (d)). In detail, we first calculate the ROUGE-1 score for the corresponding utterances and the sum- mary respectively, and then get the final ROUGE-1 score by averaging all scores. If the final ROUGE-1 score exceeds the pre-defined threshold, the translated dialogue-summary pair (shown in Figure 2 (b)) is retained. Otherwise, the pair will be filtered 5 .",
"cite_spans": [
{
"start": 392,
"end": 409,
"text": "Zhu et al. (2019)",
"ref_id": null
},
{
"start": 442,
"end": 453,
"text": "(Lin, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 2",
"ref_id": null
},
{
"start": 355,
"end": 364,
"text": "Figure 2",
"ref_id": null
},
{
"start": 552,
"end": 561,
"text": "Figure 2",
"ref_id": null
},
{
"start": 852,
"end": 860,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Round-trip Translation",
"sec_num": "3.3.1"
},
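A minimal sketch of this round-trip filter is given below, assuming the `rouge-score` package and a `translate` callable like the placeholder in the earlier snippet. The ROUGE-1 threshold of 80.00 from Section 5.3 corresponds to 0.80 on the 0-1 scale returned by `rouge_score`.

```python
# Hedged sketch of the round-trip translation filter (Section 3.3.1).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def round_trip_ok(orig_utts, orig_sum, trans_utts, trans_sum,
                  tgt, translate, threshold=0.80):
    # Back-translate the target-language pair into English.
    back_utts = [translate(u, tgt, "en") for u in trans_utts]
    back_sum = translate(trans_sum, tgt, "en")
    # ROUGE-1 F1 between each original text and its back-translation.
    scores = [scorer.score(o, b)["rouge1"].fmeasure
              for o, b in zip(orig_utts, back_utts)]
    scores.append(scorer.score(orig_sum, back_sum)["rouge1"].fmeasure)
    # Keep the pair only if the averaged score clears the threshold.
    return sum(scores) / len(scores) >= threshold
```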
{
"text": "Since the summary serves as the core part of dialogue summarization, it not only needs coarsegrained surface-level high quality but also finegrained factual consistency (Huang et al., 2021) . To this end, we adopt the textual entailment method to access whether the translated summary is consistent with the original summary. Specifically, we obtain the entailment score for the translated English summary and the original English summary via state-of-the-art entailment model 6 , as shown in Figure 2 (e). If the entailment score exceeds the predefined threshold, the translated dialogue-summary pair is retained. Otherwise, the pair will be filtered.",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "(Huang et al., 2021)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 493,
"end": 501,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Textual Entailment",
"sec_num": "3.3.2"
},
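A minimal sketch of the entailment check is given below. It assumes the Hugging Face `roberta-large-mnli` checkpoint as a stand-in for the fairseq RoBERTa entailment model referenced in footnote 6, and the 0.9 threshold from Section 5.3.

```python
# Hedged sketch of the textual entailment filter (Section 3.3.2).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"  # assumed stand-in for the fairseq MNLI model
tok = AutoTokenizer.from_pretrained(name)
nli = AutoModelForSequenceClassification.from_pretrained(name).eval()

def entailment_ok(original_summary: str, back_translated_summary: str,
                  threshold: float = 0.9) -> bool:
    # Premise: back-translated summary; hypothesis: original summary.
    inputs = tok(back_translated_summary, original_summary,
                 return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(dim=-1)[0]
    # Look up the ENTAILMENT class index from the model config.
    entail_prob = probs[nli.config.label2id["ENTAILMENT"]].item()
    return entail_prob >= threshold
```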
{
"text": "Following the above steps, we can get translated and pure datasets in different languages. Note that these datasets are of different sizes, which is caused by the quality controlling process. To unify our experiments, we get the intersection of these datasets in six languages, resulting in the final MSAMSum dataset (statistics in Table 1 : Statistics for MSAMSum dataset. \"#\" means the number of dialogue-summary pairs, \"Avg.Turns\", \"Avg.Tokens\", \"Avg.Chars\" and \"Avg.Sum\" mean the average number of turns of dialogues, tokens of dialogues, characters of dialogues and tokens of summaries respectively. Note that sentences in Arabic tend to be shorter than those in other languages 8 .",
"cite_spans": [],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Datasets Alignment and Statistics",
"sec_num": "3.4"
},
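The alignment step amounts to intersecting the dialogue identifiers that survived quality control in every language; a small sketch is shown below, with the `filtered` mapping and the use of dialogue IDs as the join key being assumptions.

```python
# Hedged sketch of dataset alignment across the six languages (Section 3.4).
def align(filtered: dict[str, dict]) -> dict[str, dict]:
    """filtered maps a language code to {dialogue_id: dialogue-summary pair}."""
    langs = list(filtered)
    common_ids = set(filtered[langs[0]])
    for lang in langs[1:]:
        common_ids &= set(filtered[lang])  # keep IDs present in every language
    return {lang: {i: pairs[i] for i in common_ids}
            for lang, pairs in filtered.items()}
```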
{
"text": "In this section, we introduce various multi-lingual dialogue summarization settings, including a newly proposed MIX-TO-MANY setting. All settings are depicted in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-lingual Settings",
"sec_num": "4"
},
{
"text": "The ONE-TO-ONE setting can be viewed as a specific type of multi-lingual setting, where the model can merely handle the input of one language and the output of one language. According to whether the Figure 4 : Illustration of the mix-lingual dialogue construction process. Given one English dialogue, we first group utterances for the same participant and get the averaged round-trip translation ROUGE-1 score for each language.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "ONE-TO-ONE",
"sec_num": "4.1"
},
{
"text": "input and output belong to the same language, this setting can be further divided into Mono-lingual setting (shown in Figure 3(a) ) and Cross-lingual setting (shown in Figure 3(b) ).",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 129,
"text": "Figure 3(a)",
"ref_id": null
},
{
"start": 168,
"end": 179,
"text": "Figure 3(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "ONE-TO-ONE",
"sec_num": "4.1"
},
{
"text": "Experimental Setting: For mono-lingual experiments, we train six models based on {En\u2192En}, {Zh\u2192Zh}, {Fr\u2192Fr}, {Ar\u2192Ar}, {Ru\u2192Ru} and {Es\u2192Es} mono-lingual pairs respectively. For cross-lingual experiments, we train two models based on {En\u2192Zh} and {Zh\u2192En} cross-lingual pairs respectively. All eight models are tested in supervised manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ONE-TO-ONE",
"sec_num": "4.1"
},
{
"text": "MANY-TO-ONE models are able to process dialogues in various languages and output the summary in one language, as shown in Figure 3(c) . On the contrary, ONE-TO-MANY models have the ability to produce summaries in various languages given a fixed language input, as shown in Figure 3(d) . Both settings require models with multilingual capabilities.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 133,
"text": "Figure 3(c)",
"ref_id": null
},
{
"start": 273,
"end": 285,
"text": "Figure 3(d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "MANY-TO-ONE and ONE-TO-MANY",
"sec_num": "4.2"
},
{
"text": "Experimental Setting: For MANY-TO-ONE experiments, we train one model based on all {En\u2192En, Zh\u2192En, Fr\u2192En, Ar\u2192En, Ru\u2192En, Es\u2192En} pairs. For ONE-TO-MANY experiments, we train one model based on all {En\u2192En, En\u2192Zh, En\u2192Fr, En\u2192Ar, En\u2192Ru, En\u2192Es} pairs. These two models are tested in supervised manner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MANY-TO-ONE and ONE-TO-MANY",
"sec_num": "4.2"
},
{
"text": "As shown in Figure 3 (e), MANY-TO-MANY models can take dialogues in various languages as inputs and produce summaries in various languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "MANY-TO-MANY",
"sec_num": "4.3"
},
{
"text": "Thanks to the pre-trained multi-lingual language models Tang et al., 2020) , based on which, MANY-TO-MANY models can perform zero-shot summarization even though the inputoutput language pair is not seen during the training process.",
"cite_spans": [
{
"start": 56,
"end": 74,
"text": "Tang et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MANY-TO-MANY",
"sec_num": "4.3"
},
{
"text": "Experimental Setting: For MANY-TO-MANY experiments, we train one model based on all {En\u2192En, Zh\u2192Zh, Fr\u2192Fr, Ar\u2192Ar, Ru\u2192Ru, Es\u2192Es} pairs and test it in both supervised and zero-shot manners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MANY-TO-MANY",
"sec_num": "4.3"
},
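The training pairs for the settings in Sections 4.1-4.3 can be enumerated as in the sketch below; the language codes and the helper itself are illustrative shorthand (ONE-TO-ONE in fact corresponds to eight separately trained models, one per pair).

```python
# Hedged sketch of the (source, target) training pairs for each setting.
LANGS = ["en", "zh", "fr", "ar", "ru", "es"]

def training_pairs(setting: str) -> list[tuple[str, str]]:
    if setting == "one-to-one":      # six mono-lingual + two cross-lingual models
        return [(l, l) for l in LANGS] + [("en", "zh"), ("zh", "en")]
    if setting == "many-to-one":     # dialogues in any language, English summaries
        return [(l, "en") for l in LANGS]
    if setting == "one-to-many":     # English dialogues, summaries in any language
        return [("en", l) for l in LANGS]
    if setting == "many-to-many":    # mono-lingual pairs only; other directions
        return [(l, l) for l in LANGS]  # are evaluated zero-shot at test time
    raise ValueError(f"unknown setting: {setting}")
```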
{
"text": "Nowadays, dialogue participants from different countries can use their mother tongue to communicate with each other based on instantaneous translation systems. To investigate the possibility of generating summaries directly from mix-lingual dialogues (utterances in different languages), we come up with an innovative new setting: MIX-TO-MANY, as shown in Figure 3(f) . To this end, we first simulate the real scenario and construct mix-lingual dialogue-summary pairs, the whole construction process is shown in Figure 4 . Given each English dialogue in MSAMSum (shown in Figure 4(a) ), we first group utterances by participants, which results in several groups for different participants (shown in Figure 4(b) ). Then, for each group, we calculate the average roundtrip translation ROUGE-1 score for each language (shown in Figure 4(c) ). Afterward, we adopt a greedy search strategy to assign each participant a language (shown in Figure 4(d) ). The goal of our strategy is twofold: choose as many languages as possible and as high-quality translations as possi- ble. Finally, we can get the mix-lingual dialogue, in which utterances are in different languages. The number of mix-lingual dialogues is in line with MSAMSum. The statistics for mix-lingual dialogues are shown in Figure 5 . Finally, we pair the mix-lingual dialogue with summaries in different languages (shown in Figure 4 (e)). ",
"cite_spans": [],
"ref_spans": [
{
"start": 356,
"end": 367,
"text": "Figure 3(f)",
"ref_id": null
},
{
"start": 512,
"end": 521,
"text": "Figure 4",
"ref_id": null
},
{
"start": 573,
"end": 584,
"text": "Figure 4(a)",
"ref_id": null
},
{
"start": 700,
"end": 711,
"text": "Figure 4(b)",
"ref_id": null
},
{
"start": 826,
"end": 837,
"text": "Figure 4(c)",
"ref_id": null
},
{
"start": 934,
"end": 945,
"text": "Figure 4(d)",
"ref_id": null
},
{
"start": 1280,
"end": 1288,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 1381,
"end": 1389,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "MIX-TO-MANY",
"sec_num": "4.4"
},
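The greedy assignment can be sketched as below. The paper does not spell out the exact greedy criterion, so this is one assumed variant: participants are visited in order of their best achievable round-trip quality, and each is given an unused language when possible, so the dialogue mixes as many languages as possible while keeping high-quality translations.

```python
# Hedged sketch of the greedy language assignment used to build
# mix-lingual dialogues (Figure 4); the tie-breaking policy is assumed.
def assign_languages(per_speaker_scores: dict[str, dict[str, float]]) -> dict[str, str]:
    """per_speaker_scores[speaker][lang] = averaged round-trip ROUGE-1
    over that speaker's utterances translated into `lang`."""
    assignment, used = {}, set()
    # Visit speakers whose best language scores highest first.
    order = sorted(per_speaker_scores,
                   key=lambda s: max(per_speaker_scores[s].values()),
                   reverse=True)
    for speaker in order:
        scores = per_speaker_scores[speaker]
        unused = {l: v for l, v in scores.items() if l not in used}
        candidates = unused if unused else scores   # reuse a language if needed
        lang = max(candidates, key=candidates.get)  # highest-quality choice
        assignment[speaker] = lang
        used.add(lang)
    return assignment
```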
{
"text": "In this section, we first introduce our model mBART-50. After, we describe the evaluation metrics. Finally, we show the implementation details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We employ mBART-50 (Tang et al., 2020) as our multi-lingual summarizer, which is a Transformerbased model and pre-trained on a huge volume of multi-lingual data. It is derived from mBART and extends the language processing capabilities from 25 languages to 50 languages in total. The architecture of mBART-50 is based on the BART , which adopts position-wise feed-forward network, multi-head attention (Vaswani et al., 2017) , residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) modules to map the source dialogue into dis-tributed representations and further generate the target summary. To handle various input and output languages, mBART-50 needs to receive inputs with language identifiers (e.g., En, Zh) at both the encoder and the decoder side. According to the practical experience, we set both the source language identifier and target language identifier at the start of the source and target sequences respectively.",
"cite_spans": [
{
"start": 19,
"end": 38,
"text": "(Tang et al., 2020)",
"ref_id": "BIBREF24"
},
{
"start": 402,
"end": 424,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF27"
},
{
"start": 447,
"end": 464,
"text": "(He et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 489,
"end": 506,
"text": "(Ba et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Backbone Model",
"sec_num": "5.1"
},
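A minimal sketch of driving mBART-50 with explicit language identifiers through the Hugging Face interface (footnote 10) is given below. Here the source code is attached via `src_lang` and the target code is forced as the first generated token; the paper's own scheme places identifiers at the start of both the source and target sequences, which can be mimicked by editing the inputs. The sample dialogue string is purely illustrative.

```python
# Hedged sketch: cross-lingual generation with mBART-50 language identifiers.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(name)
model = MBartForConditionalGeneration.from_pretrained(name)

dialogue = "Amanda: I baked cookies. Do you want some? Jerry: Sure!"
tokenizer.src_lang = "en_XX"                     # source language identifier
inputs = tokenizer(dialogue, return_tensors="pt")
summary_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["zh_CN"],  # target identifier
    num_beams=5, min_length=10, max_length=150,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```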
{
"text": "The most widely used metrics for summarization are ROUGE scores (Lin, 2004) . However, the original ROUGE is specifically designed for English. To make this metric suitable for our experiments, we employ the multi-lingual ROUGE (Hasan et al., 2021) as our evaluation metrics, which takes segmentation and popular stemming algorithms for various languages into consideration 9 .",
"cite_spans": [
{
"start": 64,
"end": 75,
"text": "(Lin, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 228,
"end": 248,
"text": "(Hasan et al., 2021)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "5.2"
},
{
"text": "For MSAMSum construction, we set round-trip translation ROUGE-1 threshold to 80.00 and the textual entailment threshold to 0.9. For experiments, we use the standard mBART-50 implementation provided by Huggingface/transformers 10 . For fine-tuning process, the learning rate is set to 5e-06, the dropout rate is 0.1, the warmup is set to 2000 and the batch size is 4. In the test process, beam size is 5, the minimum decoded length is 10 and the maximum length is 150. All our experiments are conducted based on the Tesla-V100-32GB GPU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.3"
},
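For concreteness, the sketch below maps these hyperparameters onto a Hugging Face Seq2SeqTrainer configuration; the actual training loop used by the authors is not released, so the trainer setup, output directory and dataset plumbing are assumptions.

```python
# Hedged sketch of the fine-tuning configuration from Section 5.3.
from transformers import (MBartForConditionalGeneration, MBart50TokenizerFast,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

name = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(name)
model = MBartForConditionalGeneration.from_pretrained(name, dropout=0.1)

args = Seq2SeqTrainingArguments(
    output_dir="msamsum-mbart50",     # illustrative path
    learning_rate=5e-6,
    warmup_steps=2000,
    per_device_train_batch_size=4,
    predict_with_generate=True,
    generation_num_beams=5,
    generation_max_length=150,        # min_length=10 is passed at generation time
)

# trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=...,
#                          eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```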
{
"text": "In this section, we describe experimental results and show our analyses for different settings. Table 2 shows the results for ONE-TO-ONE setting, including both the mono-lingual and the cross-lingual experiments. According to the 52.98 ROUGE-1 score achieved by fine-tuning BARTlarge on full English SAMSum dataset (Chen and Yang, 2020) , we can see that our experiments achieve impressive results. For mono-lingual experiments, Ar\u2192Ar results perform worse than others to some extent, we attribute this to the fact that the Arabic language processing capability of the Table 2 : Test set results on the different language pairs of MSAMSum dataset by fine-tuning mBART-50 under the ONE-TO-ONE setting, where \"R\" is short for \"ROUGE\". pre-trained mBART-50 is relatively weak, which is in line with the size of original pre-training corpus . For cross-lingual experiments, surprisingly, we find that En\u2192Zh get better results compared with Zh\u2192Zh, which may due to the model's strong English comprehension ability. Table 3 and table 4 show results for MANY-TO-ONE and ONE-TO-MANY settings respectively. For both settings, we find that the results of the multi-lingual model varied less between pairs compared with ONE-TO-ONE models. For the MANY-TO-ONE model, the results of En\u2192En and Zh\u2192En are slightly worse than results of corresponding single ONE-TO-ONE models. This is because the MANY-TO-ONE model needs to handle multiple languages, which may cause the parameters interference problem (Lin et al., 2021) , and is therefore inferior to a single expert model. In contrast, the ONE-TO-MANY model improves the performance of both En\u2192En and En\u2192Zh results, which shows the ONE-TO-MANY training setting enhances the model's English understanding ability. Additionally, both Ar\u2192En and En\u2192Ar get relatively lower results, which coincide with the findings in ONE-TO-ONE experiments. Table 5 shows ROUGE-L results for the MANY-TO-MANY setting 11 . We test each language pair in the cartesian product of six languages, which results in two types of manners: supervised and zeroshot summarization. For the supervised manner (results in bold), almost all results show the best performance. For the zero-shot manner (results in italics), we find that despite the model is fine-tuned based on mono-lingual dialogue-summary pairs, it still has the strong ability to perform summarization across different languages. In line with previous experiments, we find the MANY-TO-MANY model that balances across various languages inevitably loses some performances compared with the ONE-TO-ONE model. Nonetheless, the MANY- TO-MANY model, which greatly reduces the deployment cost while preserving the performance, is an important research direction in the future. Table 6 shows the results for the MIX-TO-MANY setting. As the first step towards this direction, we find that current multi-lingual pre-trained models can obtain encouraging results. The Mix\u2192Es, Mix\u2192Zh, Mix\u2192Fr and Mix\u2192Ru models achieve comparable results with respect to the corresponding ONE-TO-ONE model. These results verify that despite the multi-lingual model only deals with one language at a time in the pre-training progress, after fine-tuning, it can handle mix-lingual inputs concurrently. Surprisingly, the Mix\u2192Ar results even surpass the performance of singe Ar\u2192Ar model. 
We think this is due to the mix-lingual dialogue essentially acts as an utterance-level code-switching data, which helps the representation space of the low-resource language align with other languages. This also inspire us that it would be better to generate the low-resource language summary directly from the mix-lingual dialogue. Figure 6 shows summaries in different languages generated by the ONE-TO-MANY model for an example English dialogue. We can see that all the generated summaries achieve good ROUGE performance, with English being the highest. We find that the multi-lingual model can generate fluent summaries while preserving the important information of the dialogue. Besides, the model also has the ability to accurately express participants information (e.g., Elliot, Jordan) and keep entities' factual consistency (e.g., 8 pm) across different languages. ",
"cite_spans": [
{
"start": 315,
"end": 336,
"text": "(Chen and Yang, 2020)",
"ref_id": "BIBREF3"
},
{
"start": 1487,
"end": 1505,
"text": "(Lin et al., 2021)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 2",
"ref_id": null
},
{
"start": 569,
"end": 576,
"text": "Table 2",
"ref_id": null
},
{
"start": 1010,
"end": 1029,
"text": "Table 3 and table 4",
"ref_id": "TABREF8"
},
{
"start": 1875,
"end": 1882,
"text": "Table 5",
"ref_id": "TABREF11"
},
{
"start": 2741,
"end": 2748,
"text": "Table 6",
"ref_id": "TABREF13"
},
{
"start": 3659,
"end": 3667,
"text": "Figure 6",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In this paper, we innovatively explore the multilingual dialogue summarization task. To this end, we carefully create MSAMSum as our testbed, which covers dialogue-summary pairs in six languages, including English, Chinese, Russian, French, Arabic and Spanish. Furthermore, we systematically set up five multi-lingual settings to benchmark extensive experiments. Our results indicate that various models can achieve impressive performance based on pre-trained models. Besides, the newly proposed MIX-TO-MANY setting also shows its effectiveness in low-resource scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "In the future, we think several concerns need to be addressed for this task. Firstly, multi-lingual models tend to underperform mono-lingual models; Secondly, low-resource languages tend to perform poorly; Thirdly, the difficulty of aligning finegrained information in different languages. Future works should pay particular attention to these concerns to facilitate this multi-lingual dialogue summarization research direction. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "As we propose a new multi-lingual dialogue summarization dataset and conduct experiments based on large pre-trained language models, we make several clarifications to address potential concerns:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Ethical Considerations",
"sec_num": null
},
{
"text": "\u2022 Dataset: Since our MSAMSum is derived from the SAMSum (Gliwa et al., 2019) , which is a well-constructed and human-labelled dataset. Therefore, our dataset inherits the contents of SAMSum and does not contain toxic information.",
"cite_spans": [
{
"start": 56,
"end": 76,
"text": "(Gliwa et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Ethical Considerations",
"sec_num": null
},
{
"text": "\u2022 Model: The experiments described in this paper are based on the mBART-50-large (Tang et al., 2020) and make use of V100 GPUs. Despite we run dozens of experiments, our results could help reduce parameter searches for future works. We also consider to alleviate such resource-hungry challenge by exploring light-weight distilled models.",
"cite_spans": [
{
"start": 81,
"end": 100,
"text": "(Tang et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Ethical Considerations",
"sec_num": null
},
{
"text": "B Round-trip Translation ROUGE Scores Table 7 shows the average ROUGE scores between the English data in SAMSum (Gliwa et al., 2019) and the round-trip translated English data. These results indicate the overall translation quality. C The Changing of Data Size Table 8 shows how the data size changes. After quality controlling process, we can get different data size for different languages (before alignment). After taking the intersection of different languages, we get our final MSAMSum (after alignment).",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "(Gliwa et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 7",
"ref_id": "TABREF16"
},
{
"start": 261,
"end": 268,
"text": "Table 8",
"ref_id": "TABREF17"
}
],
"eq_spans": [],
"section": "A Ethical Considerations",
"sec_num": null
},
{
"text": "D Detailed MANY-TO-MANY Results Table 9 shows detailed ROUGE-1, ROUGE-2 and ROUGE-L results for MANY-TO-MANY experiments in both supervised and zero-shot manners, as a supplement to Table 5 . Table 9 : Test set ROUGE-1/ROUGE-2/ROUGE-L results on the different language pairs of MSAMSum dataset by fine-tuning mBART-50 under the MANY-TO-MANY setting. Results in bold are achieved by supervised summarization. Results in italics are achieved by zero-shot summarization.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 39,
"text": "Table 9",
"ref_id": null
},
{
"start": 182,
"end": 189,
"text": "Table 5",
"ref_id": "TABREF11"
},
{
"start": 192,
"end": 199,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Ethical Considerations",
"sec_num": null
},
{
"text": "https://translatebyhumans.com/en/services/ interpretation/zoom/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://cloud.google.com/translate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.un.org/en/our-work/official-languages",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We show detailed round-trip translation ROUGE scores in the supplementary file.6 https://github.com/pytorch/fairseq/blob/main/examples /roberta/README.md7 We show the statistics for different parts before alignment in the supplementary file.8 https://forum.wordreference.com/threads/english-toarabic-length-change.1495268/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/csebuetnlp/xl-sum/tree/master/ multilingual_rouge_scoring 10 https://huggingface.co/facebook/mbart-large-50-manyto-many-mmt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We show all ROUGE-1, ROUGE-2 and ROUGE-L scores in the supplementary file.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key RD Program of China via grant 2020AAA0106502, National Natural Science Foundation of China (NSFC) via grant 61976073 and Shenzhen Foundational Research Funding (JCYJ20200109113441941).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Elliot no puede hablar porque est\u00e1 ocupado.Jordan va a un funeral de su colega, Brad, que tuvo un c\u00e1ncer de hep\u00e1tica.Eliot llamar\u00e1 a Jordan a las 8 p.m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spanish",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Layer normalization",
"authors": [
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
},
{
"first": "Jamie",
"middle": [
"Ryan"
],
"last": "Kiros",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hin- ton. 2016. Layer normalization. In arXiv.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multisumm: Towards a unified model for multilingual abstractive summarization",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jin-Ge",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Dian",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Cao, Xiaojun Wan, Jin-ge Yao, and Dian Yu. 2020. Multisumm: Towards a unified model for multi- lingual abstractive summarization. In The Thirty- Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applica- tions of Artificial Intelligence Conference, IAAI 2020. AAAI Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The ami meeting corpus: A pre-announcement",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Ashby",
"suffix": ""
},
{
"first": "Sebastien",
"middle": [],
"last": "Bourban",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Flynn",
"suffix": ""
},
{
"first": "Mael",
"middle": [],
"last": "Guillemot",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hain",
"suffix": ""
},
{
"first": "Jaroslav",
"middle": [],
"last": "Kadlec",
"suffix": ""
},
{
"first": "Vasilis",
"middle": [],
"last": "Karaiskos",
"suffix": ""
},
{
"first": "Wessel",
"middle": [],
"last": "Kraaij",
"suffix": ""
},
{
"first": "Melissa",
"middle": [],
"last": "Kronenthal",
"suffix": ""
}
],
"year": 2005,
"venue": "International workshop on machine learning for multimodal interaction",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Carletta, Simone Ashby, Sebastien Bourban, Mike Flynn, Mael Guillemot, Thomas Hain, Jaroslav Kadlec, Vasilis Karaiskos, Wessel Kraaij, Melissa Kronenthal, et al. 2005. The ami meeting corpus: A pre-announcement. In International workshop on ma- chine learning for multimodal interaction. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi-view sequenceto-sequence models with conversational structure for abstractive dialogue summarization",
"authors": [
{
"first": "Jiaao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Diyi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.336"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaao Chen and Diyi Yang. 2020. Multi-view sequence- to-sequence models with conversational structure for abstractive dialogue summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "and Kevin Gimpel. 2021a. Summscreen: A dataset for abstractive screenplay summarization",
"authors": [
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zewei",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.07091"
]
},
"num": null,
"urls": [],
"raw_text": "Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2021a. Summscreen: A dataset for ab- stractive screenplay summarization. arXiv preprint arXiv:2104.07091.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dialsumm: A real-life scenario dialogue summarization dataset",
"authors": [
{
"first": "Yulong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2105.06762"
]
},
"num": null,
"urls": [],
"raw_text": "Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang. 2021b. Dialsumm: A real-life scenario dialogue summarization dataset. arXiv preprint arXiv:2105.06762.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "ConvoSumm: Conversation summarization benchmark and improved abstractive summarization with argument mining",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Faiaz",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Imad",
"middle": [],
"last": "Rizvi",
"suffix": ""
},
{
"first": "Borui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "6866--6880",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.535"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Fabbri, Faiaz Rahman, Imad Rizvi, Borui Wang, Haoran Li, Yashar Mehdad, and Dragomir Radev. 2021. ConvoSumm: Conversation summa- rization benchmark and improved abstractive sum- marization with argument mining. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6866-6880, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2021a. A survey on dialogue summarization: Recent advances and new frontiers",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2021a. A survey on dialogue summarization: Recent ad- vances and new frontiers. ArXiv, abs/2107.03175.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dialogue discourse-aware graph model and data augmentation for meeting summarization",
"authors": [
{
"first": "Xiachong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Xinwei",
"middle": [],
"last": "Geng",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21. International Joint Conferences on Artificial Intelligence Organization. Main Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.24963/ijcai.2021/524"
]
},
"num": null,
"urls": [],
"raw_text": "Xiachong Feng, Xiaocheng Feng, Bing Qin, and Xinwei Geng. 2021b. Dialogue discourse-aware graph model and data augmentation for meeting summarization. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21. Inter- national Joint Conferences on Artificial Intelligence Organization. Main Track.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Language model as an annotator: Exploring DialoGPT for dialogue summarization",
"authors": [
{
"first": "Xiachong",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiaocheng",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Libo",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.117"
]
},
"num": null,
"urls": [],
"raw_text": "Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, and Ting Liu. 2021c. Language model as an annota- tor: Exploring DialoGPT for dialogue summarization. In Proceedings of the 59th Annual Meeting of the As- sociation for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization",
"authors": [
{
"first": "Bogdan",
"middle": [],
"last": "Gliwa",
"suffix": ""
},
{
"first": "Iwona",
"middle": [],
"last": "Mochol",
"suffix": ""
},
{
"first": "Maciej",
"middle": [],
"last": "Biesek",
"suffix": ""
},
{
"first": "Aleksander",
"middle": [],
"last": "Wawer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5409"
]
},
"num": null,
"urls": [],
"raw_text": "Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Alek- sander Wawer. 2019. SAMSum corpus: A human- annotated dialogue dataset for abstractive summa- rization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "XLsum: Large-scale multilingual abstractive summarization for 44 languages",
"authors": [
{
"first": "Tahmid",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Abhik",
"middle": [],
"last": "Bhattacharjee",
"suffix": ""
},
{
"first": "Md",
"middle": [],
"last": "Islam",
"suffix": ""
},
{
"first": "Kazi",
"middle": [],
"last": "Mubasshir",
"suffix": ""
},
{
"first": "Yuan-Fang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yong-Bin",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Rifat",
"middle": [],
"last": "Shahriyar",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.413"
]
},
"num": null,
"urls": [],
"raw_text": "Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Is- lam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021. XL- sum: Large-scale multilingual abstractive summariza- tion for 44 languages. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Online.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/CVPR.2016.90"
]
},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recogni- tion. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. IEEE Computer Society.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2021. The factual inconsistency problem in abstractive text summarization: A survey",
"authors": [
{
"first": "Yichong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.14839"
]
},
"num": null,
"urls": [],
"raw_text": "Yichong Huang, Xiachong Feng, Xiaocheng Feng, and Bing Qin. 2021. The factual inconsistency problem in abstractive text summarization: A survey. arXiv preprint arXiv:2104.14839.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The icsi meeting corpus",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Janin",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Baron",
"suffix": ""
},
{
"first": "Jane",
"middle": [],
"last": "Edwards",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Ellis",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gelbart",
"suffix": ""
},
{
"first": "Nelson",
"middle": [],
"last": "Morgan",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Peskin",
"suffix": ""
},
{
"first": "Thilo",
"middle": [],
"last": "Pfau",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. 2003. The icsi meeting corpus. In ICASSP. IEEE.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Wikilingua: A new benchmark dataset for multilingual abstractive summarization",
"authors": [
{
"first": "Faisal",
"middle": [],
"last": "Ladhak",
"suffix": ""
},
{
"first": "Esin",
"middle": [],
"last": "Durmus",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faisal Ladhak, Esin Durmus, Claire Cardie, and Kath- leen McKeown. 2020. Wikilingua: A new bench- mark dataset for multilingual abstractive summariza- tion. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: Findings.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and com- prehension. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning language specific sub-network for multilingual machine translation",
"authors": [
{
"first": "Zehui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.25"
]
},
"num": null,
"urls": [],
"raw_text": "Zehui Lin, Liwei Wu, Mingxuan Wang, and Lei Li. 2021. Learning language specific sub-network for multilingual machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multilingual denoising pretraining for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00343"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre- training for neural machine translation. Transactions of the Association for Computational Linguistics, 8.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "SumTitles: a summarization dataset with low extractiveness",
"authors": [
{
"first": "Valentin",
"middle": [],
"last": "Malykh",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Chernis",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Artemova",
"suffix": ""
},
{
"first": "Irina",
"middle": [],
"last": "Piontkovskaya",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.503"
]
},
"num": null,
"urls": [],
"raw_text": "Valentin Malykh, Konstantin Chernis, Ekaterina Arte- mova, and Irina Piontkovskaya. 2020. SumTitles: a summarization dataset with low extractiveness. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (On- line).",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Gupshup: An annotated corpus for abstractive summarization of opendomain code-switched conversations",
"authors": [
{
"first": "Laiba",
"middle": [],
"last": "Mehnaz",
"suffix": ""
},
{
"first": "Debanjan",
"middle": [],
"last": "Mahata",
"suffix": ""
},
{
"first": "Rakesh",
"middle": [],
"last": "Gosangi",
"suffix": ""
},
{
"first": "Uma",
"middle": [],
"last": "Sushmitha Gunturi",
"suffix": ""
},
{
"first": "Riya",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Gauri",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Amardeep",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Anish",
"middle": [],
"last": "Acharya",
"suffix": ""
},
{
"first": "Rajiv Ratn",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.08578"
]
},
"num": null,
"urls": [],
"raw_text": "Laiba Mehnaz, Debanjan Mahata, Rakesh Gosangi, Uma Sushmitha Gunturi, Riya Jain, Gauri Gupta, Amardeep Kumar, Isabelle Lee, Anish Acharya, and Rajiv Ratn Shah. 2021. Gupshup: An anno- tated corpus for abstractive summarization of open- domain code-switched conversations. arXiv preprint arXiv:2104.08578.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A robust abstractive system for cross-lingual summarization",
"authors": [
{
"first": "Jessica",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Boya",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Kathy",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1204"
]
},
"num": null,
"urls": [],
"raw_text": "Jessica Ouyang, Boya Song, and Kathy McKeown. 2019. A robust abstractive system for cross-lingual summarization. In Proceedings of the 2019 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), Minneapolis, Minnesota. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Storytelling with dialogue: A Critical Role Dungeons and Dragons Dataset",
"authors": [
{
"first": "Revanth",
"middle": [],
"last": "Rameshkumar",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Bailey",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.459"
]
},
"num": null,
"urls": [],
"raw_text": "Revanth Rameshkumar and Peter Bailey. 2020. Sto- rytelling with dialogue: A Critical Role Dungeons and Dragons Dataset. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Multilingual translation with extensible multilingual pretraining and finetuning",
"authors": [
{
"first": "Yuqing",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Chau",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.00401"
]
},
"num": null,
"urls": [],
"raw_text": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Na- man Goyal, Vishrav Chaudhary, Jiatao Gu, and An- gela Fan. 2020. Multilingual translation with exten- sible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Are we summarizing the right way? a survey of dialogue summarization data sets",
"authors": [
{
"first": "Don",
"middle": [],
"last": "Tuggener",
"suffix": ""
},
{
"first": "Margot",
"middle": [],
"last": "Mieskes",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Deriu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Cieliebak",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Third Workshop on New Frontiers in Summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Don Tuggener, Margot Mieskes, Jan Deriu, and Mark Cieliebak. 2021. Are we summarizing the right way? a survey of dialogue summarization data sets. In Pro- ceedings of the Third Workshop on New Frontiers in Summarization, Online and in Dominican Republic. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Mas-siveSumm: a very large-scale, very multilingual, news summarisation dataset",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Varab",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Schluter",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Varab and Natalie Schluter. 2021. Mas- siveSumm: a very large-scale, very multilingual, news summarisation dataset. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Online and Punta Cana, Do- minican Republic.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Cross-language document summarization based on machine translation quality prediction",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Huiying",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Contrastive aligned joint learning for multilingual summarization",
"authors": [
{
"first": "Danqing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jiaze",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Online. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.242"
]
},
"num": null,
"urls": [],
"raw_text": "Danqing Wang, Jiaze Chen, Hao Zhou, Xipeng Qiu, and Lei Li. 2021. Contrastive aligned joint learning for multilingual summarization. In Findings of the Asso- ciation for Computational Linguistics: ACL-IJCNLP 2021, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Jianfeng Qu, and Jie Zhou. 2022. A survey on cross-lingual summarization",
"authors": [
{
"first": "Jiaan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Duo",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yunlong",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Zhixu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2203.12515"
]
},
"num": null,
"urls": [],
"raw_text": "Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, and Jie Zhou. 2022. A survey on cross-lingual summarization. arXiv preprint arXiv:2203.12515.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Mixed-lingual pretraining for cross-lingual summarization",
"authors": [
{
"first": "Ruochen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Xuedong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruochen Xu, Chenguang Zhu, Yu Shi, Michael Zeng, and Xuedong Huang. 2020. Mixed-lingual pre- training for cross-lingual summarization. In Proceed- ings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, Suzhou, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "EmailSum: Abstractive email thread summarization",
"authors": [
{
"first": "Shiyue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.537"
]
},
"num": null,
"urls": [],
"raw_text": "Shiyue Zhang, Asli Celikyilmaz, Jianfeng Gao, and Mohit Bansal. 2021. EmailSum: Abstractive email thread summarization. In Proceedings of the 59th",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "QMSum: A new benchmark for querybased multi-domain meeting summarization",
"authors": [
{
"first": "Ming",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Da Yin",
"suffix": ""
},
{
"first": "Ahmad",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Mutethia",
"middle": [],
"last": "Zaidi",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Mutuma",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Jha",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Hassan Awadallah",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Dragomir",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.472"
]
},
"num": null,
"urls": [],
"raw_text": "Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, and Dragomir Radev. 2021. QMSum: A new benchmark for query- based multi-domain meeting summarization. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "MediaSum: A large-scale media interview dataset for dialogue summarization",
"authors": [
{
"first": "Chenguang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Zeng",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.naacl-main.474"
]
},
"num": null,
"urls": [],
"raw_text": "Chenguang Zhu, Yang Liu, Jie Mei, and Michael Zeng. 2021. MediaSum: A large-scale media interview dataset for dialogue summarization. In Proceedings",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"text": "Statistics for mix-lingual dialogues. (a) We show the language distribution by calculating the number of dialogues containing one specific language; (b) We provide the distribution of the number of languages included in the dialogue.",
"num": null,
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Example English dialogue in the MSAMSum dataset and summaries in different languages generated by the ONE-TO-MANY model. The scores in square brackets are R-1, R-2 and R-L respectively.",
"num": null,
"uris": null
},
"TABREF0": {
"html": null,
"content": "<table><tr><td>Original Dialogue-Summary Pair</td><td>Translated Dialogue-Summary Pair</td><td>Back-translated Dialogue-Summary Pair</td><td>Quality Controlling</td></tr><tr><td>Hey , do you have</td><td>!\"#$%&amp;'()</td><td>Hey, do you have</td><td/></tr><tr><td>Betty's number ?</td><td>*+,-</td><td>Betty's phone number?</td><td>\u2265T</td></tr><tr><td>Lemme check</td><td>./01234</td><td>let me check</td><td/></tr><tr><td>Sorry , can't find it .</td><td>CDE\"FDG4</td><td>sorry, I can't find it.</td><td>&lt;T</td></tr><tr><td>Fine.</td><td>=H@4</td><td>It doesn't matter.</td><td/></tr><tr><td>Ask Larry.</td><td>IIAB4</td><td>Ask Larry.</td><td/></tr><tr><td/><td/><td/><td>\u2265T</td></tr><tr><td>Hannah needs Betty's number but Amanda doesn't have it . She needs to contact Larry .</td><td>5678%&amp;'()*+\"9 :;&lt;=$4&gt;78?@AB</td><td>Hannah needs Betty's phone number, but Amanda doesn't. She needs to contact Larry.</td><td>&lt;T</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Many-to-One</td><td/><td/><td/><td/><td colspan=\"2\">Many-to-Many</td><td/><td colspan=\"2\">Mix-to-Many</td></tr><tr><td>English Dialogue</td><td>Chinese Dialogue</td><td>French Dialogue</td><td>...</td><td colspan=\"2\">English Dialogue</td><td/><td>Chinese Dialogue</td><td>French Dialogue</td><td>...</td><td colspan=\"2\">Multi-lingual Dialogue</td></tr><tr><td>English</td><td colspan=\"2\">Multi-lingual</td><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Multi-lingual</td></tr><tr><td>Summarizer</td><td colspan=\"2\">Summarizer</td><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Summarizer</td></tr><tr><td>English Summary</td><td colspan=\"2\">English Summary</td><td/><td>Chinese Summary</td><td>English Summary</td><td>...</td><td>English Summary</td><td>Chinese Summary</td><td>...</td><td>English Summary</td><td>Chinese Summary</td><td>...</td></tr><tr><td>(a)</td><td/><td>(c)</td><td/><td/><td/><td/><td/><td>)</td><td/><td/><td>(f)</td></tr><tr><td>Figure 3:</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": "Illustration of different multi-lingual settings. We set up five settings in total, according to the number of input and output languages the model can handle. Concretely, the ONE-TO-ONE is the basic setting, the MANY-TO-ONE model encodes N languages and decodes to English, while the ONE-TO-MANY model encodes English and decodes into N languages, the MANY-TO-MANY model encodes and decodes N languages. Besides, we originally explore one new MIX-TO-MANY setting, where the model takes a mix-lingual dialogue (utterances in a dialogue belongs to different languages) as input and outputs summaries in different languages.",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td/><td/><td>Train</td><td>Valid</td><td>Test</td></tr><tr><td/><td>#</td><td>5307</td><td>302</td><td>320</td></tr><tr><td/><td>Avg.Turns</td><td>11.01</td><td>10.48</td><td>11.15</td></tr><tr><td>En</td><td colspan=\"4\">Avg.Tokens 115.72 115.19 118.21 Avg.Sum 22.18 22.33 22.06</td></tr><tr><td>Zh</td><td>Avg.Chars Avg.Sum</td><td colspan=\"3\">242.08 237.39 246.95 34.65 35.36 35.08</td></tr><tr><td>Fr</td><td colspan=\"2\">Avg.Tokens 99.33 Avg.Sum 19.30</td><td>99.01 19.47</td><td>102.5 19.16</td></tr><tr><td>Ar</td><td colspan=\"2\">Avg.Tokens 57.17 Avg.Sum 18.81</td><td>55.85 18.71</td><td>56.63 18.80</td></tr><tr><td>Ru</td><td colspan=\"2\">Avg.Tokens 89.00 Avg.Sum 15.99</td><td>88.53 16.07</td><td>91.11 16.11</td></tr><tr><td>Es</td><td colspan=\"2\">Avg.Tokens 89.83 Avg.Sum 18.67</td><td>89.35 18.60</td><td>92.08 18.68</td></tr></table>",
"type_str": "table",
"text": ") 7 .",
"num": null
},
"TABREF6": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Zh\u2192Zh 40.11 16.93 33.48</td></tr><tr><td>Fr\u2192Fr</td><td>41.77 19.20 34.47</td></tr><tr><td colspan=\"2\">Ru\u2192Ru 37.95 15.74 31.76</td></tr><tr><td colspan=\"2\">Ar\u2192Ar 28.66 6.61 23.07</td></tr><tr><td/><td>Cross-lingual</td></tr><tr><td colspan=\"2\">Zh\u2192En 45.75 20.18 36.90</td></tr><tr><td colspan=\"2\">En\u2192Zh 42.62 17.43 34.88</td></tr></table>",
"type_str": "table",
"text": "En\u2192En 49.16 24.18 40.15 Es\u2192Es 43.95 20.01 35.87",
"num": null
},
"TABREF8": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Test set results on the different language pairs of MSAMSum dataset by fine-tuning mBART-50 under the MANY-TO-ONE setting.",
"num": null
},
"TABREF9": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "En\u2192En 49.84 24.73 40.67 En\u2192Es 47.27 21.82 37.87 En\u2192Zh 43.86 18.25 35.56 En\u2192Fr 44.33 19.58 35.20 En\u2192Ru 41.26 15.76 33.00 En\u2192Ar 39.71 14.96 32.82",
"num": null
},
"TABREF10": {
"html": null,
"content": "<table><tr><td>Src\u2192Tgt En Zh Fr Ar Ru Es</td><td>MANY-TO-MANY Zh Fr Ar 36.79 30.83 30.76 20.93 28.35 34.51 En Ru Es 18.46 35.56 30.65 25.93 30.03 33.01 22.90 31.77 36.25 26.25 29.94 34.01 14.64 20.69 20.72 23.47 19.74 22.94 22.57 32.02 30.08 25.27 33.28 32.58 27.74 32.09 31.97 25.75 30.11 37.21</td></tr></table>",
"type_str": "table",
"text": "Test set results on the different language pairs of MSAMSum dataset by fine-tuning mBART-50 under the ONE-TO-MANY setting.",
"num": null
},
"TABREF11": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Test set R-L results on the different language pairs of MSAMSum dataset by fine-tuning mBART-50 under the MANY-TO-MANY setting. Results in bold are achieved by supervised summarization. Results in italics are achieved by zero-shot summarization.",
"num": null
},
"TABREF12": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Mix\u2192En 44.68 17.78 35.17 Mix\u2192Es 43.51 18.08 34.75 Mix\u2192Zh 40.76 15.76 33.14 Mix\u2192Fr 41.50 17.04 32.76 Mix\u2192Ru 38.26 13.38 30.75 Mix\u2192Ar 36.06 12.09 29.60",
"num": null
},
"TABREF13": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Test set results on the different language pairs of MSAMSum dataset by fine-tuning mBART-50 under the MIX-TO-MANY setting.",
"num": null
},
"TABREF14": {
"html": null,
"content": "<table><tr><td>Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Ji-ajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics.</td></tr><tr><td>Junnan Zhu, Yu Zhou, Jiajun Zhang, and Chengqing Zong. 2020b. Attend, translate and summarize: An efficient method for neural cross-lingual summariza-tion. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, On-line. Association for Computational Linguistics.</td></tr></table>",
"type_str": "table",
"text": "of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online. Association for Computational Linguistics. Chenguang Zhu, Ruochen Xu, Michael Zeng, and Xuedong Huang. 2020a. A hierarchical network for abstractive meeting summarization with cross-domain pretraining. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online. Association for Computational Linguistics.",
"num": null
},
"TABREF15": {
"html": null,
"content": "<table><tr><td/><td>R-1</td><td>R-2</td><td>R-L</td></tr><tr><td/><td colspan=\"3\">Zh 84.47 60.80 86.69</td></tr><tr><td>Valid</td><td colspan=\"3\">Ru 75.57 46.81 78.56 Es 74.85 46.19 77.99 Ar 75.97 48.09 78.93</td></tr><tr><td/><td colspan=\"3\">Fr 75.24 46.74 78.40</td></tr><tr><td/><td colspan=\"3\">Zh 84.11 59.91 86.32</td></tr><tr><td>Test</td><td colspan=\"3\">Ru 75.74 47.18 78.67 Es 74.68 45.63 77.84 Ar 75.56 47.24 78.48</td></tr><tr><td/><td colspan=\"3\">Fr 75.15 46.39 78.33</td></tr></table>",
"type_str": "table",
"text": "Train Zh 84.57 60.87 86.77 Ru 75.97 47.70 78.91 Es 75.05 46.43 78.19 Ar 76.09 48.13 79.02 Fr 75.53 47.02 78.68",
"num": null
},
"TABREF16": {
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Train Valid Test</td></tr><tr><td/><td>Original</td><td/><td/></tr><tr><td colspan=\"3\">SAMSum 14732 818</td><td>819</td></tr><tr><td/><td colspan=\"2\">Before alignment</td><td/></tr><tr><td>Zh</td><td colspan=\"2\">11738 658</td><td>660</td></tr><tr><td>Ru</td><td>6089</td><td>329</td><td>354</td></tr><tr><td>Es</td><td>6697</td><td>369</td><td>370</td></tr><tr><td>Ar</td><td>6341</td><td>340</td><td>337</td></tr><tr><td>Fr</td><td>7523</td><td>426</td><td>417</td></tr><tr><td/><td colspan=\"2\">After alignment</td><td/></tr><tr><td>Final</td><td>5307</td><td>302</td><td>320</td></tr></table>",
"type_str": "table",
"text": "The average ROUGE scores between each original English data in the SAMSum(Gliwa et al., 2019) and corresponding round-trip translated English data for five languages.",
"num": null
},
"TABREF17": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "The size of datasets at different stages.",
"num": null
},
"TABREF18": {
"html": null,
"content": "<table><tr><td>Src\u2192Tgt En</td><td>48.00</td><td>En</td><td>Zh</td><td>MANY-TO-MANY Fr</td><td>Ar</td><td>Ru</td><td>Es</td></tr></table>",
"type_str": "table",
"text": "/22.29/36.79 37.51/13.82/30.83 38.81/14.56/30.76 24.48/8.16/20.93 34.50/11.49/28.35 42.86/17.38/34.51 Zh 24.24/8.37/18.46 43.75/19.14/35.56 39.80/13.96/30.65 32.28/10.10/25.93 37.82/12.87/30.03 41.97/16.08/33.01 Fr 29.71/08.69/22.90 39.53/13.73/31.77 45.26/21.60/36.25 31.92/10.34/26.25 37.11/12.17/29.94 42.59/16.59/34.01 Ar 18.75/3.74/14.64 25.27/6.36/20.69 26.46/6.30/20.72 29.15/7.76/23.47 24.48/5.04/19.74 29.24/6.89/22.94 Ru 30.88/9.99/22.57 39.80/14.46/32.02 38.29/13.84/30.08 30.72/9.49/25.27 41.50/15.95/33.28 41.53/15.18/32.58 Es 37.18/12.14/27.74 39.79/15.05/32.09 41.04/15.91/31.97 31.41/10.18/25.75 37.34/12.02/30.11 46.40/21.53/37.21",
"num": null
}
}
}
}