{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:34:43.461555Z"
},
"title": "Service registration chatbot: collecting and comparing dialogues from AMT workers and service's users",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Molteni",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Mittul",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Juho",
"middle": [],
"last": "Leinonen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Katri",
"middle": [],
"last": "Leino",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Aalto University",
"location": {
"country": "Finland"
}
},
"email": ""
},
{
"first": "Emanuele",
"middle": [
"Della"
],
"last": "Valle",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Politecnico di Milano",
"location": {
"country": "Italy"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Crowdsourcing is the go-to solution for data collection and annotation in the context of NLP tasks. Nevertheless, crowdsourced data is noisy by nature; the source is often unknown, and additional validation work is performed to guarantee the dataset's quality. In this article, we compare two crowdsourcing sources on a dialogue paraphrasing task revolving around a chatbot service. We observe that workers hired on crowdsourcing platforms produce lexically poorer and less diverse rewrites than service users engaged voluntarily. Notably, on dialogue clarity and optimality, the human-perceived quality of the two paraphrase sources does not differ significantly. Furthermore, for the chatbot service, the combined crowdsourced data is enough to train a transformer-based Natural Language Generation (NLG) system. To enable similar services, we also release tools for collecting data and training the dialogue-act-based, transformer-based NLG module 1 .",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Crowdsourcing is the go-to solution for data collection and annotation in the context of NLP tasks. Nevertheless, crowdsourced data is noisy by nature; the source is often unknown, and additional validation work is performed to guarantee the dataset's quality. In this article, we compare two crowdsourcing sources on a dialogue paraphrasing task revolving around a chatbot service. We observe that workers hired on crowdsourcing platforms produce lexically poorer and less diverse rewrites than service users engaged voluntarily. Notably, on dialogue clarity and optimality, the human-perceived quality of the two paraphrase sources does not differ significantly. Furthermore, for the chatbot service, the combined crowdsourced data is enough to train a transformer-based Natural Language Generation (NLG) system. To enable similar services, we also release tools for collecting data and training the dialogue-act-based, transformer-based NLG module 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Task-specific neural dialogue models demand high-quality annotated dialogue data. Unfortunately, gathering human-generated and annotated dialogues is a costly and time-consuming task. Easily accessible sources, like social-network feeds and online forums, are cursed by systematic problems such as extra-linguistic annotations, irregular turn-taking, and the lack of a standard format, leading to an intense pre-processing phase. Even so, models trained with this type of data might not work well in a more natural domain (Leino et al., 2020). In recent times, thanks to online platforms like Amazon Mechanical Turk (AMT) 2 , crowdsourcing has become the most popular solution to the problem of manually generating and annotating written dialogues.",
"cite_spans": [
{
"start": 519,
"end": 539,
"text": "(Leino et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, for a small business, minimizing such added costs while automating user-based workflows is essential. In this work, we consider leveraging voluntary submissions by business users for creating a chatbot.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For the chatbot service, we consider a new class of broadly diffused tasks that we name Service Registration Tasks (SRTs), which involve the domain-agnostic act of registering to an online service. As a use case, we work with SiirtoSoitto 3 to provide users with a chatbot for service registration. SiirtoSoitto is a free online service offered to the city of Helsinki that notifies users about scheduled roadworks and imminent car towings. We employ a dialogue templating method called Machines Talking to Machines (M2M) (Shah et al., 2018b,a). It simulates the interaction between a user and a system to automatically generate templates, which are then paraphrased by AMT and service users.",
"cite_spans": [
{
"start": 522,
"end": 544,
"text": "(Shah et al., 2018b,a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we make the following contributions. 1) We release the data collection tools to the public, including an integration with popular instant messaging platforms to engage with service's users (Section 4) . 2) We analyze and compare the data collected via AMT workers and service's users in an empirical and human evaluation (Section 5).",
"cite_spans": [
{
"start": 203,
"end": 214,
"text": "(Section 4)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show the usefulness of the collected data by training a transformer-based language generation module conditioned on dialogue acts (Section 6). We also release the module's code publicly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3)",
"sec_num": null
},
{
"text": "Here, we focus on a class of tasks named Service Registration Tasks that consists of registering to a general online service. This human-machine interaction is characterized by the collection and validation of information and preferences from the user. As a specific instance of this class of tasks, we picked the use case of SiirtoSoitto, an online service that warns and notifies vehicle owners in the city of Helsinki about road maintenance and imminent towings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Service Registration Task (SRT)",
"sec_num": "2"
},
{
"text": "For chatbot development, we employ the Machines Talking to Machines (M2M) framework (Shah et al., 2018b,a) to set up the annotated data collection. Conceived as domain-independent, M2M generates dialogues centered on completing a specific task.",
"cite_spans": [
{
"start": 83,
"end": 105,
"text": "(Shah et al., 2018b,a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machines Talking to Machines (M2M)",
"sec_num": "3"
},
{
"text": "M2M consists of four major steps. 1) The developer provides the task-specific knowledge used by the system, which can be seen as a collection of all the units of information exchanged during the dialogue. 2) Given a task specification, a simulated interaction between a user and the system generates sequences of dialogue acts exhaustively. The output sequences enclose the semantic content of the dialogue. The user is modeled as an agenda-based user simulator (Schatzmann et al., 2007) while the system is designed as a Mealy machine. This process, in which a simulated user interacts with the system, is also called self-play. A generated example is shown in the first row of Table 1. 3) Using the semantic parses, we can then build dialogue templates using a simple domain grammar. The templates are slightly unnatural computer-generated dialogue utterances paired with their semantic representation in the form of dialogue acts (second row of Table 1). 4) Finally, the dialogue templates enter a paraphrasing phase where crowdsourced workers provide natural and contextual rewrites of the machine-generated sentences (last row of Table 1).",
"cite_spans": [
{
"start": 459,
"end": 484,
"text": "(Schatzmann et al., 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 672,
"end": 679,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 942,
"end": 949,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1129,
"end": 1136,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Machines Talking to Machines (M2M)",
"sec_num": "3"
},
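{
"text": "To make the self-play of step 2) concrete, a minimal sketch follows (hypothetical Python; the state, slot, and act names are illustrative and not taken from the M2M release):\n\n# Agenda-based user: an ordered list of goals to communicate.\nagenda = [('inform', 'phone'), ('inform', 'plate'), ('affirm', 'terms')]\n\ndef system_act(state):\n    # Mealy machine: the emitted act depends on the current state.\n    if state < len(agenda):\n        return ('request', agenda[state][1])\n    return ('confirm', None)\n\nacts = []\nfor state in range(len(agenda) + 1):\n    acts.append(system_act(state))\n    if state < len(agenda):\n        acts.append(agenda[state])  # the user pops the next agenda item\n# acts is the alternating system/user dialogue-act sequence for one scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machines Talking to Machines (M2M)",
"sec_num": "3"
},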
{
"text": "Our SRT is characterized by exchanging information such as telephone numbers, license plates, areas of interest, and the acceptance of terms and conditions. These characteristics form the task specification used to initialize M2M's first step. A dialogue scenario is sampled by assigning a valid or invalid value to each entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying M2M to SRT",
"sec_num": "4"
},
{
"text": "Through self-play, we can generate sequences of dialogue acts until the goal of registering is reached or some invalid state is encountered (e.g., the user provides invalid values). Next, we build a simple rule-based domain grammar that converts the annotated sequences into templates, first turning them into syntactic skeletons with proper punctuation and conjunctions, and then substituting the entity values with custom terms to increase readability. In the next step, the same dialogues are used to set up a paraphrasing task on AMT and on the rule-based chatbot that makes SiirtoSoitto available to the public. Chatbot users are asked to participate voluntarily in an experimental task. They are presented with dialogue turns to rewrite sequentially on their preferred instant messaging application. A quick manual quality check removed roughly 25% of all AMT feedback due to a lack of compliance with the instructions. In contrast, only 10% of SiirtoSoitto users failed to understand their task and produced unusable data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying M2M to SRT",
"sec_num": "4"
},
{
"text": "In the above process, instead of annotating natural utterances, we are building dialogues upon annotations. The automatic generation of the outlines guarantees greater diversity and explores all the relevant paths conceived by the task designer. Finally, employing human writers ensures the naturalness of the utterances, and the variety is boosted by asking them to rephrase highly generic machine-generated sentences. This reverse processing guarantees the quality of the semantic annotations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying M2M to SRT",
"sec_num": "4"
},
{
"text": "In Table 4, we present the statistics of the data collected by employing AMT and service users (SiirtoSoitto). In each case, we ran the paraphrasing step over multiple sessions across five days. We presented the same dialogue set to both groups to improve the comparability of the generated paraphrases. Then, we performed a human evaluation to validate paraphrase quality and removed any spurious paraphrases. We were able to collect 98 and 83 dialogues via AMT and SiirtoSoitto users, respectively. With a larger number of dialogues and turns, AMT workers produced more data than SiirtoSoitto users.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Applying M2M to SRT",
"sec_num": "4"
},
{
"text": "In terms of effort, we set up the paraphrasing task on AMT and the chatbot service in similar amounts of time. For the chatbot service, we introduced some additional conversational interaction and integrated the M2M-generated templates into the service. For the AMT setup, we had to design and implement the paraphrasing task as a single HTML page in AMT and import batches of dialogue templates by hand. From a monetary standpoint, as we recruited users voluntarily, paraphrasing with chatbot users did not incur any costs. On AMT, we spent a total of $63, which includes the cost of each single task ($0.50) and the platform fees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying M2M to SRT",
"sec_num": "4"
},
{
"text": "In this section, we compare the data collected via the two different crowdsourcing sources. We compare them quantitatively based on lexical richness and language diversity. We also ask human evaluators to grade the dialogues qualitatively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "Lexically rich and diverse paraphrases can make the chatbot feel more real and natural. In effect, they help users have a more satisfying experience even in a simple task. Hence, having lexically rich and diverse data is desirable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},
{
"text": "Lexical richness is calculated as the ratio between unique n-grams and total tokens per collection source (Hout and Vermeer, 2007). Interestingly, even with a lower dialogue count, the SiirtoSoitto dataset presents higher lexical richness than the AMT dataset. This effect indicates a greater language variety associated with the expert users' rewrites. Moreover, the higher bigram and trigram lexical richness of the SiirtoSoitto dataset compared to the AMT dataset highlights a greater variety of constructs, with SiirtoSoitto users rewriting using more constructs than AMT workers.",
"cite_spans": [
{
"start": 106,
"end": 130,
"text": "(Hout and Vermeer, 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},
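{
"text": "For reference, n-gram lexical richness as defined above can be computed as follows (a minimal sketch; the function and variable names are ours, not from the released tools):\n\ndef lexical_richness(tokens, n=1):\n    # ratio of unique n-grams to total tokens in one collection source\n    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]\n    return len(set(grams)) / max(len(tokens), 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},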
{
"text": "Diversity is measured using two metrics: the Term Frequency-Inverse Document Frequency (TF-IDF) diversity metric (Tdiv) and the Jaccard distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},
{
"text": "Tdiv is the sum of TF-IDF scores over n-grams (n \u2264 3) in a document (D), as defined below. TF-IDF reflects the importance of an n-gram. n-grams with lower frequency in the collected data have higher IDFs. Thus, the Tdiv metric denotes the extent of diversity of an expression in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},
{
"text": "Tdiv(R) = \u2211_{n=1}^{N} \u2211_{n-gram \u2208 R} TF-IDF(n-gram) / V_n, where V_n = (1/|D|) \u2211_{R \u2208 D} \u2211_{n-gram \u2208 R} TF-IDF(n-gram)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},
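{
"text": "The Tdiv computation above can be sketched as follows (illustrative Python; tfidf is assumed to be a score lookup fitted on the whole collection, and norms[n] holds V_n, the dataset-average summed TF-IDF for each n):\n\ndef tdiv(rewrite, tfidf, norms, N=3):\n    # sum normalized TF-IDF scores over all n-grams (n <= N) of one rewrite\n    total = 0.0\n    for n in range(1, N + 1):\n        grams = [tuple(rewrite[i:i + n]) for i in range(len(rewrite) - n + 1)]\n        total += sum(tfidf(g) for g in grams) / norms[n]\n    return total",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},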
{
"text": "The Tdiv score of a single sentence has little meaning on its own, as it needs to be compared with the Tdiv scores of sentences that entail the same semantic content. Given two rewrites of the same turn, one from the AMT dataset and one from SiirtoSoitto, if the latter has a higher Tdiv score, it is considered to have more varied expressions than the former. For an overall comparison, we keep track of these wins for each dataset per turn. We observe that SiirtoSoitto wins almost two out of three times, thus having paraphrases with richer expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},
{
"text": "The Jaccard distance is a metric based on the Jaccard similarity coefficient that measures the dissimilarity between two finite sets of elements, in this case, the words that make up a sentence. This coefficient has been used as a proxy for the effort put in by the crowd workers to write paraphrases with different wordings from the proposed templates. In terms of average Jaccard distance, SiirtoSoitto users (0.490) outperform AMT workers (0.432). This effect is exemplified by the example shown in Table 3, where SiirtoSoitto users use more words than AMT workers.",
"cite_spans": [],
"ref_spans": [
{
"start": 502,
"end": 509,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},
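{
"text": "The word-set Jaccard distance between a template and its paraphrase can be computed as follows (a minimal sketch; whitespace tokenization is our simplification):\n\ndef jaccard_distance(a, b):\n    # distance = 1 - |intersection| / |union| over the two word sets\n    A, B = set(a.split()), set(b.split())\n    union = A | B\n    return 1.0 - len(A & B) / len(union) if union else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Richness and Diversity",
"sec_num": "5.1"
},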
{
"text": "Human evaluators assessed the perceived quality of the generated and paraphrased dialogues. Each dialogue was judged on four qualities: naturalness, clearness, grammaticality, and optimality. Naturalness indicates how well the sentences resemble typical human expressions. Clearness refers to the extent to which the meaning conveyed by the dialogue turns is easily understandable. Grammaticality reflects the absence of misspellings or badly formatted sentences. Finally, optimality refers to how directly the proposed rewrites go straight to the point. The scores were provided on a scale of one to five, with one representing the lowest quality and five the highest. Table 4 details the average scores across the twenty evaluators. The AMT- and SiirtoSoitto-based datasets were judged to be similar from a human standpoint, as their differences were not significant. Also, both datasets scored highly on the four dimensions, attesting to the quality of the collected data.",
"cite_spans": [],
"ref_spans": [
{
"start": 684,
"end": 691,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Qualitative evaluation",
"sec_num": "5.2"
},
{
"text": "We train a neural model for Natural Language Generation (NLG) to observe the effectiveness of the collected data. The neural model is a Transformer network (Vaswani et al., 2017) that converts the next dialogue acts into an output sentence. For the NLG use case, our Transformer architecture includes two separate encoders. The first encoder inputs a sequence of dialogue acts capturing the semantic meaning of the sentence that needs to be generated. The second encoder inputs the user's turn. As a single person writes each paraphrase of an entire dialogue, the person's style is reflected in both user and system turns. Intuitively, the second encoder employs the user's style to adapt the generated utterance to the user's persona. Our transformer implementation is trained with the Noam optimizer on a negative log-likelihood loss (Vaswani et al., 2017). The encoders and the decoder each consist of three identical replicated blocks, with 16 attention heads and a dropout rate of 0.1. Both the first encoder and the decoder have 1024 hidden nodes, while the second encoder uses 256 hidden nodes. We release our dialogue-act-based transformer implementation with this work 4 . Figure 1 showcases some of the sentences generated with the NLG module. It also includes an instance in which the same sequence of input dialogue acts results in different system output sentences given different user utterances.",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 836,
"end": 858,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 1170,
"end": 1178,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transformer-based language generator",
"sec_num": "6"
},
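{
"text": "As an illustration of the module's interface, the first encoder consumes a flattened dialogue-act sequence while the second consumes the user's previous turn. A possible serialization (our own sketch, not necessarily the released implementation's format):\n\ndef serialize_acts(acts):\n    # flatten (act, slot) pairs into encoder input tokens\n    return ' '.join(a if s is None else a + '(' + s + ')' for a, s in acts)\n\n# serialize_acts([('request', 'phone'), ('confirm', None)]) -> 'request(phone) confirm'\n# encoder 1 input: the serialized acts; encoder 2 input: the raw user turn text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer-based language generator",
"sec_num": "6"
},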
{
"text": "In our work, we applied M2M via two types of crowdsourcing methods. Earlier work (Kittur et al., 2008) has shown that AMT workers achieve significantly lower performance when the degree of experience and contextual knowledge is important. However, their performance improves with a more guided task structure. In our experiment, the service's users already had the background knowledge necessary for the task. Moreover, considering the generated dialogues' lexical richness and diversity, their paraphrases were ranked higher than those of AMT workers. However, at a qualitative level, both types of paraphrases ranked similarly.",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Kittur et al., 2008)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "7"
},
{
"text": "Prior work (Walker et al., 2018; Budzianowski et al., 2018) has raised concerns about the unnatural process of dialogue generation in the M2M approach. From our perspective, this issue affects scenarios where a simulated user cannot model the ambiguities of a real user, but for a simple SRT use case, we disregard it.",
"cite_spans": [
{
"start": 11,
"end": 32,
"text": "(Walker et al., 2018;",
"ref_id": "BIBREF14"
},
{
"start": 33,
"end": 59,
"text": "Budzianowski et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "7"
},
{
"text": "For creating the NLG module, we focus on the generation of surface expressions based on sequences of dialogue acts. Similarly, several prior works (Stent, 2001; Wen et al., 2015; Varshney et al., 2020; Chen et al., 2019; Nayak et al., 2017) have employed semantic structures to generate dialogue utterances. Stent (2001) leveraged custom dialogue acts to implement a rule-based utterance generator as part of a larger modular conversational system. Recently, LSTM-based machine translation models (Wen et al., 2015; Nayak et al., 2017) and Transformers (Varshney et al., 2020; Chen et al., 2019) have also been successfully explored in NLG tasks for open-domain and task-specific dialogue systems. For both open-domain and task-specific modules, large corpora of annotations are required for training. In contrast, our work considers a simple SRT where even small amounts of crowdsourced data can help build good models. Additionally, unlike most prior work, we release our NLG module code to the public.",
"cite_spans": [
{
"start": 149,
"end": 162,
"text": "(Stent, 2001;",
"ref_id": "BIBREF11"
},
{
"start": 163,
"end": 180,
"text": "Wen et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 181,
"end": 203,
"text": "Varshney et al., 2020;",
"ref_id": null
},
{
"start": 204,
"end": 222,
"text": "Chen et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 223,
"end": 242,
"text": "Nayak et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 310,
"end": 322,
"text": "Stent (2001)",
"ref_id": "BIBREF11"
},
{
"start": 498,
"end": 516,
"text": "(Wen et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 517,
"end": 536,
"text": "Nayak et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 554,
"end": 576,
"text": "Varshney et al., 2020;",
"ref_id": null
},
{
"start": 577,
"end": 595,
"text": "Chen et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "7"
},
{
"text": "Collecting annotated datasets for NLG is a challenging task for which crowdsourcing is the preferred solution to balance costs and time. In this work, we considered voluntarily engaging SiirtoSoitto's users to contribute to a paraphrasing task for building a chatbot. Our findings suggest that engaging SiirtoSoitto users produces empirically more diverse and lexically rich results than engaging AMT workers, whereas, from a qualitative standpoint, both datasets are similar for a simple service registration task. Running the data collection effort with both sets of users for a comparable time yielded similar amounts of data. More importantly, through this process, we were able to reduce our data collection costs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "Additionally, in simple use cases like the SRT, these data are enough to build a transformer-based NLG module conditioned on dialogue acts. To support other small businesses, we make our data collection pipeline and the code to train the transformer-based NLG module public.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "https://github.com/Molteh/M2M 2 https://www.mturk.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.siirtosoitto.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Molteh/M2M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Twenty Hexagons Oy, the company behind SiirtoSoitto service, which provided us the opportunity to work with their infrastructure and engage with their user base. We also thank anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "MultiWOZ -a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling",
"authors": [
{
"first": "Pawe\u0142",
"middle": [],
"last": "Budzianowski",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Bo-Hsiang",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "I\u00f1igo",
"middle": [],
"last": "Casanueva",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Osman Ramadan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5016--5026",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1547"
]
},
"num": null,
"urls": [],
"raw_text": "Pawe\u0142 Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I\u00f1igo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Ga\u0161i\u0107. 2018. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semantically conditioned dialog response generation via hierarchical disentangled self-attention",
"authors": [
{
"first": "Wenhu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Xifeng",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenhu Chen, Jianshu Chen, Pengda Qin, Xifeng Yan, and William Yang Wang. 2019. Semantically conditioned dialog response generation via hierarchical disentangled self-attention.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Comparing measures of lexical richness",
"authors": [
{
"first": "Roeland",
"middle": [],
"last": "Hout",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Vermeer",
"suffix": ""
}
],
"year": 2007,
"venue": "Modelling and assessing vocabulary knowledge",
"volume": "",
"issue": "",
"pages": "93--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roeland Hout and Anne Vermeer. 2007. Comparing measures of lexical richness. In: H. Daller, J. Milton and J. Treffers-Daller (eds.), Modelling and assessing vocabulary knowledge (93-116). Cambridge: Cambridge University Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Crowdsourcing user studies with mechanical turk",
"authors": [
{
"first": "Aniket",
"middle": [],
"last": "Kittur",
"suffix": ""
},
{
"first": "Ed",
"middle": [
"H"
],
"last": "Chi",
"suffix": ""
},
{
"first": "Bongwon",
"middle": [],
"last": "Suh",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08",
"volume": "",
"issue": "",
"pages": "453--456",
"other_ids": {
"DOI": [
"10.1145/1357054.1357127"
]
},
"num": null,
"urls": [],
"raw_text": "Aniket Kittur, Ed H. Chi, and Bongwon Suh. 2008. Crowdsourcing user studies with mechanical turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 453-456, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Finchat: Corpus and evaluation setup for finnish chat conversations on everyday topics",
"authors": [
{
"first": "Katri",
"middle": [],
"last": "Leino",
"suffix": ""
},
{
"first": "Juho",
"middle": [],
"last": "Leinonen",
"suffix": ""
},
{
"first": "Mittul",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.08315"
]
},
"num": null,
"urls": [],
"raw_text": "Katri Leino, Juho Leinonen, Mittul Singh, Sami Virpioja, and Mikko Kurimo. 2020. Finchat: Corpus and evaluation setup for Finnish chat conversations on everyday topics. arXiv preprint arXiv:2008.08315.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A transformer-based variational autoencoder for sentence generation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Liu and G. Liu. 2019. A transformer-based variational autoencoder for sentence generation. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-7.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Generating diverse and descriptive image captions using visual paraphrases",
"authors": [
{
"first": "L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE/CVF International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "4239--4248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Liu, J. Tang, X. Wan, and Z. Guo. 2019. Generating diverse and descriptive image captions using visual paraphrases. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 4239-4248.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "To plan or not to plan? discourse planning in slot-value informed sequence to sequence models for language generation",
"authors": [
{
"first": "Neha",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "3339--3343",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2017-1525"
]
},
"num": null,
"urls": [],
"raw_text": "Neha Nayak, Dilek Hakkani-Tur, Marilyn Walker, and Larry Heck. 2017. To plan or not to plan? Discourse planning in slot-value informed sequence to sequence models for language generation. pages 3339-3343.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Agenda-based user simulation for bootstrapping a POMDP dialogue system",
"authors": [
{
"first": "Jost",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Weilhammer",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers",
"volume": "",
"issue": "",
"pages": "149--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dia- logue system. In Human Language Technologies 2007: The Conference of the North American Chap- ter of the Association for Computational Linguis- tics; Companion Volume, Short Papers, pages 149- 152, Rochester, New York. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bootstrapping a neural conversational agent with dialogue self-play, crowdsourcing and on-line reinforcement learning",
"authors": [
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "3",
"issue": "",
"pages": "41--51",
"other_ids": {
"DOI": [
"10.18653/v1/N18-3006"
]
},
"num": null,
"urls": [],
"raw_text": "Pararth Shah, Dilek Hakkani-T\u00fcr, Bing Liu, and Gokhan T\u00fcr. 2018a. Bootstrapping a neural conver- sational agent with dialogue self-play, crowdsourc- ing and on-line reinforcement learning. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 3 (Industry Papers), pages 41-51, New Orleans - Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Building a conversational agent overnight with dialogue self-play",
"authors": [
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "G\u00f6khan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Neha",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Larry",
"middle": [
"P"
],
"last": "Heck",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pararth Shah, Dilek Hakkani-T\u00fcr, G\u00f6khan T\u00fcr, Ab- hinav Rastogi, Ankur Bapna, Neha Nayak, and Larry P. Heck. 2018b. Building a conversational agent overnight with dialogue self-play. CoRR, abs/1801.04871.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dialogue Systems as Conversational Partners: Applying conversation acts theory to natural language generation for task-oriented mixed-initiative spoken dialogue",
"authors": [
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amanda Stent. 2001. Dialogue Systems as Conver- sational Partners: Applying conversation acts the- ory to natural language generation for task-oriented mixed-initiative spoken dialogue. Ph.D. thesis.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Abhijith Athreya Mysore Gopinath, and Pushpak Bhattacharyya. 2020. Natural language generation using transformer network in an open-domain setting",
"authors": [
{
"first": "Deeksha",
"middle": [],
"last": "Varshney",
"suffix": ""
},
{
"first": "Asif",
"middle": [],
"last": "Ekbal",
"suffix": ""
},
{
"first": "Ganesh",
"middle": [],
"last": "Prasad Nagaraja",
"suffix": ""
},
{
"first": "Mrigank",
"middle": [],
"last": "Tiwari",
"suffix": ""
}
],
"year": null,
"venue": "Natural Language Processing and Information Systems",
"volume": "",
"issue": "",
"pages": "82--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deeksha Varshney, Asif Ekbal, Ganesh Prasad Na- garaja, Mrigank Tiwari, Abhijith Athreya Mysore Gopinath, and Pushpak Bhattacharyya. 2020. Nat- ural language generation using transformer network in an open-domain setting. In Natural Language Processing and Information Systems, pages 82-93, Cham. Springer International Publishing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Exploring conversational language generation for rich content about hotels",
"authors": [
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": ""
},
{
"first": "Albry",
"middle": [],
"last": "Smither",
"suffix": ""
},
{
"first": "Shereen",
"middle": [],
"last": "Oraby",
"suffix": ""
},
{
"first": "Vrindavan",
"middle": [],
"last": "Harrison",
"suffix": ""
},
{
"first": "Hadar",
"middle": [],
"last": "Shemtov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marilyn Walker, Albry Smither, Shereen Oraby, Vrin- davan Harrison, and Hadar Shemtov. 2018. Explor- ing conversational language generation for rich con- tent about hotels. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018), Miyazaki, Japan. Eu- ropean Languages Resources Association (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semantically conditioned LSTM-based natural language generation for spoken dialogue systems",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrk\u0161i\u0107",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1711--1721",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1199"
]
},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Ga\u0161i\u0107, Nikola Mrk\u0161i\u0107, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural lan- guage generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721, Lisbon, Portugal. Association for Com- putational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "A single-turn sample showcasing the M2M generation process.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF3": {
"text": "Summary of the quantitative evaluation.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "",
"num": null,
"content": "<table><tr><td>displays an example this effect</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF5": {
"text": "Example of rewrite collected from AMT and SiirtoSoitto chatbot service users.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
},
"TABREF7": {
"text": "Results of human evaluation on the collected dialogues. Numbers shows average scores of per dialogue grading. Standard deviation in brackets.",
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table"
}
}
}
}