{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:55:39.033257Z"
},
"title": "How to Tame Your Data: Data Augmentation for Dialog State Tracking",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Summerville",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "California State Polytechnic University",
"location": {
"settlement": "Pomona"
}
},
"email": "[email protected]"
},
{
"first": "Jordan",
"middle": [],
"last": "Hashemi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "California State Polytechnic University",
"location": {
"settlement": "Pomona"
}
},
"email": "[email protected]"
},
{
"first": "James",
"middle": [],
"last": "Ryan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "California State Polytechnic University",
"location": {
"settlement": "Pomona"
}
},
"email": "[email protected]"
},
{
"first": "William",
"middle": [],
"last": "Ferguson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "California State Polytechnic University",
"location": {
"settlement": "Pomona"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Dialog State Tracking (DST) is a problem space in which the effective vocabulary is practically limitless. For example, the domain of possible movie titles or restaurant names is bound only by the limits of language. As such, DST systems often encounter out-ofvocabulary words at inference time that were never encountered during training. To combat this issue, we present a targeted data augmentation process, by which a practitioner observes the types of errors made on held-out evaluation data, and then modifies the training data with additional corpora to increase the vocabulary size at training time. Using this with a RoBERTa-based Transformer architecture, we achieve state-of-the-art results in comparison to systems that only mask trouble slots with special tokens. Additionally, we present a datarepresentation scheme for seamlessly retargeting DST architectures to new domains.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Dialog State Tracking (DST) is a problem space in which the effective vocabulary is practically limitless. For example, the domain of possible movie titles or restaurant names is bound only by the limits of language. As such, DST systems often encounter out-ofvocabulary words at inference time that were never encountered during training. To combat this issue, we present a targeted data augmentation process, by which a practitioner observes the types of errors made on held-out evaluation data, and then modifies the training data with additional corpora to increase the vocabulary size at training time. Using this with a RoBERTa-based Transformer architecture, we achieve state-of-the-art results in comparison to systems that only mask trouble slots with special tokens. Additionally, we present a datarepresentation scheme for seamlessly retargeting DST architectures to new domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dialog State Tracking (DST) is a common problem for modern task-oriented dialog systems that need to be capable of tracking user requests. Commonly, there is an ontology that defines slots that must be filled according to a user's utterances -e.g., a restaurant slot that is filled in with a restaurant name given by the user. A key problem for DSTs is that the values that fill a slot at inference may have never been encountered at training time (consider that the set of all possible restaurant names is bound only by the limits of language).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we address the problems of training on a domain with effectively limitless possible vocabulary, and aim to create a DST system capable of scaling to unseen vocabulary at inference. We do this by first utilizing a language model (LM) based Transformer that is capable of handling any possible input and output in a textual manner, letting the same exact architecture scale to new intents, slots, and slot values, with no modifications needed. Additionally, we present a practical data augmentation procedure for analyzing and addressing issues in the development of a DST system, leading to state-of-the-art performance.",
"cite_spans": [
{
"start": 242,
"end": 246,
"text": "(LM)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Work in DST has taken a number of different approaches. The annual DST Challenge (DSTC) has undergone eight iterations (although from the sixth competition on, it has been the more broad Dialog System Technology Challenge) (Williams et al., 2013; Henderson et al., 2014a,b) . The M2M:Simulated Dialogue dataset for dialog state tracking has been addressed by a number of different approaches. Rastogi et al. (2017) used a bi-directional GRU (Chung et al., 2014) along with an oracle delexicalizer to generate a candidate list for slot filling. later used a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) without the oracle delexicalization to generate candidate lists for slot filling. use two bi-directional LSTMs -one at the utterance level, the other at the dialog level -to perform the dialog state tracking. However, this work is only tested on the simulated dataset Sim-GEN, meaning there is no comparison with the more challenging human crafted utterances contained in Sim-R and Sim-M.",
"cite_spans": [
{
"start": 223,
"end": 246,
"text": "(Williams et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 247,
"end": 273,
"text": "Henderson et al., 2014a,b)",
"ref_id": null
},
{
"start": 393,
"end": 414,
"text": "Rastogi et al. (2017)",
"ref_id": "BIBREF12"
},
{
"start": 441,
"end": 461,
"text": "(Chung et al., 2014)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The closest approach to the one detailed in this paper is that of Chao and Lane (2019) . They used a system based off of BERT (Devlin et al., 2019) , but removed the language-model head and instead used two specialized heads: one that does per-slot utterance level classification to determine whether a given slot is active in the utterance or is the special dontcare token, and another per-slot head that predicts whether a token represents the beginning or end of the span for that type of slot. Our Figure 1 : A depiction of the language model based Transformer architecture used in this work. For each token in the user utterance (light blue), the model predicts what slot it belongs to (green or purple), if any, else other (white). A token for each of the slots is concatenated to the end of the user utterance (orange) and the model predicts whether that slot is active in the utterance (pink), not active (white), or should be set to the special dontcare token (not in this example).",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "Chao and Lane (2019)",
"ref_id": "BIBREF1"
},
{
"start": 126,
"end": 147,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 502,
"end": 510,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "model differs in that we do not need to alter the architecture of the model with specialized heads, and instead fine-tune the existing language model head. In their experimentation, they adjusted the level of slot-specific dropout using targeted feature dropout, first used by Xu and Sarikaya (2014) , where slots are replaced with a special [UNK] token. Our approach also differs in that instead of simply dropping out slots, we use the more nuanced method of targeted data augmentation.",
"cite_spans": [
{
"start": 277,
"end": 299,
"text": "Xu and Sarikaya (2014)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, data augmentation has been widely used for improving the robustness of dialog systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Hou et al. (2018) used a LSTM-based sequence-to-sequence network to map from generic utterances (e.g., \"show me the <distance> <poitype>\") to a variety of different utterances (e.g., \"where is the <distance> <poitype>\" and \"can you find the <distance> <poitype> to me\"). This approach requires delexicalization and only alters grammatical structure, which is quite different from our approach which leaves grammatical structure alone, instead altering the non-delexicalized slot values. Quan and Xiong (2019) perform data augmentation via four different approaches: (1) replace words (excluding proper nouns, qualifiers, personal pronouns, and modal verbs) with their synonyms, (2) remove all stop words, (3) use existing neural machine-translation technology to translate from the source language to another and back again (similar to that of Hou et al. (2018) , except they do not train their own seq2seq network), and (4) use an existing paraphraser to paraphrase the utterance.",
"cite_spans": [
{
"start": 844,
"end": 861,
"text": "Hou et al. (2018)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our goal in this work is to to create a robust, readily extensible Dialog State Tracking system that requires minimal to no alteration of network architecture if the schema and/or domain of the dia-log task changes. For instance, imagine a system that is being developed for the restaurant domain under a schema in which a set of slots are specified: cuisine, price, location. Now imagine that later it becomes necessary to add a new slot: kidfriendliness. Instead of changing the architecture and retraining from scratch, we would prefer to be able to fine-tune the existing model with the new slot now present. Additionally, we incorporate targeted data augmentation to combat over-fitting when a domain has limited vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "To produce such a versatile DST system, we reformulate our data such that the problem is fully encoded textually, with no reliance on specialized output heads. Specifically, we carry out:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},
{
"text": "1. Utterance-level slot activation. Is the slot active in the current utterance? If it is, does the slot map to the special dontcare token? That is, for each slot we predict one of slot, none, or dontcare.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},
{
"text": "2. Token-level slot filling. For each token in the input, is it used in a slot or is it other?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},
{
"text": "To achieve (1), we modify the input utterance with an additional sequence. The additional sequence contains all of the slots present in the dialog schema. For instance, the sentence \"5 tickets to Transformers: Age of Extinction please.\" is concatenated with \"movie time theater date number\". Adding a new slot(s) is handled by simply concatenating to the list -e.g., if the above movie domain was extended to add restaurants \"cuisine restaurant location\" could be concatenated to the list of slots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},
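{
"text": "To make the input construction concrete, the following minimal Python sketch (our illustration, not code from the paper) shows how an utterance could be concatenated with the schema's slot names; the build_input helper and the exact separator placement are assumptions for illustration only.\n\ndef build_input(utterance, schema_slots):\n    # Append the schema's slot names after a separator so that adding a new\n    # slot only lengthens the sequence and never changes the architecture.\n    return utterance + ' <s> ' + ' '.join(schema_slots)\n\nmovie_slots = ['movie', 'time', 'theater', 'date', 'number']\nrestaurant_slots = ['cuisine', 'restaurant', 'location']\n\n# Original movie-domain input.\nprint(build_input('5 tickets to Transformers: Age of Extinction please.', movie_slots))\n# Extending the schema to a new domain is just list concatenation.\nprint(build_input('book a table for two', movie_slots + restaurant_slots))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},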
{
"text": "For (2), at the output level a slot is predicted for every token in the original utterance and a slot intent is predicted for every schema token that is concatenated to that utterance: Figure 1 for a more detailed illustration. Despite the two objectives, the loss is simply the Categorical Cross-Entropy loss over the entire (combined) sequence.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 194,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},
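{
"text": "As a hedged sketch of how the combined objective can reduce to a single categorical cross-entropy, assume one label per position drawn from a shared inventory (the token-level slot labels plus slot/none/dontcare for the appended schema tokens); the label set and tensor shapes below are illustrative assumptions, not the paper's exact implementation.\n\nimport torch\nimport torch.nn.functional as F\n\n# Assumed shared label inventory: token-level labels plus schema-token intents.\nLABELS = ['other', 'movie', 'time', 'theater', 'date', 'number', 'slot', 'none', 'dontcare']\n\ndef combined_loss(logits, labels):\n    # logits: (seq_len, num_labels) over the utterance plus appended schema tokens.\n    # labels: (seq_len,) gold label index for every position.\n    return F.cross_entropy(logits, labels)\n\n# Toy example: random logits for a 10-position combined sequence.\nlogits = torch.randn(10, len(LABELS))\nlabels = torch.randint(0, len(LABELS), (10,))\nprint(combined_loss(logits, labels).item())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},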
{
"text": "The model aims to track the joint goal at each turn in the dialog, represented as all the slot values accumulated to that point. Rather than estimating the entire joint goal each turn, we predict changes to it -additions of slots, modifications to slot values -and maintain the joint goal by applying these changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},
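{
"text": "A minimal sketch (our illustration) of maintaining the joint goal by applying predicted per-turn changes; the dictionary representation of the state is an assumption.\n\ndef update_joint_goal(joint_goal, turn_changes):\n    # joint_goal: accumulated slot -> value mapping up to the previous turn.\n    # turn_changes: slots newly filled or modified in the current turn.\n    updated = dict(joint_goal)\n    updated.update(turn_changes)\n    return updated\n\nstate = {}\nstate = update_joint_goal(state, {'movie': 'Age of Extinction', 'number': '5'})\nstate = update_joint_goal(state, {'number': '3'})  # the user changes their mind\nprint(state)  # {'movie': 'Age of Extinction', 'number': '3'}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Model Based Transformer",
"sec_num": "3.1"
},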
{
"text": "There are a number of common issues in the datasets for these dialog tasks, including:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
{
"text": "1. Small datasets. It is tedious and timeconsuming to annotate, gather, or handmodify believable dialogs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
{
"text": "2. Open classes. Given the open-ended nature of many of these tasks, training data cannot provide coverage of open classes (e.g., restaurant names or movie titles).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
{
"text": "To counteract these issues, researchers have proposed a number of different data augmentation schemes (see Section 2). At the outset of our study, we tried the 10% slot-specific dropout used by Chao and Lane (2019) , but our model still overfit to the training set. To combat this, we devised the following procedure:",
"cite_spans": [
{
"start": 194,
"end": 214,
"text": "Chao and Lane (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
{
"text": "1. Determine problem slots. Examine the incorrect predictions on the held-out evaluation set to determine whether there is a certain slot or intent that is not being predicted well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
{
"text": "2. Augment for problem slots. Find a corpus of values for that slot, and randomly insert a value from that corpus at training time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
{
"text": "In our work, we were using the Sim-R and Sim-M datasets , which are concerned with restaurant reservations and movie tickets respectively. We noticed that our system was nearly perfectly able to handle requests related to time, date, and number of people -slots whose values come from small structured sets -but was having difficulty with movie titles, restaurant names, and locations, even with the targeted 10% dropout.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
{
"text": "We found corpora for movie names (42,306 movie titles found on Wikipedia as of 2013 (Bamman et al., 2013)), restaurant names (1445 humorous restaurant names (Samuel et al., 2016) ), and locations (2067 US settlement names from 1880 to 2010 (Samuel et al., 2016 )) which we then used to randomly replace the respective slots at training time at a rate of 50%.",
"cite_spans": [
{
"start": 157,
"end": 178,
"text": "(Samuel et al., 2016)",
"ref_id": "BIBREF13"
},
{
"start": 240,
"end": 260,
"text": "(Samuel et al., 2016",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
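{
"text": "The following minimal sketch (our illustration, with assumed token/label structures and a made-up input sentence) shows slot-value replacement at training time: each annotated span of a problem slot is swapped for a random value from the matching corpus with 50% probability.\n\nimport random\n\ndef augment_example(tokens, labels, corpora, rate=0.5):\n    # tokens: word list; labels: one label per word ('other' for non-slot words).\n    # corpora: problem slot name -> list of replacement values (strings).\n    out_tokens, out_labels = [], []\n    i = 0\n    while i < len(tokens):\n        label = labels[i]\n        j = i\n        while j < len(tokens) and labels[j] == label:\n            j += 1  # extend to the end of this contiguous span\n        if label in corpora and random.random() < rate:\n            replacement = random.choice(corpora[label]).split()\n            out_tokens.extend(replacement)\n            out_labels.extend([label] * len(replacement))\n        else:\n            out_tokens.extend(tokens[i:j])\n            out_labels.extend(labels[i:j])\n        i = j\n    return out_tokens, out_labels\n\ncorpora = {'restaurant': ['A Brisket a Tasket', 'Et Tu New Brew']}\ntokens = 'book a table at Olive Garden'.split()\nlabels = ['other', 'other', 'other', 'other', 'restaurant', 'restaurant']\nprint(augment_example(tokens, labels, corpora))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},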
{
"text": "We note that our replacement has two major effects. (1) By randomly replacing with real values instead of simply masking, the model is capable of learning a wider variety of slot values and value structures, instead of simply relying on syntactic information surrounding the names. (2) By randomly replacing values, the dialog becomes more difficult to follow -akin to a user who is prone to changing their mind -and this forces the system to learn to track a user's (fickle) goals better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Augmentation",
"sec_num": "4"
},
{
"text": "As previously mentioned, we used the Sim-R and Sim-M datasets . This is because we found them to be of high quality (but with room for improvement), and there was a recent state-ofthe-art approach that used a similar Transformerbased architecture to compare against (Chao and Lane, 2019) . To assess the performance of the models, we use joint goal accuracy (Henderson et al., 2014a) , the standard metric for assessing DST systems. At each turn of dialog, the ground truth must be perfectly matched.",
"cite_spans": [
{
"start": 266,
"end": 287,
"text": "(Chao and Lane, 2019)",
"ref_id": "BIBREF1"
},
{
"start": 358,
"end": 383,
"text": "(Henderson et al., 2014a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
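{
"text": "A small sketch (our illustration) of joint goal accuracy: a turn counts as correct only when the predicted state matches the ground-truth state exactly; the list-of-dicts representation is an assumption.\n\ndef joint_goal_accuracy(predicted_states, gold_states):\n    # One accumulated slot -> value dict per dialog turn.\n    correct = sum(1 for p, g in zip(predicted_states, gold_states) if p == g)\n    return correct / len(gold_states)\n\ngold = [{'movie': 'Age of Extinction'}, {'movie': 'Age of Extinction', 'number': '5'}]\npred = [{'movie': 'Age of Extinction'}, {'movie': 'Age of Extinction', 'number': '3'}]\nprint(joint_goal_accuracy(pred, gold))  # 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},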
{
"text": "For this specific work, we fine-tuned the RoBERTa masked language model of Liu et al. (2019) ; specifically, we used the Huggingface Transformers library (Wolf et al., 2019) . All models were trained with the ADAM optimizer with an initial learning rate of 5e \u2212 5, epsilon of 1e \u2212 8, a linear learning rate schedule over 20 epochs, and an attention mask rate of 15%.",
"cite_spans": [
{
"start": 75,
"end": 92,
"text": "Liu et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 154,
"end": 173,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
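{
"text": "A hedged sketch of a corresponding training setup with the Huggingface Transformers library; the checkpoint name, the absence of warmup steps, and the data-loader placeholder are assumptions on our part, not details reported in the paper.\n\nimport torch\nfrom transformers import RobertaForMaskedLM, RobertaTokenizer, get_linear_schedule_with_warmup\n\ntokenizer = RobertaTokenizer.from_pretrained('roberta-base')  # checkpoint name assumed\nmodel = RobertaForMaskedLM.from_pretrained('roberta-base')\n\noptimizer = torch.optim.Adam(model.parameters(), lr=5e-5, eps=1e-8)\nsteps_per_epoch = 1000  # placeholder; len(train_dataloader) in practice\nscheduler = get_linear_schedule_with_warmup(\n    optimizer, num_warmup_steps=0, num_training_steps=20 * steps_per_epoch)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},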
{
"text": "We compare three approaches in the experiment. (1) RoBERTa-LM, the RoBERTa LM architecture with 10% slot-specific dropout; (2) RoBERTa-Separate, the RoBERTa LM architecture with 50% slot-specific replacement, with separate models trained on the Sim-M and Sim-R datasets; and (3) RoBERTa-Combined, the RoBERTa LM architecture with 50% slot-specific replacement, with a single model trained on the combined Sim-M and Sim-R datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "To assess our model, we compare against three previous systems. The first work by Rastogi et al. (2017) uses a bi-directional GRU along with an oracle delexicalizer to generate a candidate list for slot filling (DST+Oracle). The follow-on work of uses a bi-directional LSTM to build a set of candidates without delexicalization (DST+LU). Finally, the most recent approach, by Chao and Lane (2019) , builds off of the BERT Transformer architecture which achieved state-ofthe-art results (BERT-DST).",
"cite_spans": [
{
"start": 82,
"end": 103,
"text": "Rastogi et al. (2017)",
"ref_id": "BIBREF12"
},
{
"start": 376,
"end": 396,
"text": "Chao and Lane (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "5.1"
},
{
"text": "A summary of the results can be seen in Table 1 . We draw attention to the following results.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5.2"
},
{
"text": "(1) The language model based version of RoBERTa without data augmentation performs relatively poorly: it beats the non-Transformer based DST+LU at Sim-M but is worse at Sim-R, and is worse at both than BERT-DST. We did not perform a comprehensive hyperparameter search, so we are unable to discern if it is a critical failing of the model, or whether it was a result of our chosen hyperparameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5.2"
},
{
"text": "(2) The RoBERTa language model with data augmentation performed much better than the previous state-of-the-art -with 4.1% and 3.1% point gains respectively on Sim-M and Sim-R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5.2"
},
{
"text": "(3) Finally, we note that the language model that was trained jointly on both the movie and restaurant data is significantly better than the models trained separately. In part, we believe that this is because the datasets have a lot of overlap -e.g., requesting dates, times, etc. We also believe that due to the relatively small sizes of the datasets, the increase in the size helps combat overfitting in the model -the Sim-M is a smaller dataset than Sim-R (1364 turns vs. 3416) and commensurately, while there is a small gain in Sim-R performance, Sim-M performance is drastically improved (significant at p < 0.00001 with Fisher's exact test).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "5.2"
},
{
"text": "We note that while we have achieved state-of-theart performance on the Sim-M and Sim-R datasets, there is certainly a possibility that a better choice of augmenting corpora could help the generality of the final model. For instance, the corpus of restaurant names was focused mostly on humorous names, such as \"A Brisket a Tasket\" and \"Et Tu New Brew.\" It will take further experimentation to determine if these names are more of a help (the model must be capable of handling a variety of names) or a hindrance (these names are not representative of most restaurant names). Furthermore, we note the US-centric bias found in the training and evaluation datasets for the location names, and the corresponding bias in our chosen corpus. Similarly, it is an open question as to whether a wider -less US-focused -corpus of location names would help. Certainly, for a system deployed in the world, a wider corpus would likely be of use, but for the purpose of achieving state-of-the-art test accuracy, it is unknown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5.3"
},
{
"text": "In this paper, we make two contributions. First, we introduce a process for a) examining the source of errors in Dialog State Tracking on held-out evaluation data, and b) correspondingly augmenting the dataset with corpora to vastly increase the vocabulary at training time. Like earlier work that selectively masked slot values, this prevents the system from overfitting to specific values found in the training data. Furthermore, however, it forces the system to learn a wider range of values, rather than syntactic features only, vastly improving the performance. Second, we do this in the context of a language model based Transformer, that due to the language-based nature of its representation -slots are simply represented as tokens concatenated to user utterances -is capable of transferring seamlessly between and working jointly on different datasets without the need to change the underlying architecture. In the future, we would like to address other forms of targeted data augmentation, addressing grammatical differences in addition to vocabulary modifications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "The information provided in this document is derived from an effort sponsored by the Defense Advanced Research Projects Agency (DARPA) and Air Force Research Laboratory (AFRL), and awarded to Raytheon BBN Technologies under contract number FA865018-C-7885.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning latent personas of film characters",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Oconnor",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "352--361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman, Brendan OConnor, and Noah A Smith. 2013. Learning latent personas of film characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 352-361.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT-DST: Scalable end-to-end dialogue state tracking with bidirectional encoder representations from transformer",
"authors": [
{
"first": "Guan-Lin",
"middle": [],
"last": "Chao",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2019,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guan-Lin Chao and Ian Lane. 2019. BERT-DST: Scal- able end-to-end dialogue state tracking with bidirec- tional encoder representations from transformer. In INTERSPEECH.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Empirical evaluation of gated recurrent neural networks on sequence modeling",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS 2014 Workshop on Deep Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence mod- eling. In NIPS 2014 Workshop on Deep Learning, December 2014.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The second dialog state tracking challenge",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)",
"volume": "",
"issue": "",
"pages": "263--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014a. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meet- ing of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263-272.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The third dialog state tracking challenge",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "IEEE Spoken Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "324--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Blaise Thomson, and Jason D Williams. 2014b. The third dialog state tracking challenge. In 2014 IEEE Spoken Language Technol- ogy Workshop (SLT), pages 324-329. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Sequence-to-sequence data augmentation for dialogue language understanding",
"authors": [
{
"first": "Yutai",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1234--1245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. 2018. Sequence-to-sequence data augmentation for dialogue language understanding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1234-1245, Santa Fe, New Mex- ico, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.06512"
]
},
"num": null,
"urls": [],
"raw_text": "Bing Liu, Gokhan Tur, Dilek Hakkani-Tur, Pararth Shah, and Larry Heck. 2018. Dialogue learning with human teaching and feedback in end-to-end train- able task-oriented dialogue systems. arXiv preprint arXiv:1804.06512.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Effective data augmentation approaches to end-to-end task-oriented dialogue",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Quan",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Asian Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Quan and Deyi Xiong. 2019. Effective data aug- mentation approaches to end-to-end task-oriented di- alogue. In 2019 International Conference on Asian Language Processing (IALP 2019).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-task learning for joint language understanding and dialogue state tracking",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "376--384",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5045"
]
},
"num": null,
"urls": [],
"raw_text": "Abhinav Rastogi, Raghav Gupta, and Dilek Hakkani- Tur. 2018. Multi-task learning for joint language understanding and dialogue state tracking. In Pro- ceedings of the 19th Annual SIGdial Meeting on Dis- course and Dialogue, pages 376-384, Melbourne, Australia. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Scalable multi-domain dialogue state tracking",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)",
"volume": "",
"issue": "",
"pages": "561--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhinav Rastogi, Dilek Hakkani-T\u00fcr, and Larry Heck. 2017. Scalable multi-domain dialogue state track- ing. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 561- 568. IEEE.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bad news: An experiment in computationally assisted performance",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Ryan",
"suffix": ""
},
{
"first": "Adam",
"middle": [
"J"
],
"last": "Summerville",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Mateas",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Wardrip-Fruin",
"suffix": ""
}
],
"year": 2016,
"venue": "International Conference on Interactive Digital Storytelling",
"volume": "",
"issue": "",
"pages": "108--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Samuel, James Ryan, Adam J Summerville, Michael Mateas, and Noah Wardrip-Fruin. 2016. Bad news: An experiment in computationally as- sisted performance. In International Conference on Interactive Digital Storytelling, pages 108-120. Springer.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Building a conversational agent overnight with dialogue self-play",
"authors": [
{
"first": "Pararth",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Neha",
"middle": [],
"last": "Nayak",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.04871"
]
},
"num": null,
"urls": [],
"raw_text": "Pararth Shah, Dilek Hakkani-T\u00fcr, Gokhan T\u00fcr, Ab- hinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018. Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The dialog state tracking challenge",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Raux",
"suffix": ""
},
{
"first": "Deepak",
"middle": [],
"last": "Ramachandran",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Black",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the SIGDIAL 2013 Conference",
"volume": "",
"issue": "",
"pages": "404--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Williams, Antoine Raux, Deepak Ramachandran, and Alan Black. 2013. The dialog state tracking challenge. In Proceedings of the SIGDIAL 2013 Conference, pages 404-413.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Targeted feature dropout for robust slot filling in natural language understanding",
"authors": [
{
"first": "Puyang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ruhi",
"middle": [],
"last": "Sarikaya",
"suffix": ""
}
],
"year": 2014,
"venue": "Fifteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Puyang Xu and Ruhi Sarikaya. 2014. Targeted feature dropout for robust slot filling in natural language un- derstanding. In Fifteenth Annual Conference of the International Speech Communication Association.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "\"5[number] tickets to Transformers:[movie] Age[movie] of [movie] Extinction[movie] please. <s>movie[slot] time[none] theater[ none] date[none] number[slot]' See"
}
}
}
}