{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:13:22.411739Z"
},
"title": "Intent Detection with WikiHow",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": ""
},
{
"first": "Qing",
"middle": [],
"last": "Lyu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Modern task-oriented dialog systems need to reliably understand users' intents. Intent detection is even more challenging when moving to new domains or new languages, since there is little annotated data. To address this challenge, we present a suite of pretrained intent detection models which can predict a broad range of intended goals from many actions because they are trained on wikiHow, a comprehensive instructional website. Our models achieve state-of-the-art results on the Snips dataset, the Schema-Guided Dialogue dataset, and all 3 languages of the Facebook multilingual dialog datasets. Our models also demonstrate strong zero- and few-shot performance, reaching over 75% accuracy using only 100 training examples in all datasets.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Modern task-oriented dialog systems need to reliably understand users' intents. Intent detection is even more challenging when moving to new domains or new languages, since there is little annotated data. To address this challenge, we present a suite of pretrained intent detection models which can predict a broad range of intended goals from many actions because they are trained on wikiHow, a comprehensive instructional website. Our models achieve state-of-the-art results on the Snips dataset, the Schema-Guided Dialogue dataset, and all 3 languages of the Facebook multilingual dialog datasets. Our models also demonstrate strong zero- and few-shot performance, reaching over 75% accuracy using only 100 training examples in all datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Task-oriented dialog systems like Apple's Siri, Amazon Alexa, and Google Assistant have become pervasive in smartphones and smart speakers. To support a wide range of functions, dialog systems must be able to map a user's natural language instruction onto the desired skill or API. Performing this mapping is called intent detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intent detection is usually formulated as a sentence classification task. Given an utterance (e.g. \"wake me up at 8\"), a system needs to predict its intent (e.g. \"Set an Alarm\"). Most modern approaches use neural networks to jointly model intent detection and slot filling (Xu and Sarikaya, 2013; Liu and Lane, 2016; Goo et al., 2018). In response to a rapidly growing range of services, more attention has been given to zero-shot intent detection (Ferreira et al., 2015a,b; Yazdani and Henderson, 2015; Chen et al., 2016; Kumar et al., 2017; Gangadharaiah and Narayanaswamy, 2019). While most existing research on intent detection proposed novel model architectures, few have attempted data augmentation. One such work (Hu et al., 2009) showed that models can learn much knowledge that is important for intent detection from massive online resources such as Wikipedia. (1) The data and models are available at https://github.com/zharry29/wikihow-intent.",
"cite_spans": [
{
"start": 273,
"end": 296,
"text": "(Xu and Sarikaya, 2013;",
"ref_id": "BIBREF18"
},
{
"start": 297,
"end": 316,
"text": "Liu and Lane, 2016;",
"ref_id": "BIBREF11"
},
{
"start": 317,
"end": 334,
"text": "Goo et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 449,
"end": 475,
"text": "(Ferreira et al., 2015a,b;",
"ref_id": null
},
{
"start": 476,
"end": 504,
"text": "Yazdani and Henderson, 2015;",
"ref_id": "BIBREF19"
},
{
"start": 505,
"end": 523,
"text": "Chen et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 524,
"end": 543,
"text": "Kumar et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 544,
"end": 563,
"text": "Gangadharaiah and 1",
"ref_id": null
},
{
"start": 646,
"end": 666,
"text": "Narayanaswamy, 2019)",
"ref_id": "BIBREF6"
},
{
"start": 806,
"end": 823,
"text": "(Hu et al., 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a pretraining task based on wikiHow, a comprehensive instructional website with over 110,000 professionally edited articles. Their topics span from common sense such as \"How to Download Music\" to more niche tasks like \"How to Crochet a Teddy Bear.\" We observe that the header of each step in a wikiHow article describes an action and can be approximated as an utterance, while the title describes a goal and can be seen as an intent. For example, \"find good gas prices\" in the article \"How to Save Money on Gas\" is similar to the utterance \"where can I find cheap gas?\" with the intent \"Save Money on Gas.\" Hence, we introduce a dataset based on wikiHow, where a model predicts the goal of an action given some candidates. Although most of wikiHow's domains are far beyond the scope of any present dialog system, models pretrained on our dataset would be robust to emerging services and scenarios. Also, as wikiHow is available in 18 languages, our pretraining task can be readily extended to multilingual settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Using our pretraining task, we fine-tune transformer language models, achieving state-of-the-art results on the intent detection task of the Snips dataset (Coucke et al., 2018) , the Schema-Guided Dialog (SGD) dataset (Rastogi et al., 2019) , and all 3 languages (English, Spanish, and Thai) of the Facebook multilingual dialog datasets (Schuster et al., 2019) , with statistically significant improvements. As our accuracy is close to 100% on all these datasets, we further experiment with zero-or few-shot settings. Our models achieve over 70% accuracy with no in-domain training data on Snips and SGD, and over 75% with only 100 training examples on all datasets. This highlights our models' ability to quickly adapt to new utterances and intents in unseen domains.",
"cite_spans": [
{
"start": 155,
"end": 176,
"text": "(Coucke et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 218,
"end": 240,
"text": "(Rastogi et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 337,
"end": 360,
"text": "(Schuster et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We crawl the wikiHow website in English, Spanish, and Thai (the languages were chosen to match those in the Facebook multilingual dialog datasets). We define the goal of each article as its title stripped of the prefix \"How to\" (and its equivalent in other languages). We extract a set of steps for each article by taking the bolded header of each paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": "2.1"
},
{
"text": "A wikiHow article's goal can approximate an intent, and each step in it can approximate an associated utterance. We formulate the pretraining task in a 4-choose-1 multiple-choice format: given a step, the model infers the correct goal among 4 candidates. For example, given the step \"let check-in agents and flight attendants know if it's a special occasion\" and the candidate goals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WikiHow Pretraining Dataset",
"sec_num": "2.2"
},
{
"text": "A. Get Upgraded to Business Class B. Change a Flight Reservation C. Check Flight Reservations D. Use a Discount Airline Broker, the correct goal would be A. This is similar to intent detection, where a system is given a user utterance and then must select a supported intent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WikiHow Pretraining Dataset",
"sec_num": "2.2"
},
{
"text": "We create intent detection pretraining data using goal-step pairs from each wikiHow article. Each article contributes at least one positive goal-step pair. However, it is challenging to sample negative candidate goals for a given step, for two reasons. First, randomly sampled goals are indeed true negatives, but they tend to be so distant from the positive goal that the classification task becomes trivial and the model does not learn sufficiently. Second, goals sampled for their similarity to the positive goal might not be true negatives, since many goals in wikiHow overlap and share steps. To sample high-quality negative training instances, we start with the correct goal and search its article's \"related articles\" section for the article whose title has the least lexical overlap with the current goal. We repeat this recursively until we have enough candidates. Empirically, examples created this way are mostly clean, as in the example shown above. We select one positive goal-step pair from each article by picking its longest step. In total, our wikiHow pretraining datasets have 107,298 English examples, 64,803 Spanish examples, and 6,342 Thai examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "WikiHow Pretraining Dataset",
"sec_num": "2.2"
},
{
"text": "We fine-tune a suite of off-the-shelf language models pretrained on our wikiHow data, and evaluate them on 3 major intent detection benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We fine-tune a pretrained RoBERTa model (Liu et al., 2019) for the English datasets and a pretrained XLM-RoBERTa model (Conneau et al., 2019) for the multilingual datasets. We cast the instances of the intent detection datasets into a multiple-choice format, where the utterance is the input and the full set of intents is the set of candidates, consistent with our wikiHow pretraining task. For each model, we append a linear classification layer with cross-entropy loss to calculate a likelihood for each candidate, and output the candidate with the maximum likelihood.",
"cite_spans": [
{
"start": 40,
"end": 58,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 119,
"end": 141,
"text": "(Conneau et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.1"
},
{
"text": "For each intent detection dataset in any language, we consider the following settings: +in-domain (+ID): a model is only trained on the dataset's in-domain training data; +wikiHow +in-domain (+WH+ID): a model is first trained on our wikiHow data in the corresponding language, and then trained on the dataset's indomain training data; +wikiHow zero-shot (+WH 0-shot): a model is trained only on our wikiHow data in the corresponding language, and then applied directly to the dataset's evaluation data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.1"
},
{
"text": "For non-English languages, the corresponding wikiHow data might suffer from smaller sizes and lower quality. Hence, we additionally consider the following cross-lingual transfer settings for non-English datasets: +en wikiHow +in-domain (+enWH+ID), a model is trained on wikiHow data in English, before it is trained on the dataset's in-domain training data; +en wikiHow zero-shot (+enWH 0-shot), a model is trained on wikiHow data in English, before it is directly applied to the dataset's evaluation data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.1"
},
{
"text": "We consider the 3 following benchmarks: The Snips dataset (Coucke et al., 2018) is a single-turn English dataset and one of the most cited dialog benchmarks in recent years. Table 3 : The accuracy of intent detection on multilingual datasets using XLM-RoBERTa.",
"cite_spans": [
{
"start": 58,
"end": 78,
"text": "(Coucke et al., 2018",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 189,
"end": 196,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "Statistics of the datasets are shown in Table 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.2"
},
{
"text": "We compare our models with the previous state-ofthe-art results of each dataset:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
{
"text": "\u2022 Ren and Xue (2020) proposed a Siamese neural network with triplet loss, achieving state-of-the-art results on Snips and FB-en;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
{
"text": "\u2022 Zhang et al. (2019) used multi-task learning to jointly learn intent detection and slot filling, achieving state-of-the-art results on FB-es and FB-th; \u2022 Ma et al. (2019) augmented the data via backtranslation to and from Chinese, achieving state-ofthe-art results on SGD.",
"cite_spans": [
{
"start": 154,
"end": 172,
"text": "\u2022 Ma et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "3.3"
},
{
"text": "After experimenting with base and large models, we use RoBERTa-large for the English datasets and XLM-RoBERTa-base for the multilingual datasets for the best performance. All our models are implemented using the HuggingFace Transformers library 2 . We tune our model hyperparameters on the validation sets of the datasets we experiment with. However, in all cases, we use a unified setting which empirically performs well, using the Adam optimizer (Kingma and Ba, 2014) with an epsilon of 1e\u22128, a learning rate of 5e\u22126, a maximum sequence length of 80, and 3 epochs. We vary the batch size from 2 to 16 according to the number of candidates in the multiple-choice task, to avoid running out of memory. We save the model every 1,000 training steps, and choose the model with the highest validation performance to be evaluated on the test set. We run our experiments on an NVIDIA GeForce RTX 2080 Ti GPU, with half-precision floating point format (FP16) with O1 optimization. Each epoch takes up to 90 minutes in the most resource-intensive setting, i.e. running RoBERTa-large on around 100,000 training examples of our wikiHow pretraining dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling Details",
"sec_num": "3.4"
},
{
"text": "The performance of RoBERTa on the English datasets (Snips, SGD, and FB-en) is shown in Table 2. We repeat each experiment 20 times, report the mean accuracy, and calculate its p-value against the previous state-of-the-art result, using a one-sample, one-tailed t-test with a significance level of 0.05. Our models achieve state-of-the-art results using the available in-domain training data. Moreover, our wikiHow data enables our models to demonstrate strong performance in zero-shot settings with no in-domain training data, implying our models' strong potential to adapt to new domains.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 95,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.5"
},
{
"text": "The performance of XLM-RoBERTa on the multilingual datasets (FB-en, FB-es, and FB-th) is shown in Table 3. Our models achieve state-of-the-art results in all 3 languages. While our wikiHow data in Spanish and Thai does improve the models' performance, its effect is less salient than that of the English wikiHow data.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.5"
},
{
"text": "Our experiments above focus on settings where all available in-domain training data are used. However, modern task-oriented dialog systems must rapidly adapt to burgeoning services (e.g. Alexa Skills) in different languages, where little training data is available. To simulate low-resource settings, we repeat the experiments with an exponentially increasing number of training examples, up to 1,000. We consider models trained only on in-domain data (+ID), models first pretrained on our wikiHow data in the corresponding languages (+WH+ID), and, for FB-es and FB-th, models first pretrained on our English wikiHow data (+enWH+ID).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.5"
},
{
"text": "The learning curves for each dataset are shown in Figure 1. Though the vanilla transformer models (+ID) achieve close to state-of-the-art performance with access to the full training data (see Tables 2 and 3), they struggle in the low-resource settings. When given up to 100 in-domain training examples, their accuracies are below 50% on most datasets. In contrast, our models pretrained on our wikiHow data (+WH+ID) reach over 75% accuracy given only 100 training examples on all datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 194,
"end": 201,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3.5"
},
{
"text": "As our model performances exceed 99% on Snips and FB-en, the concern arises that these intent detection datasets are \"solved\". We address this by performing error analysis and providing future outlooks for intent detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "4"
},
{
"text": "Our model misclassifies 7 instances in the Snips test set. Among them, 6 utterances include proper nouns on which intent classification is contingent. For example, the utterance \"please open Zvooq\" assumes the knowledge that Zvooq is a streaming service, and its labelled intent is \"Play Music.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.1"
},
{
"text": "Our model misclassifies 43 instances in the FB-en test set. Among them, 10 have incorrect labels: e.g. the labelled intent of \"have alarm go off at 5 pm\" is \"Show Alarms,\" while our model's prediction \"Set Alarm\" is in fact correct. 28 are ambiguous: e.g. the labelled intent of \"repeat alarm every weekday\" is \"Set Alarm,\" whereas that of \"add an alarm for 2:45 on every Monday\" is \"Modify Alarm.\" We find only 1 example to be an interesting edge case: the gold intent of \"remind me if there will be a rain forecast tomorrow\" is \"Find Weather,\" while our model incorrectly chooses \"Set Reminder.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.1"
},
{
"text": "By performing manual error analyses on our model predictions, we observe that most misclassified examples involve ambiguous wordings, wrong labels, or obscure proper nouns. Our observations imply that Snips and FB-en might be too easy to effectively evaluate future models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "4.1"
},
{
"text": "State-of-the-art models now achieve greater than 99% accuracy on standard benchmarks for intent detection. However, intent detection is far from solved. The standard benchmarks have only a dozen intents, but future dialog systems will need to support many more functions, with intents from a wide range of domains. To demonstrate that our pretrained models can adapt to unseen, open-domain intents, we hold out 5,000 steps (as utterances) with their corresponding goals (as intents) from our wikiHow dataset as a proxy for an intent detection dataset with more than 100,000 possible intents (all goals in wikiHow).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open-Domain Intent Detection",
"sec_num": "4.2"
},
{
"text": "For each step, we sample the 100 goals with the highest embedding similarity to the correct goal, as most other goals are irrelevant. We then rank them by the likelihood that the step helps achieve them. Our RoBERTa model achieves a mean reciprocal rank of 0.462 and 36% accuracy at ranking the correct goal first. As a qualitative example, given the step \"find the order that you want to cancel,\" the top 3 ranked goals are \"Cancel an Order on eBay\", \"Cancel an Online Order\", and \"Cancel an Order on Amazon.\" This hints that our pretrained models can work with a much wider range of intents than those in current benchmarks, and suggests that future intent detection research should focus on open domains, especially those with little data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Open-Domain Intent Detection",
"sec_num": "4.2"
},
{
"text": "By pretraining language models on wikiHow, we attain state-of-the-art results on 5 major intent detection datasets spanning 3 languages. The wide-ranging domains and languages of our pretraining resource enable our models to excel with few labelled examples in multilingual settings, and suggest that open-domain intent detection is now feasible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is based upon work supported in part by the DARPA KAIROS Program (contract FA8750-19-2-1004), the DARPA LwLL Program (contract FA8750-19-2-0201), and the IARPA BETTER Program (contract 2019-19051600004). Approved for Public Release, Distribution Unlimited. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, IARPA, or the U.S. Government. We thank the anonymous reviewers for their valuable feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Zero-shot learning of intent embeddings for expansion by convolutional deep structured semantic models",
"authors": [
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6045--6049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yun-Nung Chen, Dilek Hakkani-T\u00fcr, and Xiaodong He. 2016. Zero-shot learning of intent embeddings for expansion by convolutional deep structured se- mantic models. In 2016 IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP), pages 6045-6049. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Coucke",
"suffix": ""
},
{
"first": "Alaa",
"middle": [],
"last": "Saade",
"suffix": ""
},
{
"first": "Adrien",
"middle": [],
"last": "Ball",
"suffix": ""
},
{
"first": "Th\u00e9odore",
"middle": [],
"last": "Bluche",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Caulier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Leroy",
"suffix": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Doumouro",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "Gisselbrecht",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Caltagirone",
"suffix": ""
},
{
"first": "Thibaut",
"middle": [],
"last": "Lavril",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.10190"
]
},
"num": null,
"urls": [],
"raw_text": "Alice Coucke, Alaa Saade, Adrien Ball, Th\u00e9odore Bluche, Alexandre Caulier, David Leroy, Cl\u00e9ment Doumouro, Thibault Gisselbrecht, Francesco Calta- girone, Thibaut Lavril, et al. 2018. Snips voice plat- form: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Zero-shot semantic parser for spoken language understanding",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Bassam",
"middle": [],
"last": "Jabaian",
"suffix": ""
},
{
"first": "Fabrice",
"middle": [],
"last": "Lef\u00e8vre",
"suffix": ""
}
],
"year": 2015,
"venue": "Sixteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emmanuel Ferreira, Bassam Jabaian, and Fabrice Lef\u00e8vre. 2015a. Zero-shot semantic parser for spo- ken language understanding. In Sixteenth Annual Conference of the International Speech Communica- tion Association.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Online adaptative zero-shot learning spoken language understanding using wordembedding",
"authors": [
{
"first": "Emmanuel",
"middle": [],
"last": "Ferreira",
"suffix": ""
},
{
"first": "Bassam",
"middle": [],
"last": "Jabaian",
"suffix": ""
},
{
"first": "Fabrice",
"middle": [],
"last": "Lef\u00e8vre",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2015.7178987"
]
},
"num": null,
"urls": [],
"raw_text": "Emmanuel Ferreira, Bassam Jabaian, and Fabrice Lef\u00e8vre. 2015b. Online adaptative zero-shot learn- ing spoken language understanding using word- embedding.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Joint multiple intent detection and slot labeling for goal-oriented dialog",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Gangadharaiah",
"suffix": ""
},
{
"first": "Balakrishnan",
"middle": [],
"last": "Narayanaswamy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "564--569",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1055"
]
},
"num": null,
"urls": [],
"raw_text": "Rashmi Gangadharaiah and Balakrishnan Narayanaswamy. 2019. Joint multiple intent detection and slot labeling for goal-oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 564-569, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Slot-gated modeling for joint slot filling and intent prediction",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Chih-Wen Goo",
"suffix": ""
},
{
"first": "Yun-Kai",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chih-Li",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Tsung-Chieh",
"middle": [],
"last": "Huo",
"suffix": ""
},
{
"first": "Keng-Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "753--757",
"other_ids": {
"DOI": [
"10.18653/v1/N18-2118"
]
},
"num": null,
"urls": [],
"raw_text": "Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun- Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 753-757, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Understanding user's query intent with wikipedia",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Lochovsky",
"suffix": ""
},
{
"first": "Jian-Tao",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th International Conference on World Wide Web, WWW '09",
"volume": "",
"issue": "",
"pages": "471--480",
"other_ids": {
"DOI": [
"10.1145/1526709.1526773"
]
},
"num": null,
"urls": [],
"raw_text": "Jian Hu, Gang Wang, Fred Lochovsky, Jian-tao Sun, and Zheng Chen. 2009. Understanding user's query intent with wikipedia. In Proceedings of the 18th In- ternational Conference on World Wide Web, WWW '09, page 471-480, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Zeroshot learning across heterogeneous overlapping domains",
"authors": [
{
"first": "Anjishnu",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Pavankumar",
"middle": [],
"last": "Reddy Muddireddy",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Hoffmeister",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anjishnu Kumar, Pavankumar Reddy Muddireddy, Markus Dreyer, and Bj\u00f6rn Hoffmeister. 2017. Zero- shot learning across heterogeneous overlapping do- mains.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Attention-based recurrent neural network models for joint intent detection and slot filling",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Ian Lane. 2016. Attention-based recurrent neural network models for joint intent detection and slot filling.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An end-to-end dialogue state tracking system with machine reading comprehension and wide & deep classification",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zengfeng",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Dawei",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yiying",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaoyuan",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Kaijie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jianping",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue Ma, Zengfeng Zeng, Dawei Zhu, Xuan Li, Yiying Yang, Xiaoyuan Yao, Kaijie Zhou, and Jianping Shen. 2019. An end-to-end dialogue state tracking system with machine reading comprehension and wide & deep classification.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Xiaoxue",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Sunkara",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Khaitan",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.05855"
]
},
"num": null,
"urls": [],
"raw_text": "Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2019. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. arXiv preprint arXiv:1909.05855.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Schema-guided dialogue state tracking task at DSTC8",
"authors": [
{
"first": "Abhinav",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Xiaoxue",
"middle": [],
"last": "Zang",
"suffix": ""
},
{
"first": "Srinivas",
"middle": [],
"last": "Sunkara",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Khaitan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Schema-guided dialogue state tracking task at DSTC8.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Intention detection based on Siamese neural network with triplet loss",
"authors": [
{
"first": "F",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "8",
"issue": "",
"pages": "82242--82254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Ren and S. Xue. 2020. Intention detection based on Siamese neural network with triplet loss. IEEE Access, 8:82242-82254.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Cross-lingual transfer learning for multilingual task oriented dialog",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Rushin",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3795--3805",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1380"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3795-3805, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Convolutional neural network based triangular CRF for joint intent detection and slot filling",
"authors": [
{
"first": "P",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sarikaya",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Xu and R. Sarikaya. 2013. Convolutional neural network based triangular CRF for joint intent detection and slot filling. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 78-83.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A model of zero-shot learning of spoken language understanding",
"authors": [
{
"first": "Majid",
"middle": [],
"last": "Yazdani",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "244--249",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1027"
]
},
"num": null,
"urls": [],
"raw_text": "Majid Yazdani and James Henderson. 2015. A model of zero-shot learning of spoken language understanding. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 244-249, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Joint slot filling and intent detection via capsule neural networks",
"authors": [
{
"first": "Chenwei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yaliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5259--5267",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1519"
]
},
"num": null,
"urls": [],
"raw_text": "Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip Yu. 2019. Joint slot filling and intent detection via capsule neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5259-5267, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A joint learning framework with BERT for spoken language understanding",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "168849--168858",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Zhang, Z. Zhang, H. Chen, and Z. Zhang. 2019. A joint learning framework with BERT for spoken language understanding. IEEE Access, 7:168849-168858.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "0",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Learning curves of models in low-resource settings. The vertical axis is the accuracy of intent detection, while the horizontal axis is the number of in-domain training examples of each task, shown on a log scale.",
"type_str": "figure",
"uris": null
}
}
}
}