{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:28:41.711521Z"
},
"title": "OodGAN: Generative Adversarial Network for Out-of-Domain Data Generation",
"authors": [
{
"first": "Petr",
"middle": [],
"last": "Marek",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Vishal",
"middle": [
"Ishwar"
],
"last": "Naik",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Vincent",
"middle": [],
"last": "Auvray",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Anuj",
"middle": [],
"last": "Goyal",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Detecting an Out-of-Domain (OOD) utterance is crucial for a robust dialog system. Most dialog systems are trained on a pool of annotated OOD data to achieve this goal. However, collecting the annotated OOD data for a given domain is an expensive process. To mitigate this issue, previous works have proposed generative adversarial networks (GAN) based models to generate OOD data for a given domain automatically. However, these proposed models do not work directly with the text. They work with the text's latent space instead, enforcing these models to include components responsible for encoding text into latent space and decoding it back, such as auto-encoder. These components increase the model complexity, making it difficult to train. We propose OodGAN, a sequential generative adversarial network (SeqGAN) based model for OOD data generation. Our proposed model works directly on the text and hence eliminates the need to include an auto-encoder. OOD data generated using OodGAN model outperforms state-of-the-art in OOD detection metrics for ROSTD (67% relative improvement in FPR 0.95) and OSQ datasets (28% relative improvement in FPR 0.95) (Zheng et al., 2020).",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Detecting an Out-of-Domain (OOD) utterance is crucial for a robust dialog system. Most dialog systems are trained on a pool of annotated OOD data to achieve this goal. However, collecting the annotated OOD data for a given domain is an expensive process. To mitigate this issue, previous works have proposed generative adversarial networks (GAN) based models to generate OOD data for a given domain automatically. However, these proposed models do not work directly with the text. They work with the text's latent space instead, enforcing these models to include components responsible for encoding text into latent space and decoding it back, such as auto-encoder. These components increase the model complexity, making it difficult to train. We propose OodGAN, a sequential generative adversarial network (SeqGAN) based model for OOD data generation. Our proposed model works directly on the text and hence eliminates the need to include an auto-encoder. OOD data generated using OodGAN model outperforms state-of-the-art in OOD detection metrics for ROSTD (67% relative improvement in FPR 0.95) and OSQ datasets (28% relative improvement in FPR 0.95) (Zheng et al., 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "OOD detection is an essential task in AI voice assistants like Alexa, Siri, or Google Assistant. The task is to recognize whether a given user utterance belongs to the in-domain (IND) distribution or not. Users usually do not know the limitations of a voice application and assign requests which the system can not act upon. These requests are referred to as OOD since these do not belong to the application's domain. Voice assistants should be able to handle OOD utterances robustly by not taking unintended action or giving wrong or nonsensical responses leading to a poor user experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Intent classification (IC) is one of the main tasks in a conversational system that selects the best intent given a user input. IC can be extended to support OOD detection in two different ways. The first one is to add OOD as another intent to the IC model, but this requires annotated OOD data for training. The second method is to use a threshold on the classifier's output probability distribution during the runtime. This method does not require OOD data for training necessarily. Nevertheless, it proves difficult to select the threshold in practice without it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
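{
"text": "As a concrete illustration of the threshold-based method above, the following minimal sketch (ours, not from the paper; the model and the threshold value are placeholder assumptions) flags an utterance as OOD when the classifier's maximum intent probability falls below a threshold:\n\nimport torch\nimport torch.nn.functional as F\n\ndef detect_ood(logits: torch.Tensor, threshold: float = 0.5):\n    # logits: (batch, num_intents) raw scores from the intent classifier\n    probs = F.softmax(logits, dim=-1)\n    max_prob, intent = probs.max(dim=-1)\n    # utterances whose top intent probability falls below the threshold are treated as OOD\n    is_ood = max_prob < threshold\n    return is_ood, intent",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},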
{
"text": "The state-of-the-art IC algorithms are trained using neural networks to produce probability distribution over output classes and use cross-entropy loss. However, Lakshminarayanan et al. (2017) , and Guo et al. (2017) pointed out that the neural network classifier tends to be overconfident in its classification. This means that the classifier tends to assign a high probability for one class, even when the example was not seen in the training phase. Thus, such a classifier cannot correctly recognize if an example belongs to an IND or OOD distribution during runtime with any reasonable threshold value. In this paper, we focus on improving the performance of the threshold-based OOD detection method with the help of generated OOD data. Zheng et al. (2020) proposed to use negative entropy as an additional loss for the classification task in a neural network. The negative entropy loss trains the network to flatten the produced probability distribution as opposed to cross-entropy, which teaches the network to maximize the correct class probability. Thus, the idea is to apply cross-entropy loss on IND data and negative entropy loss on OOD data. The result is that IND data receives a high probability for the correct class, and OOD data receives low probabilities for all classes. Thanks to this fact, we can select a reasonable threshold on the output probability that will classify both IND and OOD data correctly. We need OOD data to train models in this way. However, the collection of OOD data is a manual and expensive process.",
"cite_spans": [
{
"start": 162,
"end": 192,
"text": "Lakshminarayanan et al. (2017)",
"ref_id": "BIBREF8"
},
{
"start": 199,
"end": 216,
"text": "Guo et al. (2017)",
"ref_id": "BIBREF2"
},
{
"start": 741,
"end": 760,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The IND data forms a small distribution cluster in the space of vector text representation. In principle, the rest of that space is covered by OOD data. Also, in real-world scenarios, most OOD data share patterns with IND data. Nevertheless, Zheng et al. (2020) demonstrated that training IC model with OOD data that are just outside IND distribution should be sufficient to handle most of the OOD requests during runtime.",
"cite_spans": [
{
"start": 242,
"end": 261,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a novel OOD data generation model OodGAN, which is an extension of SeqGAN . We use GAN to generate OOD data that share the same patterns as IND and are very close to IND distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposed model aims to be deployed to Natural Language Understanding (NLU) frameworks offered by popular voice assistants like Amazon Alexa and Google Assistant. These NLU frameworks are offered to third-party developers to create voice applications. Third-party developers can define any number of IND intents and provide sample utterances for each to build voice applications. These voice applications should recognize OOD requests during run time without additional developer effort to provide OOD training data. The proposed model can be deployed in a NLU framework to generate application-specific OOD data that the IC model can use during training to recognize OOD requests robustly and improve the end-user experience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) We propose a novel and simple OOD data generation model OodGAN that improves on the model proposed by Zheng et al. (2020) . It works with a sequence of words directly unlike the previously proposed models, which work on latent space represented by auto-encoder. Our model eliminates the need for the auto-encoder, which reduces the overall size of the model.",
"cite_spans": [
{
"start": 106,
"end": 125,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) We evaluate our model on the ROSTD and OSQ datasets, and we show that OOD examples generated by OodGAN achieved state-of-the-art results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are three research areas relevant to our work: OOD detection, text generation and OOD generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Larson et al. 2019introduced a dataset for intent classification that includes OOD queries. They propose three baseline approaches for OOD detection that rely on OOD training data. Gangal et al. (2019) created a ROSTD dataset and explored likelihood ratio based approaches. Lee and Shalyminov (2019) proposed an OOD detection method that does not require OOD data by utilizing counterfeit OOD turns in the context of a dialog. Ryu et al. (2018) proposed an OOD detection system that uses only IND sentences to build a generative adversarial network in which the discriminator generates low scores for OOD sentences.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "Gangal et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 274,
"end": 299,
"text": "Lee and Shalyminov (2019)",
"ref_id": "BIBREF10"
},
{
"start": 427,
"end": 444,
"text": "Ryu et al. (2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain Detection",
"sec_num": null
},
{
"text": "Donahue and Rumshisky (2018) proposed a twostep solution to text generation using auto-encoder and GAN that works with a low-dimensional representation of sentences. proposed a sequence generation framework SeqGAN that works directly on the text and hence eliminates the need for an auto-encoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Generation",
"sec_num": null
},
{
"text": "Zheng et al. 2020proposed a GAN based model to generate pseudo-OOD examples that are akin to IND input utterances. The model uses a denoising auto-encoder that is trained to map an input example into a latent code. The functions of the auto-encoder's parts are the following. The encoder learns to create a latent representation of the examples. The decoder learns to convert the vector of the latent representation into text. The model's generator produces vectors in the latent space. The discriminator evaluates the closeness of latent space vectors generated by the generator to real latent space vectors created by the encoder. Discriminator sends a training signal to the generator to force it to generate indistinguishable vectors from vectors encoded by the encoder. An auxiliary classifier trained on IND examples is introduced to force the generator to generate latent code belonging to OOD. The resulting utterances share patterns with IND examples but belong to OOD. . Left: Discriminator D is trained over the real data and the data generated by generator G. Right: Generator is trained by policy gradient where the final reward signal is provided by the discriminator and is passed back to the intermediate action value via Monte Carlo search. 3 Generative Adversarial Networks for Out-of-Domain Data Generation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain Data Generation",
"sec_num": null
},
{
"text": "Reward R T Reward R C Pretraining IND Examples X Generated OOD Sequences Y Generator G \u03b8 Auxiliary Intent Classifier C \u03c8 Discriminator D \u03c6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Out-of-Domain Data Generation",
"sec_num": null
},
{
"text": "The SeqGAN model proposed by Yu et al. 2017is a starting point for the proposed OodGAN. Se-qGAN is a sequence generation framework illustrated in Figure 1 . denote the problem of sequence generation as follows. Given a dataset of real-world structured sequences, train a \u03b8-parameterized generative model G \u03b8 to produce a sequence Y 1:",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "SeqGAN",
"sec_num": "3.1"
},
{
"text": "T = (y 1 , ..., y t , ...y T ), y t \u2208 Y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SeqGAN",
"sec_num": "3.1"
},
{
"text": "where Y is the vocabulary of candidate tokens. They apply reinforcement learning to this problem. In timestep t, the state s is the current produced tokens (y 1 , ..., y t\u22121 ) and the action a is the next token y t to select. They propose to additionally train a \u03c6parameterized discriminative model D \u03c6 that provides guidance for improving generator G \u03b8 . D \u03c6 produces a probability D \u03c6 (Y 1:T ) representing the probability of Y 1:T being a real sequence vs. a generated one. The discriminative model D \u03c6 is trained with real sequence data, labeled as positive examples, and synthetic sequences from the generative model G \u03b8 , labeled as negative examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SeqGAN",
"sec_num": "3.1"
},
{
"text": "SeqGAN uses the REINFORCE algorithm (Williams, 1992) to train generative model G \u03b8 . Parameters of generative model G \u03b8 are updated at the same time by a policy gradient and Monte Carlo search based on the expected end reward received from the discriminative model D \u03c6 for the generated sequence. The reward is represented by a likelihood that the generated sequence will fool the discriminative model D \u03c6 . Thus the generator's goal is to generate a sequence that would fool the discriminator into considering it as real.",
"cite_spans": [
{
"start": 36,
"end": 52,
"text": "(Williams, 1992)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SeqGAN",
"sec_num": "3.1"
},
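{
"text": "A minimal sketch of the REINFORCE-style policy-gradient update described above (our simplification, not the authors' code: a single sequence-level reward from the discriminator is spread over all timesteps instead of per-step Monte Carlo rollouts):\n\nimport torch\n\ndef policy_gradient_loss(log_probs: torch.Tensor, reward: torch.Tensor):\n    # log_probs: (batch, seq_len) log-probabilities of the tokens sampled by the generator\n    # reward:    (batch,) per-sequence reward, e.g. D_phi(Y_1:T) from the discriminator\n    # REINFORCE maximizes E[reward * log pi(a|s)], so we minimize the negative\n    return -(reward.unsqueeze(1) * log_probs).sum(dim=1).mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SeqGAN",
"sec_num": "3.1"
},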
{
"text": "We propose OodGAN based on SeqGAN. There are two benefits of SeqGAN for our task of OOD data generation. SeqGAN produces sequences similar to the training data, and it works directly on input sequence unlike earlier model (Zheng et al., 2020) , which works on latent space. Eliminating the auto-encoder responsible for converting a sequence of words into latent space reduces the overall model size. Also, our experiments with Zheng et al. (2020) show a degradation in the overall performance due to the auto-encoder component (see the Results section for details).",
"cite_spans": [
{
"start": 222,
"end": 242,
"text": "(Zheng et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 427,
"end": 446,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "Since our task is to generate OOD data, we have the additional criterion that generated sequences should be close to the training IND sequences. However, we also want them not to belong to any IND intent class. We propose the OodGAN to achieve the two criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "The main difference between SeqGAN and OodGAN is the introduction of an auxiliary intent classifier. The auxiliary intent classifier C \u03c8 estimates the probability C \u03c8 (z i |Y ) of example Y belonging into intent class z i . The task of the auxiliary intent classifier is to produce an additional reward signal. The reward signal guides the generator to produce a sequence not belonging to any IND intent class. The reward R C \u03c8 coming from the auxiliary intent classifier for each generated example is defined as Shannon's Entropy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "R C \u03c8 = \u2212 m i=1 C \u03c8 (z i |Y ) \u2022 log(C \u03c8 (z i |Y ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": ", where m is the number of IND intent classes. The intuition for using Shannon's Entropy is that we want to reward a generator for producing examples for which the auxiliary intent classifier cannot clearly assign one of IND classes. In other words, the auxiliary classifier should assign a nearly uniform distribution across all intent classes for a good generated example. The generator obtains a high reward for such examples because the uniform distribution has the highest Shannon's Entropy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
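{
"text": "A minimal sketch of the entropy reward R_{C_ψ} defined above (ours; the auxiliary classifier is assumed to be any model returning logits over the m IND intents):\n\nimport torch\nimport torch.nn.functional as F\n\ndef entropy_reward(intent_logits: torch.Tensor, eps: float = 1e-12):\n    # intent_logits: (batch, m) auxiliary-classifier scores for generated sequences Y\n    p = F.softmax(intent_logits, dim=-1)            # C_psi(z_i | Y)\n    # Shannon entropy; maximal when the distribution over IND intents is uniform\n    return -(p * torch.log(p + eps)).sum(dim=-1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},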
{
"text": "We train the auxiliary intent classifier to predict one of the classes z 1...m for each training IND example X 1...n during the pre-training step. We do not have to retrain it during adversarial training because IND intent classes' distribution does not change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "The goal of the generator is to generate a sequence that maximizes the expected sum of rewards from discriminator D \u03c6 (the estimated probability of the sequence being real), and auxiliary intent classifier C \u03c8 (Shannon's Entropy calculated using estimated probabilities of sequence belonging to IND intent classes by auxiliary intent classifier).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "Empirically, we evaluated different training strategies. We found that optimizing generator G using only the discriminator's reward first, followed by using only the auxiliary intent classifier reward, and then repeating the process for each training batch produced the most stable results. This worked better than summing up the rewards from the discriminator and auxiliary intent classifier. When we tried summing up the two rewards, we noticed that the generator tended to collapse into a state in which it generated a single sequence highly rewarded by the auxiliary intent classifier, even though this did not happen for all training runs. We observed this situation even when we normalized rewards to a value between 0 and 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "We also observed that part of the examples generated by OodGAN is semantically similar to some IND training example or is generated multiple times. Examples that are identical or too close to IND examples are problematic and confuse the OOD classifier. Duplicated examples do not represent the OOD distribution effectively. For those reasons, we removed with an automatic filter the generated OOD examples that are identical or similar to IND examples or that are generated repeatedly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
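{
"text": "The paper does not spell out the filter mechanics; one possible realization (an assumption, using exact-match deduplication plus cosine similarity over caller-supplied sentence vectors, e.g. averaged fastText embeddings) is sketched below:\n\nimport numpy as np\n\ndef filter_generated(ood_examples, ind_vectors, embed, sim_threshold=0.95):\n    # ood_examples: generated sentences; ind_vectors: (n, d) array of IND sentence vectors\n    # embed(sentence) -> (d,) vector; embed and sim_threshold are placeholder assumptions\n    kept, seen = [], set()\n    for sent in ood_examples:\n        if sent in seen:                      # drop repeatedly generated examples\n            continue\n        seen.add(sent)\n        v = embed(sent)\n        sims = ind_vectors @ v / (np.linalg.norm(ind_vectors, axis=1) * np.linalg.norm(v) + 1e-12)\n        if sims.max() < sim_threshold:        # drop examples identical or too close to IND examples\n            kept.append(sent)\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},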
{
"text": "To summarize, OodGAN's training procedure has the following steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "(1) Train Auxiliary classifier: First train auxiliary classifier to predict the classes z 1...m for IND data X 1...n until convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "(2) Train Generator as Language Model: Next, train the generator on the IND data X 1...n as a language model until it converges. Thanks to this step, it is easier for the generator to fool the discriminator from the start of the adversarial training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "(3) Train Discriminator: Generate adversarial examples from the generator. This training step helps the discriminator to provide a useful reward signal from the start of adversarial training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
{
"text": "(4) Adversarial Training: Perform adversarial training of generator and discriminator. There are three optimization steps for each training batch. First, optimize the generator using reward from discriminator as proposed by . Next, optimize the generator using a reward from the auxiliary classifier. Lastly, optimize the discriminator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},
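{
"text": "The four steps can be summarized in the following high-level training-loop sketch (ours; the component objects and their method names are placeholders for the generator, discriminator, and auxiliary classifier described in this section):\n\ndef train_oodgan(aux_clf, generator, discriminator, ind_batches, n_adv_epochs=50):\n    # (1) pretrain the auxiliary intent classifier on IND data\n    aux_clf.fit(ind_batches)\n    # (2) pretrain the generator as a language model on IND data\n    generator.fit_lm(ind_batches)\n    # (3) pretrain the discriminator on real IND vs. generated sequences\n    discriminator.fit(real=ind_batches, fake=generator.sample(len(ind_batches)))\n    # (4) adversarial training: three optimization steps per batch\n    for _ in range(n_adv_epochs):\n        for batch in ind_batches:\n            fake = generator.sample(len(batch))\n            generator.policy_gradient_step(reward=discriminator.reward(fake))    # discriminator reward\n            generator.policy_gradient_step(reward=aux_clf.entropy_reward(fake))  # auxiliary-classifier reward\n            discriminator.train_step(real=batch, fake=fake)\n    return generator",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OodGAN",
"sec_num": "3.2"
},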
{
"text": "We conducted experiments on ROSTD (Gangal et al., 2019) and OSQ (Larson et al., 2019) datasets.",
"cite_spans": [
{
"start": 34,
"end": 55,
"text": "(Gangal et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 64,
"end": 85,
"text": "(Larson et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Datasets",
"sec_num": "4"
},
{
"text": "\u2022 ROSTD contains three categories (alarm, reminder, and weather), each consisting of four intents. The dataset consists of 30,000 training, 4,000 validation and 8,000 testing IND examples. OOD examples were selected in a way that they do not belong to any category and do not share patterns with any IND examples. There are also no OOD examples in the training set of the dataset. The testing set contains 4,500 OOD examples. IND and OOD examples from ROSTD are listed in Table 5. \u2022 OSQ consists of 150 intents. The datases consists of 15,000 training, 3,000 validation and 4,500 testing IND examples. The dataset was created using Mechanical Turk. The turkers were given the name of the intent, and they were supposed to write intent examples fitting into the intent. The dataset authors manually went through examples and moved examples not fitting into the given intent class to the OOD class. In this way, OOD examples share the same patterns as IND examples. The OSQ dataset contains 100 training OOD examples. However, we decided not to use them for training due to the nature of our experiments. There are also 100 validation and 1,000 testing OOD examples.",
"cite_spans": [],
"ref_spans": [
{
"start": 472,
"end": 480,
"text": "Table 5.",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments 4.1 Datasets",
"sec_num": "4"
},
{
"text": "We evaluate the model on the downstream task of OOD data detection and measure the change in OOD data detection metrics. We designed experiments in the following way. We train the OodGAN on IND training examples as a first step. Next, we generate the OOD examples using the trained model of OodGAN. We generate the same number of OOD examples as a number of IND examples in the training set. In a third step, we train the threshold-based OOD detection model using crossentropy loss on training IND examples and negative entropy loss on generated OOD examples. In the last step, we evaluate both IND and OOD metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Process",
"sec_num": "4.2"
},
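{
"text": "A minimal sketch of the combined objective used in the third step (ours; the relative weight beta of the two terms is an assumption):\n\nimport torch\nimport torch.nn.functional as F\n\ndef ood_detection_loss(ind_logits, ind_labels, ood_logits, beta=1.0, eps=1e-12):\n    # cross-entropy on IND examples: push up the correct-class probability\n    ce = F.cross_entropy(ind_logits, ind_labels)\n    # negative entropy on generated OOD examples: minimizing it flattens the distribution\n    p = F.softmax(ood_logits, dim=-1)\n    neg_entropy = (p * torch.log(p + eps)).sum(dim=-1).mean()\n    return ce + beta * neg_entropy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Process",
"sec_num": "4.2"
},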
{
"text": "We evaluate the OodGAN by measuring metrics on the downstream task of OOD detection. We measure AUROC, AUPR, and FPRN metrics (Ren et al., 2019; Hendrycks and Gimpel, 2017; Hendrycks et al., 2019) to evaluate OodGAN's ability to generate OOD data that helps IC to distinguish IND and OOD input utterances. We treat OOD examples as the positive class.",
"cite_spans": [
{
"start": 126,
"end": 144,
"text": "(Ren et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 145,
"end": 172,
"text": "Hendrycks and Gimpel, 2017;",
"ref_id": "BIBREF3"
},
{
"start": 173,
"end": 196,
"text": "Hendrycks et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},
{
"text": "\u2022 AUROC The area under the receiver operating characteristic (ROC) curve. The score says the probability that a randomly selected OOD example will have a higher predicted probability of being an OOD than a randomly selected IND example. Higher AUROC score is better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},
{
"text": "\u2022 AUPR The area under the precision-recall curve when OOD inputs are treated as positive samples. AUPR calculates the average precision score for all recall values. Intuitively, the higher the classification threshold we select, the more OOD will be classified as OOD. However, we risk that more IND will be classified as OOD. AUPR expresses this risk. Higher AUPR score is better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},
{
"text": "\u2022 FPRN The false-positive rate (FPR) when the true positive rate (TPR) is N%. FPRN metric is a practical value in real-world application since it evaluates an OOD detection performance at a particular threshold. Lower FPRN means there is a smaller chance of IND examples triggering false alarm (IND getting classified as OOD) when the model's performance on OOD example is N%. We report FPR when TPR is 0.95 and 0.90. Lower FPRN score is better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},
{
"text": "We consider FPRN metric as the most practical value in real-world application since it evaluates an OOD detection performance at a particular threshold. Lower FPRN means there is a smaller chance of IND examples triggering false alarm (IND getting classified as OOD) when the model correctly recognizes N% of OOD examples. We also measure IND accuracy that evaluates generated OOD data's influence on the IC's ability to recognize the intents of IND data correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},
{
"text": "\u2022 IND accuracy The percentage of IND data that have assigned correct intent label. We expect that generated OOD examples cannot improve the IC's ability to recognize intent labels for ID. However, generated OOD examples can degrade the IC's ability to recognize IND intents. Thus, we measure the IND accuracy to evaluate whether generated OOD negatively impacts the IC. Higher IND accuracy is better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},
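{
"text": "These metrics can be computed from intent scores with standard tooling; a minimal sketch (ours, using scikit-learn) treats OOD as the positive class and uses 1 minus the maximum intent probability as the OOD score:\n\nimport numpy as np\nfrom sklearn.metrics import roc_auc_score, average_precision_score, roc_curve\n\ndef ood_metrics(ood_scores: np.ndarray, is_ood: np.ndarray, tpr_level: float = 0.95):\n    # ood_scores: higher means more likely OOD (e.g. 1 - max intent probability)\n    # is_ood: 1 for OOD (positive class), 0 for IND\n    auroc = roc_auc_score(is_ood, ood_scores)\n    aupr = average_precision_score(is_ood, ood_scores)\n    fpr, tpr, _ = roc_curve(is_ood, ood_scores)\n    fpr_at_tpr = fpr[np.argmax(tpr >= tpr_level)]  # FPR at the first threshold reaching the TPR level\n    return auroc, aupr, fpr_at_tpr",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics",
"sec_num": "4.3"
},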
{
"text": "We based our implementation on the Github repository 1 of SeqGAN implemented in PyTorch. The generator is one layer GRU recurrent neural network trained using Adam optimizer with a learning rate set to 0.001. Input to the generator is embedded with fastText embeddings (Joulin et al., 2016) trained on Wikipedia. The generator uses negative log-likelihood loss during LM training and policy gradient loss during GAN training. The discriminator is a two-layer bidirectional GRU recurrent neural network with a tanh activation function. Adagrad optimization is used for training the discriminator with a learning rate set to 0.1 and binary cross-entropy loss is optimized. The auxiliary classifier uses the convolutional neural network proposed by Kim (2014) , which has filters of size 2, 3, 4, and 5, and for each size, there are 256 filters. We used the LeakyReLU activation function and 0.5 dropout in output dense layers. The auxiliary classifier is trained using the Adam optimizer with a learning rate set to 0.0001 and cross-entropy loss is optimized. We show the comparison of number of parameters between OodGAN, SeqGAN, and Zheng et al. (2020) in Table 1 . Zheng et al. (2020) We first conducted experiments to replicate results reported by Zheng et al. (2020) on the OSQ dataset. # Parameters Zheng et al. (2020) 7M SeqGAN 800k OodGAN 2M Table 2 : OOD detection performance on the OSQ dataset with model proposed by Zheng et al. (2020) We created our implementation according to the paper's description because there is no publicly accessible implementation of their proposed model. We report results in Table 2 . We could not reproduce the number reported by Zheng et al. (2020) even though we implemented the model as was described in the paper. The experiments showed that the denoising auto-encoder is a weak part of the architecture. Its token accuracy of text reconstruction on the validation set was only 0.37%. Thus, the low performance of the autoencoder is the reason why the generator generates poor quality examples.",
"cite_spans": [
{
"start": 269,
"end": 290,
"text": "(Joulin et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 746,
"end": 756,
"text": "Kim (2014)",
"ref_id": "BIBREF7"
},
{
"start": 1113,
"end": 1152,
"text": "OodGAN, SeqGAN, and Zheng et al. (2020)",
"ref_id": null
},
{
"start": 1166,
"end": 1185,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 1250,
"end": 1269,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 1303,
"end": 1322,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 1426,
"end": 1445,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 1670,
"end": 1689,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 1156,
"end": 1163,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1348,
"end": 1355,
"text": "Table 2",
"ref_id": null
},
{
"start": 1614,
"end": 1621,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.4"
},
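{
"text": "A minimal sketch of the auxiliary intent classifier configuration described in the implementation details above (ours, following Kim (2014); the embedding dimension, vocabulary handling, and the 256-unit hidden layer are placeholder assumptions):\n\nimport torch\nimport torch.nn as nn\n\nclass AuxIntentClassifier(nn.Module):\n    def __init__(self, vocab_size, num_intents, emb_dim=300, sizes=(2, 3, 4, 5), n_filters=256):\n        super().__init__()\n        self.emb = nn.Embedding(vocab_size, emb_dim)   # initialized from fastText in the paper\n        self.convs = nn.ModuleList(nn.Conv1d(emb_dim, n_filters, k) for k in sizes)\n        self.out = nn.Sequential(\n            nn.Dropout(0.5), nn.Linear(n_filters * len(sizes), 256),\n            nn.LeakyReLU(), nn.Dropout(0.5), nn.Linear(256, num_intents))\n\n    def forward(self, tokens):                         # tokens: (batch, seq_len)\n        x = self.emb(tokens).transpose(1, 2)           # (batch, emb_dim, seq_len)\n        feats = [conv(x).max(dim=-1).values for conv in self.convs]  # max-over-time pooling\n        return self.out(torch.cat(feats, dim=-1))      # logits over IND intents",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.4"
},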
{
"text": "First, we want to compare OodGAN with baselines. We selected two baselines to evaluate improvements of our proposed OodGAN. Our baselines for the ROSTD dataset is our implementation of Zheng et al. (2020) and the work of Gangal et al. (2019) . The baseline for the OSQ dataset is our implementation of Zheng et al. (2020) . Table 3 shows results on ROSTD dataset and Table 4 shows results on OSQ dataset. Results on ROSTD data are promising. They show around 65% relative improvement in FPR 0.95 compared to baseline of our implementation of Zheng et al. (2020) and around 5% absolute improvement in FPR 0.95 compared to baseline of Gangal et al. (2019) . For the more challenging OSQ dataset, there is around 28% relative improvement in both FPR 0.95 and FPR 0.90 compared to the baseline.",
"cite_spans": [
{
"start": 185,
"end": 204,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 221,
"end": 241,
"text": "Gangal et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 302,
"end": 321,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 542,
"end": 561,
"text": "Zheng et al. (2020)",
"ref_id": "BIBREF15"
},
{
"start": 633,
"end": 653,
"text": "Gangal et al. (2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 324,
"end": 331,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 367,
"end": 374,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results on proposed model OodGAN",
"sec_num": "5.2"
},
{
"text": "To evaluate whether OodGAN helps the threshold-based OOD detection model to discriminate between OOD and IND examples, we plotted the histogram of the test data's maximum intent probability for system trained with and without ROSTD (Gangal et al., 2019) AUROC Figure 3 shows the histogram for ROSTD dataset. Probability scores for IND (blue) and OOD (red) data are spread out over all probability values when there are no OOD data used for model training. Thus it is hard to select a well discriminating threshold. The result of the model trained with OOD data is significantly better. The graph shows a clear separation between IND and OOD data, with IND data receiving high intent score and OOD data receiving a low score. The OOD detection model is combined with IC in many real-world applications. For this reason, the joint accuracy of OOD detection and IND intent recognition is an important metric. We show how the joint accuracy depends on the selected threshold in Figure 4 . To draw this graph, we select different thresholds, and we tag examples having an intent score below the threshold as OOD. We classify the intent for the rest. Our proposed approach leads to high joint accuracy of OOD detection and IND intent recognition with low threshold values. That confirms that models trained with generated OOD assign low scores to OOD and high scores to IND examples. The separation between generated OOD examples and IND examples is visible in t-SNE (Hinton and Roweis, 2002) visualization as well. Figure 5 shows the t-SNE visualization of IND and generated OOD data. We can notice that generated data create recognizable clusters close to IND data but do not mix with it. Finally, we list OOD examples generated by OodGAN in table 5.",
"cite_spans": [
{
"start": 232,
"end": 253,
"text": "(Gangal et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 260,
"end": 268,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 974,
"end": 982,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 1510,
"end": 1518,
"text": "Figure 5",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Results on proposed model OodGAN",
"sec_num": "5.2"
},
{
"text": "This paper proposed a novel OOD data generation model OodGAN that generates OOD examples that improved OOD detection performance in a dialog system. The model does not require any OOD training examples. Moreover, the model does not rely on the auto-encoder to map utterances into latent space, reducing the model size. It models the data generator as a stochastic policy in reinforcement learning instead. The model uses two rewards for the generator. The discriminator's reward guides the generator to generate examples as close to the IND data as possible. The auxiliary intent classifier's reward guides the generator to generate examples with low probabilities for all intent classes. Our experiments show that OOD examples generated by OodGAN improve the performance of the OOD detection problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/suragnair/seqGAN",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Adversarial text generation without reinforcement learning",
"authors": [
{
"first": "David",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.06640"
]
},
"num": null,
"urls": [],
"raw_text": "David Donahue and Anna Rumshisky. 2018. Adversar- ial text generation without reinforcement learning. arXiv preprint arXiv:1810.06640.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Likelihood ratios and generative classifiers for unsupervised out-of-domain detection in task oriented dialog",
"authors": [
{
"first": "Varun",
"middle": [],
"last": "Gangal",
"suffix": ""
},
{
"first": "Abhinav",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Arash",
"middle": [],
"last": "Einolghozati",
"suffix": ""
},
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.12800"
]
},
"num": null,
"urls": [],
"raw_text": "Varun Gangal, Abhinav Arora, Arash Einolghozati, and Sonal Gupta. 2019. Likelihood ratios and gen- erative classifiers for unsupervised out-of-domain detection in task oriented dialog. arXiv preprint arXiv:1912.12800.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On calibration of modern neural networks",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Pleiss",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Kilian Q",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.04599"
]
},
"num": null,
"urls": [],
"raw_text": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Wein- berger. 2017. On calibration of modern neural net- works. arXiv preprint arXiv:1706.04599.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A baseline for detecting misclassified and out-ofdistribution examples in neural networks",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of- distribution examples in neural networks. ArXiv, abs/1610.02136.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deep anomaly detection with outlier exposure",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Hendrycks",
"suffix": ""
},
{
"first": "Mantas",
"middle": [],
"last": "Mazeika",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Hendrycks, Mantas Mazeika, and Thomas G. Di- etterich. 2019. Deep anomaly detection with outlier exposure. ArXiv, abs/1812.04606.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Stochastic neighbor embedding. Advances in neural information processing systems",
"authors": [
{
"first": "Geoffrey",
"middle": ["E"],
"last": "Hinton",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Roweis",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "15",
"issue": "",
"pages": "857--864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton and Sam Roweis. 2002. Stochastic neighbor embedding. Advances in neural informa- tion processing systems, 15:857-864.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bag of tricks for efficient text classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.01759"
]
},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Simple and scalable predictive uncertainty estimation using deep ensembles",
"authors": [
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Pritzel",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Blundell",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "6402--6413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. 2017. Simple and scalable predic- tive uncertainty estimation using deep ensembles. In Advances in neural information processing systems, pages 6402-6413.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An evaluation dataset for intent classification and out-of-scope prediction",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Larson",
"suffix": ""
},
{
"first": "Anish",
"middle": [],
"last": "Mahendran",
"suffix": ""
},
{
"first": "Joseph",
"middle": ["J"],
"last": "Peper",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Parker",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Jonathan",
"middle": ["K"],
"last": "Kummerfeld",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Leach",
"suffix": ""
},
{
"first": "Michael",
"middle": ["A"],
"last": "Laurenzano",
"suffix": ""
},
{
"first": "Lingjia",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.02027"
]
},
"num": null,
"urls": [],
"raw_text": "Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, et al. 2019. An evalua- tion dataset for intent classification and out-of-scope prediction. arXiv preprint arXiv:1909.02027.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Contextual out-of-domain utterance handling with counterfeit data augmentation",
"authors": [
{
"first": "Sungjin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Shalyminov",
"suffix": ""
}
],
"year": 2019,
"venue": "ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "7205--7209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungjin Lee and Igor Shalyminov. 2019. Contextual out-of-domain utterance handling with counterfeit data augmentation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7205-7209. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Likelihood ratios for out-of-distribution detection",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fertig",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Snoek",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Poplin",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"A"
],
"last": "Depristo",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"V"
],
"last": "Dillon",
"suffix": ""
},
{
"first": "Balaji",
"middle": [],
"last": "Lakshminarayanan",
"suffix": ""
}
],
"year": 2019,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ren, Peter J. Liu, E. Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, and Bal- aji Lakshminarayanan. 2019. Likelihood ratios for out-of-distribution detection. In NeurIPS.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Out-of-domain detection based on generative adversarial network",
"authors": [
{
"first": "Seonghan",
"middle": [],
"last": "Ryu",
"suffix": ""
},
{
"first": "Sangjun",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Hwanjo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Gary Geunbae",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "714--718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seonghan Ryu, Sangjun Koo, Hwanjo Yu, and Gary Geunbae Lee. 2018. Out-of-domain detection based on generative adversarial network. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 714- 718.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning",
"authors": [
{
"first": "Ronald",
"middle": ["J"],
"last": "Williams",
"suffix": ""
}
],
"year": 1992,
"venue": "Machine learning",
"volume": "8",
"issue": "3-4",
"pages": "229--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229-256.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Seqgan: Sequence generative adversarial nets with policy gradient",
"authors": [
{
"first": "Lantao",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2017,
"venue": "Thirty-first AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-first AAAI conference on artificial intelligence.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Out-of-domain detection for natural language understanding in dialog systems",
"authors": [
{
"first": "Yinhe",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Guanyi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2020,
"venue": "Speech, and Language Processing",
"volume": "28",
"issue": "",
"pages": "1198--1209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhe Zheng, Guanyi Chen, and Minlie Huang. 2020. Out-of-domain detection for natural language under- standing in dialog systems. IEEE/ACM Transac- tions on Audio, Speech, and Language Processing, 28:1198-1209.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The illustration of SeqGAN",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Distributions of detection scores corresponding to the IND and OOD examples of the ROSTD dataset generated OOD examples.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Joint accuracy for ROSTD data across different threshold value. Points mark the highest joint accuracy of OOD detection and IND intent recognition.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "t-SNE visualization of the BERT feature vectors associated with the examples from the ROSTD dataset. IND examples are blue, testing OOD examples are red, and examples generated by OodGAN are green. IND Examples Should I be expecting rain today I need a new alarm for 8:30 am Show my reminders Show me the extended forecast please Snooze alarm for 5 more minutes OOD Examples Why do people watch television Where do pineapples grow Should I go to the mall today or tomorrow Tell me how to install a pool Transfer my PayPal balance to my bank Generated by OodGAN Remind me of my 4pm and Game of Thrones alarm When should I unpack Add day at workout please Give me my Sarasota appointment Do I need to pack to Galway this umbrella",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"content": "<table><tr><td/><td colspan=\"5\">: Number of parameters</td><td/><td/></tr><tr><td>OSQ (Larson et al., 2019)</td><td colspan=\"2\">AUROC \u2191 AUPR \u2191</td><td>FPR 0.95</td><td>\u2193</td><td>FPR 0.90</td><td>\u2193</td><td>IND Acc.</td><td>\u2191</td></tr><tr><td>Results reported by Zheng et al. (2020)</td><td>95.4</td><td>98.9</td><td>25.0</td><td/><td>10.1</td><td/><td>93.3</td></tr><tr><td>Our implementation of Zheng et al. (2020)</td><td>88.79</td><td>58.22</td><td colspan=\"6\">36.49 26.87 88.00</td></tr></table>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td colspan=\"9\">: OOD detection performance on the ROSTD</td></tr><tr><td>dataset</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>OSQ (Larson et al., 2019)</td><td colspan=\"2\">AUROC \u2191 AUPR \u2191</td><td>FPR 0.95</td><td>\u2193</td><td>FPR 0.90</td><td>\u2193</td><td>IND Acc.</td><td>\u2191</td></tr><tr><td>w.o. OOD</td><td>90.89</td><td>97.99</td><td colspan=\"6\">28.11 20.98 89.04</td></tr><tr><td>Our implementation of Zheng et al. (2020)</td><td>88.79</td><td>58.22</td><td colspan=\"6\">36.49 26.87 88.00</td></tr><tr><td>OodGAN</td><td>91.24</td><td>97.79</td><td colspan=\"6\">26.07 19.29 90.11</td></tr></table>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table/>",
"text": "OOD detection performance on the OSQ dataset",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"text": "Examples sampled from the IND and OOD test set of the ROSTD dataset and OOD utterances generated using OodGAN model.",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}