{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:55:13.675752Z"
},
"title": "Improving Slot Filling by Utilizing Contextual Information",
"authors": [
{
"first": "Amir",
"middle": [
"Pouran",
"Ben"
],
"last": "Veyseh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oregon",
"location": {
"settlement": "Eugene",
"region": "Oregon",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Franck",
"middle": [],
"last": "Dernoncourt",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Thien",
"middle": [
"Huu"
],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Oregon",
"location": {
"settlement": "Eugene",
"region": "Oregon",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Slot Filling (SF) is one of the sub-tasks of Spoken Language Understanding (SLU) which aims to extract semantic constituents from a given natural language utterance. It is formulated as a sequence labeling task. Recently, it has been shown that contextual information is vital for this task. However, existing models employ contextual information in a restricted manner, e.g., using self-attention. Such methods fail to distinguish the effects of the context on the word representation and the word label. To address this issue, in this paper, we propose a novel method to incorporate the contextual information in two different levels, i.e., representation level and task-specific (i.e., label) level. Our extensive experiments on three benchmark datasets on SF show the effectiveness of our model leading to new state-of-theart results on all three benchmark datasets for the task of SF.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Slot Filling (SF) is one of the sub-tasks of Spoken Language Understanding (SLU) which aims to extract semantic constituents from a given natural language utterance. It is formulated as a sequence labeling task. Recently, it has been shown that contextual information is vital for this task. However, existing models employ contextual information in a restricted manner, e.g., using self-attention. Such methods fail to distinguish the effects of the context on the word representation and the word label. To address this issue, in this paper, we propose a novel method to incorporate the contextual information in two different levels, i.e., representation level and task-specific (i.e., label) level. Our extensive experiments on three benchmark datasets on SF show the effectiveness of our model leading to new state-of-theart results on all three benchmark datasets for the task of SF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Slot Filling (SF) is the task of identifying the semantic constituents expressed in a natural language utterance. It is one of the sub-tasks of spoken language understanding (SLU) and plays a vital role in personal assistant tools such as Siri, Alexa, and Google Assistant. This task is formulated as a sequence labeling problem. For instance, in the given sentence \"Play Signe Anderson chant music that is newest.\", the goal is to identify \"Signe Anderson\" as \"artist\", \"chant music\" as \"music-item\" and \"newest\" as \"sort\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
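To make the sequence-labeling formulation concrete, the short sketch below pairs the tokens of the example utterance with BIO-style slot tags. The tag names are illustrative assumptions; the actual tag inventory depends on the dataset.

```python
# Hypothetical BIO-style annotation of the example utterance
# (tag names are illustrative; the real label set is dataset-specific).
tokens = ["Play", "Signe", "Anderson", "chant", "music", "that", "is", "newest"]
labels = ["O", "B-artist", "I-artist", "B-music-item", "I-music-item", "O", "O", "B-sort"]

for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
```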
{
"text": "Early work on SF has employed feature engineering methods to train statistical models, e.g., Conditional Random Field (Raymond and Riccardi, 2007) . Later, deep learning emerged as a promising approach for SF (Yao et al., 2014; Peng et al., 2015; Liu and Lane, 2016) . The success of deep models could be attributed to pre-trained word embeddings to generalize words and deep learning architectures to compose the word embeddings to induce effective representations. In addition to improving word representation using deep models, Liu and Lane (2016) showed that incorporating the context of each word into its representation could improve the results. Concretely, the effect of using context in word representation is two-fold: (1) Representation Level: As the meaning of the word is dependent on its context, incorporating the contextual information is vital to represent the true meaning of the word in the sentence (2) Task Level: For SF, the label of the word is related to the other words in the sentence and providing information about the other words, in prediction layer, could improve the performance. Unfortunately, the existing work employs the context in a restricted manner, e.g., via attention mechanism, which obfuscates the model about the two aforementioned effects of the contextual information.",
"cite_spans": [
{
"start": 118,
"end": 146,
"text": "(Raymond and Riccardi, 2007)",
"ref_id": "BIBREF15"
},
{
"start": 209,
"end": 227,
"text": "(Yao et al., 2014;",
"ref_id": "BIBREF19"
},
{
"start": 228,
"end": 246,
"text": "Peng et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 247,
"end": 266,
"text": "Liu and Lane, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 531,
"end": 550,
"text": "Liu and Lane (2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to address the limitations of the prior work to exploit the context for SF, in this paper, we propose a multi-task setting to train the model. More specifically, our model is encouraged to explicitly ensure the two aforementioned effects of the contextual information for the task of SF. In particular, in addition to the main sequence labeling task, we introduce new sub-tasks to ensure each effect. Firstly, in the representation level, we enforce the consistency between the word representations and their context. This enforcement is achieved via increasing the Mutual Information (MI) between these two representations. Secondly, in the task level, we propose two new sub-tasks: (1) To predict the label of the word solely from its context and (2) To predict which labels exist in the given sentence in a multi-label classification setting. By doing so, we encourage the model to encode task-specific features in the context of each word. Our extensive experiments on three benchmark datasets, empirically prove the effectiveness of the proposed model leading to new the state-of-the-art results on all three datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the literature, Slot Filling (SF), is categorized as one of the sub-tasks of spoken language understanding (SLU). Early work employed feature engineering for statistical models, e.g., Conditional Random Field (Raymond and Riccardi, 2007) . Due to the lack of generalisation ability of feature based models, deep learning based models superseded them (Yao et al., 2014; Peng et al., 2015; Kurata et al., 2016; Hakkani-T\u00fcr et al., 2016) . Also, joint models to simultaneously predict the intent of the utterance and to extract the semantic slots has also gained a lot of attention (Guo et al., 2014; Liu and Lane, 2016; Zhang and Wang, 2016; Wang et al., 2018; Goo et al., 2018; Qin et al., 2019; E et al., 2019) . In addition to the supervised settings, recently other setting such as progressive learning (Shen et al., 2019) or zero-shot learning has also been studied (Shah et al., 2019) . To the best of our knowledge, none of the existing work introduces a multi-task learning solely for the SF to incorporate the contextual information in both representation and task levels.",
"cite_spans": [
{
"start": 212,
"end": 240,
"text": "(Raymond and Riccardi, 2007)",
"ref_id": "BIBREF15"
},
{
"start": 353,
"end": 371,
"text": "(Yao et al., 2014;",
"ref_id": "BIBREF19"
},
{
"start": 372,
"end": 390,
"text": "Peng et al., 2015;",
"ref_id": "BIBREF13"
},
{
"start": 391,
"end": 411,
"text": "Kurata et al., 2016;",
"ref_id": "BIBREF9"
},
{
"start": 412,
"end": 437,
"text": "Hakkani-T\u00fcr et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 582,
"end": 600,
"text": "(Guo et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 601,
"end": 620,
"text": "Liu and Lane, 2016;",
"ref_id": "BIBREF10"
},
{
"start": 621,
"end": 642,
"text": "Zhang and Wang, 2016;",
"ref_id": "BIBREF21"
},
{
"start": 643,
"end": 661,
"text": "Wang et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 662,
"end": 679,
"text": "Goo et al., 2018;",
"ref_id": "BIBREF4"
},
{
"start": 680,
"end": 697,
"text": "Qin et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 698,
"end": 713,
"text": "E et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 808,
"end": 827,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 872,
"end": 891,
"text": "(Shah et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our model is trained in a multi-task setting in which the main task is slot filling to identify the best possible sequence of labels for the given sentence. In the first auxiliary task we aim to increase consistency between the word representation and its context. The second auxiliary task is to enhance task specific information in contextual information. In this section, we explain each of these tasks in more details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "Formally, the input to a SF model is a sequence of words X = [x 1 , x 2 , . . . , x n ] and our goal is to predict the sequence of labels Y = [y 1 , y 2 , . . . , y n ]. In our model, the word x i is represented by vector e i which is the concatenation of the pre-trained word embedding and POS tag embedding of the word x i . In order to obtain a more abstract representation of the words, we employ a Bi-directional Long Short-Term Memory (BiLSTM) over the word rep-resentations E = [e 1 , e 2 , . . . , e n ] to generate the abstract vectors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slot Filling",
"sec_num": "3.1"
},
{
"text": "H = [h 1 , h 2 , . . . , h n ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slot Filling",
"sec_num": "3.1"
},
{
"text": "The vector h i is the final representation of the word x i and is fed into a two-layer feed forward neural net to compute the label scores s i for the given word, s i = F F (h i ). As the task of SF is formulated as a sequence labeling task, we exploit a conditional random field (CRF) layer as the final layer of SF prediction. More specifically, the predicted label scores S = [s 1 , s 2 , . . . , s n ] are provided as emission score to the CRF layer to predict the label sequence\u0176 = [\u0177 1 ,\u0177 2 , . . . ,\u0177 n ]. To train the model, the negative log-likelihood is used as the loss function for SF prediction, i.e., L pred .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Slot Filling",
"sec_num": "3.1"
},
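A minimal PyTorch-style sketch of the encoder described above: pre-trained word embeddings concatenated with POS-tag embeddings, a BiLSTM producing h_i, and a two-layer feed-forward network producing the label scores s_i = FF(h_i). All names and sizes are assumptions for illustration, and a per-token cross-entropy is used as a simplified stand-in for the CRF loss L_pred (the paper feeds the scores to a CRF layer as emission scores).

```python
import torch
import torch.nn as nn

class SlotFillingEncoder(nn.Module):
    """Sketch of the encoder: e_i = [word_emb; pos_emb], h_i from a BiLSTM,
    s_i = FF(h_i).  Dimensions and names are illustrative assumptions."""

    def __init__(self, vocab_size, pos_size, num_labels,
                 word_dim=300, pos_dim=30, hidden_dim=200):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)  # would be GloVe-initialized
        self.pos_emb = nn.Embedding(pos_size, pos_dim)
        self.bilstm = nn.LSTM(word_dim + pos_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.ff = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),
        )

    def forward(self, word_ids, pos_ids):
        e = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
        h, _ = self.bilstm(e)   # h: (batch, seq_len, 2 * hidden_dim)
        s = self.ff(h)          # per-token label scores s_i
        return h, s

# In the paper the scores s are passed to a CRF layer as emission scores and the
# sequence-level negative log-likelihood L_pred is minimized; as a simplified
# stand-in, a per-token cross-entropy over the scores is shown here instead.
encoder = SlotFillingEncoder(vocab_size=1000, pos_size=20, num_labels=10)
word_ids = torch.randint(0, 1000, (2, 7))
pos_ids = torch.randint(0, 20, (2, 7))
gold_labels = torch.randint(0, 10, (2, 7))
h, s = encoder(word_ids, pos_ids)
loss_pred = nn.functional.cross_entropy(s.reshape(-1, 10), gold_labels.reshape(-1))
```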
{
"text": "In this sub-task we aim to increase the consistency between the word representation and its context. To obtain the context of each word, we use max pooling over the outputs of the BiLSTM for all words of the sentence excluding the word itself,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "h c i = M axP ooling(h 1 , h 2 , ..., h n /h i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
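A small sketch of one way the context vector h^c_i could be computed from the BiLSTM outputs: max pooling over all positions except i, implemented here by masking the excluded position with -inf before taking the maximum. This is an assumption about the implementation, not the authors' exact code.

```python
import torch

def context_vectors(h):
    """h: (batch, seq_len, dim) BiLSTM outputs.  Returns a tensor of the same
    shape whose i-th row is MaxPooling(h_1, ..., h_n) with h_i excluded."""
    batch, seq_len, dim = h.shape
    # One copy of the full sequence per target position i.
    expanded = h.unsqueeze(1).expand(batch, seq_len, seq_len, dim).clone()
    # Mask position i in the i-th copy with -inf so it can never win the max.
    idx = torch.arange(seq_len)
    expanded[:, idx, idx, :] = float("-inf")
    h_c, _ = expanded.max(dim=2)
    return h_c

h = torch.randn(2, 5, 8)       # BiLSTM outputs h_1, ..., h_n
h_c = context_vectors(h)       # h_c[:, i] = MaxPooling(h_1, ..., h_n / h_i)
```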
{
"text": "We aim to increase the consistency between vectors h i and h c i . To this end, we propose to maximize the Mutual Information (MI) between the word representation and its context. In information theory, MI evaluates how much information we know about one random variable if the value of another variable is revealed. Formally, the mutual information between two random variable X 1 and X 2 is obtained by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M I(X 1 , X 2 ) = X 1 X 2 P (X 1 , X 2 )\u2022 log P (X 1 , X 2 ) P (X 1 )P (X 2 ) dX 1 dX 2",
"eq_num": "(1)"
}
],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "Using this definition of MI, we can reformulate the MI equation as KL-Divergence between the joint distribution P X 1 X 2 = P (X 1 , X 2 ) and the product of marginal distributions P X 1 X 2 = P (X 1 )P (X 2 ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "M I(X 1 , X 2 ) = D KL (P X 1 X 2 ||P X 1 X 2 ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
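To make the relation between Equations 1 and 2 concrete, the toy example below computes the mutual information of a small discrete joint distribution both directly (Equation 1, with sums in place of integrals) and as the KL divergence between the joint distribution and the product of the marginals (Equation 2); the two values coincide. The probability values are illustrative only.

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions given as arrays of the same shape."""
    return np.sum(p * np.log(p / q))

# A toy joint distribution P(X1, X2) over two binary variables (illustrative numbers).
joint = np.array([[0.30, 0.10],
                  [0.15, 0.45]])
p_x1 = joint.sum(axis=1, keepdims=True)   # marginal P(X1)
p_x2 = joint.sum(axis=0, keepdims=True)   # marginal P(X2)
product_of_marginals = p_x1 * p_x2

# Equation 1 (discrete form): MI = sum_{x1,x2} P(x1,x2) log[ P(x1,x2) / (P(x1)P(x2)) ]
mi_direct = np.sum(joint * np.log(joint / product_of_marginals))

# Equation 2: the same quantity as a KL divergence between the joint and the product.
mi_as_kl = kl_divergence(joint, product_of_marginals)

print(mi_direct, mi_as_kl)   # the two values are identical
```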
{
"text": "Based on this understanding of MI, if the two random variables are dependent then the mutual information between them (i.e. the KL-Divergence in Equation 2) would be the highest. Consequently, if the representations h i and h c i are encouraged to have large mutual information, we expect them to share more information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "Computing the KL-Divergence in equation 2 could be prohibitively expensive (Belghazi et al., 2018 ), so we need to estimate it. To this end, we exploit the adversarial method introduced in (Hjelm et al., 2019) . In this method, a discriminator is employed to distinguish between samples from the joint distribution and the product of the marginal distributions to estimate the KL-Divergence in Equation 2. In our case, the sample from joint distribution is the concatenation [h i : h c i ] and the sample from the product of the marginal distribution is the concatenation",
"cite_spans": [
{
"start": 75,
"end": 97,
"text": "(Belghazi et al., 2018",
"ref_id": "BIBREF0"
},
{
"start": 189,
"end": 209,
"text": "(Hjelm et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "[h i : h c j ] where h c",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "j is a context vector randomly chosen from the words in the mini-batch. Formally:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L disc = 1 n \u03a3 n i=1 \u2212 (log(D[h i , h c i ])+ log(1 \u2212 D([h i , h c j ])))",
"eq_num": "(3)"
}
],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
{
"text": "Where D is the discriminator. This loss is added to the final loss function of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Consistency between Word and Context",
"sec_num": "3.2"
},
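A minimal sketch of the discriminator loss in Equation 3, assuming a small feed-forward discriminator D over the concatenation of a word representation and a context vector. Positive pairs come from the joint distribution (h_i with its own context h^c_i) and negative pairs from the product of marginals (h_i with the context h^c_j of a randomly chosen position); the discriminator architecture and the permutation-based negative sampling are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MIDiscriminator(nn.Module):
    """D([h_i : h^c]) -> probability that the pair was drawn from the joint distribution."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, 1), nn.Sigmoid(),
        )

    def forward(self, h, h_c):
        return self.net(torch.cat([h, h_c], dim=-1)).squeeze(-1)

def mi_discriminator_loss(disc, h, h_c):
    """Equation 3: positive pairs (h_i, h^c_i) vs. negative pairs (h_i, h^c_j),
    with j obtained here by permuting the flattened batch of context vectors."""
    flat_h = h.reshape(-1, h.size(-1))
    flat_c = h_c.reshape(-1, h_c.size(-1))
    perm = torch.randperm(flat_c.size(0))
    pos = disc(flat_h, flat_c)         # samples from the joint distribution
    neg = disc(flat_h, flat_c[perm])   # samples from the product of marginals
    eps = 1e-8                         # numerical safety, not in the equation
    return -(torch.log(pos + eps) + torch.log(1.0 - neg + eps)).mean()

dim = 16
disc = MIDiscriminator(dim)
h = torch.randn(2, 5, dim)     # word representations h_i
h_c = torch.randn(2, 5, dim)   # context vectors h^c_i
loss_disc = mi_discriminator_loss(disc, h, h_c)
```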
{
"text": "In addition to increasing consistency between the word representation and its context representation, we aim to increase the task-specific information in contextual representations. To this end, we train the model on two auxiliary tasks. The first one aims to use the context of each word to predict the label of that word. The goal of the second auxiliary task is to use the global context information to predict sentence level labels. We describe each of these tasks in more details in the following subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction by Contextual Information",
"sec_num": "3.3"
},
{
"text": "In this sub-task, we use the context representations of each word to predict its label. It will increase the information encoded in the context of the word about the label of the word. We use the same context vector h c i for the i-th word as described in the previous section. This vector is fed into a two-layer feed forward neural network with a softmax layer at the end to output the probabilities for each class,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "P i (.|{x 1 , x 2 , ..., x n }/x i ) = sof tmax(F F (h c i )).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "Finally, we use the following negative log-likelihood as the loss function to be optimized during training:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L wp = 1 n \u03a3 n i=1 \u2212 log(P i (y i |{x 1 , x 2 , ..., x n }/x i ))",
"eq_num": "(4)"
}
],
"section": "Predicting Word Label",
"sec_num": null
},
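A short sketch of the word-level auxiliary task: the context vector h^c_i is passed through a two-layer feed-forward network, and the softmax negative log-likelihood of the gold label (Equation 4) is minimized; PyTorch's cross_entropy combines the softmax and the negative log-likelihood. Layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

hidden_dim, num_labels = 16, 10

# Two-layer feed-forward network predicting P_i(. | context) from h^c_i.
word_label_ff = nn.Sequential(
    nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
    nn.Linear(hidden_dim, num_labels),
)

h_c = torch.randn(2, 5, hidden_dim)                 # context vectors h^c_i
gold_labels = torch.randint(0, num_labels, (2, 5))  # gold slot labels y_i

logits = word_label_ff(h_c)
# cross_entropy = softmax + negative log-likelihood, i.e. L_wp in Equation 4.
loss_wp = nn.functional.cross_entropy(logits.reshape(-1, num_labels),
                                      gold_labels.reshape(-1))
```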
{
"text": "Predicting Sentence Labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "The word label prediction enforces the context of each word to contain information about its label but it lacks a global view about the entire sentence. In order to increase the global information about the sentence in the representation of the words, we aim to predict the labels existing in a sentence from the representations of its words. More specifically, we introduce a new sub-task to predict which labels exists in the given sentence. We formulate this task as a multi-label classification problem. Formally, for each sentence, we predict the binary vector",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "Y s = [y s 1 , y s 2 , ..., y s |L| ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "where L is the set of all possible word labels. In the vector Y s , y s i is 1 if the sentence X contains i-th label from the label set L otherwise it is 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
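The sentence-level target Y^s is a multi-hot vector over the label set L; a tiny sketch of how it could be built from the gold token labels of one sentence (the label names are placeholders):

```python
# Building Y^s from the gold token labels of one sentence (label names are placeholders).
label_set = ["artist", "music-item", "sort", "album"]        # the label set L
token_labels = ["artist", "artist", "music-item", "sort"]    # labels occurring in the sentence

y_s = [1 if label in set(token_labels) else 0 for label in label_set]
# y_s == [1, 1, 1, 0]: entry k is 1 iff the k-th label of L appears in the sentence.
```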
{
"text": "To predict vector Y s , we first compute the representation of the sentence. This representation is obtained by max pooling over the outputs of the BiLSTM, H = M axP ooling(h 1 , h 2 , ..., h n ). Afterwards, the vector H is fed into a two-layer feed forward neural net with a sigmoid activation function at the end to compute the probability distribution of Y s (i.e., P k (.|x 1 , x 2 , ..., x n ) = \u03c3 k (F F (H)) for k-th label in L). Note that since this task is a multi-label classification, the number of neurons at the final layer is equal to |L|. We optimize the following binary cross-entropy loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L sp = 1 |L| \u03a3 |L| k=1 \u2212 (y s k \u2022 log(P k (y s k |x 1 , x 2 , ..., x n ))+ (1 \u2212 y s k ) \u2022 log(1 \u2212 P k (y s k |x 1 , x 2 , ..., x n )))",
"eq_num": "(5)"
}
],
"section": "Predicting Word Label",
"sec_num": null
},
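A short sketch of the sentence-level auxiliary task: max pooling over all BiLSTM outputs yields the sentence vector H, a two-layer feed-forward network with |L| sigmoid outputs predicts one probability per label, and the binary cross-entropy of Equation 5 is minimized. Sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

hidden_dim, num_labels = 16, 10

# Two-layer feed-forward network with |L| sigmoid outputs.
sentence_ff = nn.Sequential(
    nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
    nn.Linear(hidden_dim, num_labels), nn.Sigmoid(),
)

h = torch.randn(2, 5, hidden_dim)                    # BiLSTM outputs h_1, ..., h_n
H, _ = h.max(dim=1)                                  # sentence vector via max pooling
y_s = torch.randint(0, 2, (2, num_labels)).float()   # multi-hot sentence labels Y^s

probs = sentence_ff(H)
loss_sp = nn.functional.binary_cross_entropy(probs, y_s)   # Equation 5
```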
{
"text": "Finally, to train the entire model we optimize the following combined loss:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "L = L pred + \u03b1L discr + \u03b2L wp + \u03b3L sp (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
{
"text": "where \u03b1, \u03b2 and \u03b3 are the trade-off parameters to be tuned based on the development set performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicting Word Label",
"sec_num": null
},
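Finally, the combined objective of Equation 6 is a weighted sum of the four losses; a self-contained sketch with stand-in scalar losses (in training these would come from the components above):

```python
import torch

# Stand-in scalar losses; in training these come from the components described above.
loss_pred = torch.tensor(1.2, requires_grad=True)
loss_disc = torch.tensor(0.7, requires_grad=True)
loss_wp = torch.tensor(0.9, requires_grad=True)
loss_sp = torch.tensor(0.4, requires_grad=True)

# Equation 6: weighted sum with the trade-off parameters tuned on the development set
# (all set to 0.1 in the setup of Section 4.1).
alpha, beta, gamma = 0.1, 0.1, 0.1
loss = loss_pred + alpha * loss_disc + beta * loss_wp + gamma * loss_sp
loss.backward()   # one backward pass trains all tasks jointly
```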
{
"text": "We evaluate our model on three SF datasets. Namely, we employ ATIS (Hemphill et al., 1990) , SNIPS (Coucke et al., 2018) and EditMe (Manuvinakurike et al., 2018) . ATIS and SNIPS are two widely adopted SF dataset and EditMe is a SF dataset for editing images with four slot labels (i.e., Action, Object, Attribute and Value). The statistics of the datasets are presented in the Appendix A. Based on the experiments on EditMe development set, the following parameters are selected: GloVe embedding with 300 dimensions to initialize word embedding ; 200 dimensions for the all hidden layers in LSTM and feed forward neural net; 0.1 for trade-off parameters \u03b1, \u03b2 and \u03b3; and Adam optimizer with learning rate 0.001. Following previous work, we use F1-score to evaluate the model.",
"cite_spans": [
{
"start": 67,
"end": 90,
"text": "(Hemphill et al., 1990)",
"ref_id": "BIBREF7"
},
{
"start": 99,
"end": 120,
"text": "(Coucke et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 132,
"end": 161,
"text": "(Manuvinakurike et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Parameters",
"sec_num": "4.1"
},
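For reference, the hyper-parameters listed above can be collected into a single configuration; a sketch in which the key names are naming conventions assumed here rather than taken from the paper:

```python
# Hyper-parameters reported in Section 4.1 (the key names are assumptions made here).
config = {
    "word_embedding": "GloVe, 300 dimensions",
    "lstm_hidden_dim": 200,
    "ff_hidden_dim": 200,
    "alpha": 0.1,    # weight of L_disc
    "beta": 0.1,     # weight of L_wp
    "gamma": 0.1,    # weight of L_sp
    "optimizer": "Adam",
    "learning_rate": 0.001,
    "metric": "F1-score",
}
```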
{
"text": "We compare our model with other deep learning based models for SF. Namely, we compare the proposed model with Joint Seq (Hakkani-T\u00fcr et al., 2016) , Attention-Based (Liu and Lane, 2016) , Sloted-Gated (Goo et al., 2018) , SF-ID (E et al., 2019) , CAPSULE-NLU , and SPTID (Qin et al., 2019) . Note that we compare our model with the single-task version of these baselines. We also compare our model with other sequence labeling models which are not specifically proposed for SF. Namely, we compare the model with CVT (Clark et al., 2018) and GCDT . CVT aims to improve input representation using improving partial views and GCDT exploits contextual information to enhance word representations via concatenation of context and word representation. Table 1 reports the performance of the model and baselines. The proposed model outperforms all baselines in all datasets. Among all baselines, GCDT achieves best results on two out of three datasets. This superiority shows the importance of explicitly incorporating the contextual information into word representation for SF. However, the proposed model improves the performance substantially on all datasets by explicitly encouraging the consistency between a word and its context in representation level and task-specific (i.e., label) level. Also, Table 1 shows that EditMe dataset is more challenging than the other datasets, despite having fewer slot types. This difficulty could be explained by the limited number of training examples and more diversity in sentence structures in this dataset.",
"cite_spans": [
{
"start": 120,
"end": 146,
"text": "(Hakkani-T\u00fcr et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 165,
"end": 185,
"text": "(Liu and Lane, 2016)",
"ref_id": "BIBREF10"
},
{
"start": 188,
"end": 219,
"text": "Sloted-Gated (Goo et al., 2018)",
"ref_id": null
},
{
"start": 222,
"end": 244,
"text": "SF-ID (E et al., 2019)",
"ref_id": null
},
{
"start": 271,
"end": 289,
"text": "(Qin et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 516,
"end": 536,
"text": "(Clark et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 746,
"end": 753,
"text": "Table 1",
"ref_id": null
},
{
"start": 1297,
"end": 1304,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.2"
},
{
"text": "Our model consists of three major components: (1) MI: Increasing mutual information between word and its context representations (2) WP: Predicting the label of the word using its context to increase word level task-specific information in the word context (3) SP: Predicting which labels exist in the given sentence in a multi-label classification to increase sentence level task-specific information in word representations. In order to analyze the contribution of each of these components, we also evaluate the model performance when we remove one of the components and retrain the model. The results are reported in Table 2 . This Table shows that all components are required for the model to have its best performance. Among all components, the word level prediction using the contextual information has the major contribution to the model performance. This fact shows that contextual information trained to be informative about the final task is necessary to obtain the representations which could boost the performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 620,
"end": 627,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 635,
"end": 646,
"text": "Table shows",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ablation Study",
"sec_num": "4.4"
},
{
"text": "In this work, we introduced a new deep model for the task of Slot Filling (SF). In a multi-task setting, our model increases the mutual information between the word representation and its context, improves label information in the context and predicts which concepts are expressed in the given sentence. Our experiments on three benchmark datasets show the effectiveness of our model by achieving the state-of-the-art results on all datasets for the SF task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "In our experiments, we employ three benchmark datasets, ATIS, SNIPS and EditMe. Table 3 presents the statistics of these three datasets. Moreover, in order to provide more insight into the Ed-itMe dataset, we report the labels statistics of this dataset in Table 4 ",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 87,
"text": "Table 3",
"ref_id": null
},
{
"start": 257,
"end": 264,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Dataset Statistics",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mutual information neural estimation",
"authors": [
{
"first": "Mohamed",
"middle": [
"Ishmael"
],
"last": "Belghazi",
"suffix": ""
},
{
"first": "Aristide",
"middle": [],
"last": "Baratin",
"suffix": ""
},
{
"first": "Sai",
"middle": [],
"last": "Rajeswar",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R",
"middle": [
"Devon"
],
"last": "Hjelm",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
}
],
"year": 2018,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Ishmael Belghazi, Aristide Baratin, Sai Ra- jeswar, Sherjil Ozair, Yoshua Bengio, R. Devon Hjelm, and Aaron C. Courville. 2018. Mutual in- formation neural estimation. In ICML.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Semi-supervised sequence modeling with cross-view training",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2018,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Minh-Thang Luong, Christopher D Man- ning, and Quoc V Le. 2018. Semi-supervised sequence modeling with cross-view training. In EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces",
"authors": [
{
"first": "Alice",
"middle": [],
"last": "Coucke",
"suffix": ""
},
{
"first": "Alaa",
"middle": [],
"last": "Saade",
"suffix": ""
},
{
"first": "Adrien",
"middle": [],
"last": "Ball",
"suffix": ""
},
{
"first": "Th\u00e9odore",
"middle": [],
"last": "Bluche",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Caulier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Leroy",
"suffix": ""
},
{
"first": "Cl\u00e9ment",
"middle": [],
"last": "Doumouro",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "Gisselbrecht",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Caltagirone",
"suffix": ""
},
{
"first": "Thibaut",
"middle": [],
"last": "Lavril",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alice Coucke, Alaa Saade, Adrien Ball, Th\u00e9odore Bluche, Alexandre Caulier, David Leroy, Cl\u00e9ment Doumouro, Thibault Gisselbrecht, Francesco Calta- girone, Thibaut Lavril, and et al. 2018. Snips voice platform: an embedded spoken language understand- ing system for private-by-design voice interfaces. In arXiv.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A novel bi-directional interrelated model for joint intent detection and slot filling",
"authors": [
{
"first": "E",
"middle": [],
"last": "Haihong",
"suffix": ""
},
{
"first": "Peiqing",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Zhongfu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Meina",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haihong E, Peiqing Niu, Zhongfu Chen, and Meina Song. 2019. A novel bi-directional interrelated model for joint intent detection and slot filling. In ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Slot-gated modeling for joint slot filling and intent prediction",
"authors": [
{
"first": "Guang",
"middle": [],
"last": "Chih-Wen Goo",
"suffix": ""
},
{
"first": "Yun-Kai",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Chih-Li",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Tsung-Chieh",
"middle": [],
"last": "Huo",
"suffix": ""
},
{
"first": "Keng-Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun- Nung Chen. 2018. Slot-gated modeling for joint slot filling and intent prediction. In NAACL-HLT.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Joint semantic utterance classification and slot filling with recursive neural networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Guo, Gokhan Tur, Wen-tau Yih, and Geoffrey Zweig. 2014. Joint semantic utterance classification and slot filling with recursive neural networks. In SLT.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multi-domain joint semantic frame parsing using bi-directional rnn-lstm",
"authors": [
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "G\u00f6khan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilek Hakkani-T\u00fcr, G\u00f6khan T\u00fcr, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye- Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Inter- speech.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The ATIS spoken language systems pilot corpus",
"authors": [
{
"first": "Charles",
"middle": [
"T"
],
"last": "Hemphill",
"suffix": ""
},
{
"first": "John",
"middle": [
"J"
],
"last": "Godfrey",
"suffix": ""
},
{
"first": "George",
"middle": [
"R"
],
"last": "Doddington",
"suffix": ""
}
],
"year": 1990,
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language sys- tems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning deep representations by mutual information estimation and maximization",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "R Devon Hjelm",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Fedorov",
"suffix": ""
},
{
"first": "Karan",
"middle": [],
"last": "Lavoie-Marchildon",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Grewal",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2019,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Devon Hjelm, Alex Fedorov, Samuel Lavoie- Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. 2019. Learning deep representations by mutual information estimation and maximization. In ICLR.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Leveraging sentence-level information with encoder LSTM for semantic slot filling",
"authors": [
{
"first": "Gakuto",
"middle": [],
"last": "Kurata",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gakuto Kurata, Bing Xiang, Bowen Zhou, and Mo Yu. 2016. Leveraging sentence-level information with encoder LSTM for semantic slot filling. In EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Attention-based recurrent neural network models for joint intent detection and slot filling",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Ian Lane. 2016. Attention-based recur- rent neural network models for joint intent detection and slot filling. In arXiv.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "GCDT: A global context enhanced deep transition architecture for sequence labeling",
"authors": [
{
"first": "Yijin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fandong",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Jinchao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jinan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yufeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yijin Liu, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. GCDT: A global context enhanced deep transition architecture for se- quence labeling. In ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Edit me: A corpus and a framework for understanding natural language image editing",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Manuvinakurike",
"suffix": ""
},
{
"first": "Jacqueline",
"middle": [],
"last": "Brixey",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Doo",
"middle": [
"Soon"
],
"last": "Kim",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Kallirroi",
"middle": [],
"last": "Georgila",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Manuvinakurike, Jacqueline Brixey, Trung Bui, Walter Chang, Doo Soon Kim, Ron Artstein, and Kallirroi Georgila. 2018. Edit me: A corpus and a framework for understanding natural language image editing. In Proceedings of the Eleventh In- ternational Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Euro- pean Language Resources Association (ELRA).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Recurrent neural networks with external memory for spoken language understanding",
"authors": [
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2015,
"venue": "NLPCC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baolin Peng, Kaisheng Yao, Li Jing, and Kam-Fai Wong. 2015. Recurrent neural networks with exter- nal memory for spoken language understanding. In NLPCC.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A stack-propagation framework with token-level intent detection for spoken language understanding",
"authors": [
{
"first": "Libo",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Yangming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haoyang",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019. A stack-propagation frame- work with token-level intent detection for spoken language understanding. In EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Generative and discriminative algorithms for spoken language understanding",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Raymond",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Riccardi",
"suffix": ""
}
],
"year": 2007,
"venue": "ISCA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Raymond and Giuseppe Riccardi. 2007. Gen- erative and discriminative algorithms for spoken lan- guage understanding. In ISCA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Robust zero-shot cross-domain slot filling with example values",
"authors": [
{
"first": "J",
"middle": [],
"last": "Darsh",
"suffix": ""
},
{
"first": "Raghav",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Fayazi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Darsh J Shah, Raghav Gupta, Amir A Fayazi, and Dilek Hakkani-Tur. 2019. Robust zero-shot cross-domain slot filling with example values. arXiv.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A progressive model to enable continual learning for semantic slot filling",
"authors": [
{
"first": "Yilin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yilin Shen, Xiangyu Zeng, and Hongxia Jin. 2019. A progressive model to enable continual learning for semantic slot filling. In EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A bimodel based RNN semantic frame parsing model for intent detection and slot filling",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yilin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Hongxia",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Wang, Yilin Shen, and Hongxia Jin. 2018. A bi- model based RNN semantic frame parsing model for intent detection and slot filling. In NAANCL-HLT.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Spoken language understanding using long short-term memory neural networks",
"authors": [
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Baolin",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "Yangyang",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2014,
"venue": "SLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Ge- offrey Zweig, and Yangyang Shi. 2014. Spoken lan- guage understanding using long short-term memory neural networks. In SLT.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Joint slot filling and intent detection via capsule neural networks",
"authors": [
{
"first": "Chenwei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yaliang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2019,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip Yu. 2019. Joint slot filling and intent detec- tion via capsule neural networks. In ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A joint model of intent determination and slot filling for spoken language understanding",
"authors": [
{
"first": "Xiaodong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spo- ken language understanding. In IJCAI.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Test F1-score for the ablated models",
"num": null
}
}
}
}