{
"paper_id": "E17-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:52:14.469657Z"
},
"title": "Dialog state tracking, a machine reading approach using Memory Network",
"authors": [
{
"first": "Julien",
"middle": [],
"last": "Perez",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a questionanswering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N based tracking model.",
"pdf_parse": {
"paper_id": "E17-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a questionanswering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N based tracking model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the core components of state-of-the-art and industrially deployed dialog systems is a dialog state tracker. Its purpose is to provide a compact representation of a dialog produced from past user inputs and system outputs which is called the dialog state. The dialog state summarizes the infor- * Work carried out as an intern at XRCE mation needed to successfully maintain and finish a dialog, such as users' goals or requests. In the simplest case of a so-called slot-filling schema, the state is composed of a predefined set of variables with a predefined domain of expression for each of them. As a matter of fact, in the recent context of end-to-end trainable machine learnt dialog systems, state tracking remains a central element of such architectures . Current models, mainly based on the principle of discriminative learning, tend to share three common limitations. First, the tracking task is perform using a fixed window of the past dialog utterances as support for decision. Second, the possible correlations between the set of tracked variables are not leveraged and individual trackers tend to be learnt independently. Third, the tracking task is summarized as the capability of inferring values for a predefined set of latent variables. Starting from these observations, we propose to formalize the task of state tracking as a particular instance of machine reading problem. Indeed, these formalization and the proposed resolution model called MemN2N allow to define a tracker that is be able to decide at the utterance level on the basis on the current entire dialog. Indeed, the model learns to focus its attention on the meaningful parts of the dialog regarding the currently asked slot and can eventually capture possible correlation between slots. As far as our knowledge goes, it is the first attempt to explicitly frame the task of dialog state tracking as a machine reading problem. Finally, such formalization allows for the implementation of approximate reasoning capability that has been shown to be crucial for any machine reading tasks while extending the task from slot instantiation to question answering. This paper is structured as follows, Section 2 recalls the main definitions associated to transactional dialogs and describes the associated problem of statistical dialog state tracking with both the generative and discriminative approaches. At the end of this section, the limitations of the current models in terms of necessary annotations and reasoning capabilities are addressed. Then, Section 3 depicts the proposed machine reading model for dialog state tracking and proposes to extend a state of the art dialog state tracking dataset, DSTC-2, to several simple reasoning capabilities. Section 4 illustrates the approach with experimental results obtained using a state of the art benchmark for dialog state tracking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Dialog state tracking 2.1 Main Definitions A dialog state tracking task is formalized as follows: at each turn of a dyadic dialog, the dialog agent chooses a dialog act d to express and the user answers with an utterance u. In the simplest case, the dialog state at each turn is defined as a distribution over a set of predefined variables, which define the structure of the state (Williams et al., 2005) . This classic state structure is commonly called slot filling or semantic frame. In this context, the state tracking task consists of estimating the value of a set of predefined variables in order to perform a procedure or transaction which is the purpose of the dialog. Typically, a natural language understanding module processes the user utterance and generates an Nbest list",
"cite_spans": [
{
"start": 383,
"end": 406,
"text": "(Williams et al., 2005)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "o = {(d 1 , f 1 ), . . . , (d n , f n )},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
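To make the slot-filling setting concrete, the following minimal sketch (our illustration, not code from the paper) shows the two data structures discussed above: an NLU N-best list o = {(d_1, f_1), . . . , (d_n, f_n)} and a slot-filling state over hypothetical DSTC-2-style slots, together with the naive rule-based update that later paragraphs criticize.

```python
# Minimal sketch (not from the paper) of the slot-filling data structures described above:
# an NLU N-best list of (dialog act, confidence) pairs and a slot-filling dialog state.
from dataclasses import dataclass, field

@dataclass
class NBestHypothesis:
    dialog_act: str      # hypothesized user dialog act, e.g. "inform(food=italian)"
    confidence: float    # NLU/ASR confidence score f_i

@dataclass
class DialogState:
    # predefined variables (slots) with their current best value, None if not yet informed
    slots: dict = field(default_factory=lambda: {"area": None, "food": None, "pricerange": None})

# Example observation o = {(d_1, f_1), ..., (d_n, f_n)} for one user turn
observation = [
    NBestHypothesis("inform(food=italian)", 0.72),
    NBestHypothesis("inform(food=indian)", 0.18),
]

state = DialogState()
# A rule-based tracker would simply keep the top hypothesis:
top = max(observation, key=lambda h: h.confidence)
state.slots["food"] = top.dialog_act.split("=")[1].rstrip(")")
print(state.slots)   # {'area': None, 'food': 'italian', 'pricerange': None}
```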
{
"text": "where d i is the hypothesized user dialog act and f i is its confidence score. Various approaches have been proposed to define dialog state trackers. The traditional methods used in most commercial implementations use hand-crafted rules that typically rely on the most likely result from an NLU module (Yeh et al., 2014) and hardly models uncertainty. However, these rule-based systems are prone to frequent errors as the most likely result is not always the correct one (Williams, 2014) .",
"cite_spans": [
{
"start": 302,
"end": 320,
"text": "(Yeh et al., 2014)",
"ref_id": "BIBREF34"
},
{
"start": 471,
"end": 487,
"text": "(Williams, 2014)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "More recent methods employ statistical approaches to estimate the posterior distribution over the dialog states allowing them to leverage the uncertainty of the results of the NLU module. In the simplest case where no ASR and NLU modules are employed, as in a text based dialog system (Henderson et al., 2013) , the utterance is taken as the observation using a so-called bag of words representation. If an NLU module is available, stan-dardized dialog act schemes can be considered as observations (Bunt et al., 2010) . Furthermore, if prosodic information is available from the ASR component of the dialog system (Milone and Rubio, 2003) , it can also be considered as part of the observation definition. A statistical dialog state tracker maintains, at each discrete time step t, the probability distribution over states, b(s t ), which is the system's belief over the state. The actual slot filling process is composed of the cyclic tasks of information gathering and integration, in other words -dialog state tracking. In such framework, the purpose is to estimate as early as possible in the course of a given dialog the correct instantiation of each variable. In the following, we will assume the state is represented as a set of variables with a set of known possible values associated to each of them. Furthermore, in the context of this paper, only the bag of words has been considered as an observation at a given turn but dialog acts or detected named entity provided by an SLU module could have also been incorporated.",
"cite_spans": [
{
"start": 285,
"end": 309,
"text": "(Henderson et al., 2013)",
"ref_id": "BIBREF8"
},
{
"start": 499,
"end": 518,
"text": "(Bunt et al., 2010)",
"ref_id": "BIBREF3"
},
{
"start": 615,
"end": 639,
"text": "(Milone and Rubio, 2003)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two statistical approaches have been considered for maintaining the distribution over a state given sequential NLU output. First, the discriminative approach aims to model the posterior probability distribution of the state at time t + 1 with regard to state at time t and observations z 1:t . Second, the generative approach attempts to model the transition probability and the observation probability in order to exploit possible interdependencies between hidden variables that comprise the dialog state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A generative approach to dialog state tracking computes the belief over the state using Bayes' rule, using the belief from the last turn b(s t\u22121 ) as a prior and the likelihood given the user utterance hypotheses p(z t |s t ), with z t the observation gathered at time t. In prior works (Williams et al., 2005) , the likelihood is factored and some independence assumptions are made:",
"cite_spans": [
{
"start": 287,
"end": 310,
"text": "(Williams et al., 2005)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Dialog State Tracking",
"sec_num": "2.2"
},
{
"text": "b t \u221d \u2211 s t\u22121 ,z t p(s t |z t , s t\u22121 )p(z t |s t\u22121 )b(s t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Dialog State Tracking",
"sec_num": "2.2"
},
{
"text": ". A typical generative model uses a factorial hidden Markov model (Ghahramani and Jordan, 1997) . In this family of approaches, scalability is considered as one of the main issues. One way to reduce the amount of computation is to group the states into partitions, as proposed in the Hidden Information State (HIS) model (Gasic and Young, 2011) . Other approaches to cope with the scalability problem in dialog state tracking is to adopt a factored dynamic Bayesian network by making conditional independence assumptions among dialog state components, and then using approximate inference algorithms such as loopy belief propagation (Thomson and Young, 2010) or a blocked Gibbs sampling as (Raux and Ma, 2011) . To cope with such limitations, discriminative methods of state tracking presented in the next part of this section aim at directly model the posterior distribution of the tracked state using a chosen parametric form.",
"cite_spans": [
{
"start": 66,
"end": 95,
"text": "(Ghahramani and Jordan, 1997)",
"ref_id": "BIBREF6"
},
{
"start": 321,
"end": 344,
"text": "(Gasic and Young, 2011)",
"ref_id": "BIBREF5"
},
{
"start": 690,
"end": 709,
"text": "(Raux and Ma, 2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Dialog State Tracking",
"sec_num": "2.2"
},
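The generative update above can be made concrete with a small numerical sketch. The following toy example, with made-up transition and observation tables for a single slot, simply applies the summation and renormalization of the displayed equation; it is illustrative only and not the paper's implementation.

```python
# Toy sketch (our illustration, not the paper's code) of the generative belief update
# b_t ∝ Σ_{s_{t-1}} p(s_t | z_t, s_{t-1}) p(z_t | s_{t-1}) b(s_{t-1}) over one discrete slot.
import numpy as np

values = ["north", "south", "east", "west"]          # domain of a single slot
b_prev = np.full(len(values), 1.0 / len(values))     # uniform prior belief b(s_{t-1})

# Hypothetical model components for the current observation z_t:
# transition[i, j] = p(s_t = j | z_t, s_{t-1} = i), obs_likelihood[i] = p(z_t | s_{t-1} = i).
transition = np.full((len(values), len(values)), 0.1) + 0.6 * np.eye(len(values))
transition /= transition.sum(axis=1, keepdims=True)  # rows sum to one
obs_likelihood = np.array([0.7, 0.1, 0.1, 0.1])

# Belief update: sum over previous states, then renormalize (the ∝ in the equation above).
b_t = transition.T @ (obs_likelihood * b_prev)
b_t /= b_t.sum()
print(dict(zip(values, np.round(b_t, 3))))
```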
{
"text": "The discriminative approach of dialog state tracking computes the belief over a state via a parametric model that directly represents the belief b(s t+1 ) = p(s s+1 |s t , z t ). For example, Maximum Entropy has been widely used in the discriminative approach (Metallinou et al., 2013) . It formulates the belief as follows: b(s) = P(s|x) = \u03b7.e w T \u03c6 (x,s) , where \u03b7 is the normalizing constant,",
"cite_spans": [
{
"start": 260,
"end": 285,
"text": "(Metallinou et al., 2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dialog State Tracking",
"sec_num": "2.3"
},
{
"text": "x = (d u 1 , d m 1 , s 1 , . . . , d u t , d m t , s t ) is the history of user dialog acts, d u i , i \u2208 {1, . . . ,t}, the system dialog acts, d m i , i \u2208 {1, .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dialog State Tracking",
"sec_num": "2.3"
},
{
"text": ". . ,t}, and the sequence of states leading to the current dialog turn at time t. Then, \u03c6 (.) is a vector of feature functions on x and s. Finally, w is the set of model parameters to be learned from annotated dialog data. Finally, deep neural models, performing on a sliding window of features extracted from previous user turns, have also been proposed in (Henderson et al., 2014c; . Of the current literature, this family of approaches have proven to be the most efficient for publicly available state tracking datasets. Recently, deep learning based models implementing this strategy Henderson et al., 2014a; Williams et al., 2016) have shown state of the art results. This approaches tends to leverage unsupervised training word representation (Mikolov et al., 2013) .",
"cite_spans": [
{
"start": 358,
"end": 383,
"text": "(Henderson et al., 2014c;",
"ref_id": "BIBREF12"
},
{
"start": 588,
"end": 612,
"text": "Henderson et al., 2014a;",
"ref_id": "BIBREF10"
},
{
"start": 613,
"end": 635,
"text": "Williams et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 749,
"end": 771,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dialog State Tracking",
"sec_num": "2.3"
},
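As a concrete illustration of the maximum-entropy formulation b(s) = η e^{w^T φ(x, s)}, the sketch below scores the candidate values of a single slot with a hypothetical two-dimensional feature map φ and made-up weights w; it is not the tracker used in the cited work.

```python
# Minimal sketch (ours, not the paper's) of the maximum-entropy belief
# b(s) = P(s|x) = η · exp(w^T φ(x, s)) over the candidate values of one slot.
import numpy as np

candidates = ["italian", "indian", "chinese"]

def phi(x_words, s):
    # Hypothetical feature map φ(x, s): whether the candidate value appears
    # in the dialog history, plus a bias feature.
    return np.array([1.0 if s in x_words else 0.0, 1.0])

w = np.array([2.0, -0.5])                      # learned parameters (made-up values)
x_words = "i want some italian food".split()   # bag-of-words dialog history x

scores = np.array([w @ phi(x_words, s) for s in candidates])
belief = np.exp(scores) / np.exp(scores).sum() # η is the softmax normalizer
print(dict(zip(candidates, np.round(belief, 3))))
```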
{
"text": "Using error analysis (Henderson et al., 2014b) , three limitations can be observed in the application of these inference approaches. First, current models tend to fail at considering long-tail dependencies that occurs on dialogs. For example, coreferences, inter-utterances informations and correlations between slots have been shown to be difficult to handle even with the usage of recurrent network models (Henderson et al., 2014a) . To illustrate the inter-slot correlation, Figure 1 depicted the t-SNE (van der Maaten and Hinton, 2008) projected final state of the dialog of the DSTC-2 training set. On the other hand, reasoning capabilities, as required in machine reading applications (Poon and Domingos, 2010; Etzioni et al., 2007; Berant et al., 2014; remain absent in these classic formalizations of dialog state tracking. Finally, tracking definition is limited to the capability to instantiate a predefined set of slots. In the next section, we present a model of dialog state tracking that aims at leveraging the current advances of MemN2N, a memory-enhanced neural networks and their approximate reasoning capabilities that seems particularly adapted to the sequential, long range dependency equipped and sparse nature of complex dialog state tracking tasks. Furthermore, this model allows to relax the hypothesis of strict utterance-level annotation that does not corresponds to common practices in industrial applications of transactional conversational user interfaces where annotations tend to be placed at a multi-utterance level or full-dialog level only.",
"cite_spans": [
{
"start": 21,
"end": 46,
"text": "(Henderson et al., 2014b)",
"ref_id": "BIBREF11"
},
{
"start": 408,
"end": 433,
"text": "(Henderson et al., 2014a)",
"ref_id": "BIBREF10"
},
{
"start": 691,
"end": 716,
"text": "(Poon and Domingos, 2010;",
"ref_id": "BIBREF20"
},
{
"start": 717,
"end": 738,
"text": "Etzioni et al., 2007;",
"ref_id": "BIBREF4"
},
{
"start": 739,
"end": 759,
"text": "Berant et al., 2014;",
"ref_id": null
}
],
"ref_spans": [
{
"start": 478,
"end": 486,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Current Limitations",
"sec_num": "2.4"
},
{
"text": "We propose to formalize the dialog state tracking task as a machine reading problem (Etzioni et al., 2007; Berant et al., 2014) . In this section, we recall the main definitions of the task of machine reading, then describes the MemN2N, a memoryenhanced neural network architectures proposed to handle such tasks in the context of dialogs. Finally, we formalize the task of dialog state tracking as a machine reading problem and propose to solve it using a memory-enhanced neural architecture of inference.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Etzioni et al., 2007;",
"ref_id": "BIBREF4"
},
{
"start": 107,
"end": 127,
"text": "Berant et al., 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Reading Formulation of Dialog State Tracking",
"sec_num": "3"
},
{
"text": "The task of textual understanding has recently been formulated as a supervised learning problem (Kumar et al., 2015; Hermann et al., 2015) . This task consists in estimating the conditional probability p(a|d, q) of an answer a to a question q where d is a document. Such an approach requires a large training corpus of {Document -Query -Answer} triples and until now such corpora have been limited to hundreds of examples (Richardson et al., 2013) . In the context of dialog state tracking, it can be understood as the capability of inferring a set of latent values l associated with a set of variables v related to a given dyadic or multi-party conversation d, from direct correlation and/or reasoning, using the course of exchanges of utterances, p(l|d, v).",
"cite_spans": [
{
"start": 96,
"end": 116,
"text": "(Kumar et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 117,
"end": 138,
"text": "Hermann et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 422,
"end": 447,
"text": "(Richardson et al., 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Reading",
"sec_num": "3.1"
},
{
"text": "State updates at an utterance-level are rarely provided off-the-shelf from a production environment. In these environments, annotation is often performed afterhand for the purpose of logging, monitoring or quality assessment. In the limit cases, as in human-to-human dialog systems, dialog-level annotations remains a common practice of annotation especially in personal assistance, customer care dialogs and, in a more general sense, industrial application of transactional conversational user interfaces. Another frequent setting consist of informing the state after a given number of utterance exchange between the locutors. So an additional effort of specific annotation is often needed in order to train a state of the art statistical state tracking model (Henderson et al., 2014b) . In that sense, formalizing dialog state tracking at a sub-dialog level in order to infer hidden state variables with respect to a list of utterances started from the first one to any given utterance of a given dialog seems particularly appropriate. In the context of dialog state tracking challenges, the DSTC-4 dialog corpus have been designed in such purpose but only consists of 22 dialogs. Concerning the DSTC-2 corpus, the training data contains 2207 dialogs (15611 turns) and the test set consists of 1117 dialogs (Williams et al., 2016) . This dataset is more suitable for our experiments.",
"cite_spans": [
{
"start": 761,
"end": 786,
"text": "(Henderson et al., 2014b)",
"ref_id": "BIBREF11"
},
{
"start": 1309,
"end": 1332,
"text": "(Williams et al., 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Reading",
"sec_num": "3.1"
},
{
"text": "For these reasons, the machine reading paradigm becomes a promising formulation for the general problem of dialog state tracking. Furthermore, current approaches and available datasets for state tracking do not explicitly cover reasoning capabilities such as temporal and spatial reasoning, counting, sorting and deduction. We suggest that in the future dataset dialogs expressing such specific abilities should be developed. In this last part, several reasoning enhancements are suggested to the DSTC-2 dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Reading",
"sec_num": "3.1"
},
{
"text": "The MemN2N architecture, introduced by , consists of two main components: supporting memories and final answer prediction. Supporting memories are in turn comprised of a set of input and output memory representations with memory cells. The input and output memory cells, denoted by m i and c i , are obtained by transforming the input context x 1 , . . . , x n (i.e a set of utterances) using two embedding matrices A and C (both of size d \u00d7|V | where d is the embedding size and |V | the vocabulary size) such that m i = A\u03a6(x i ) and c i = C\u03a6(x i ) where \u03a6(\u2022) is a function that maps the input into a bag of dimension |V |.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "Similarly, the question q is encoded using another embedding matrix B \u2208 R d\u00d7|V | , resulting in a question embedding u = B\u03a6(q). The input memories {m i }, together with the embedding of the question u, are utilized to determine the relevance of each of the stories in the context, yielding in a vector of attention weights",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p i = softmax(u m i )",
"eq_num": "(1)"
}
],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "where softmax(a i ) = e a i \u2211 i e a i . Subsequently, the response o from the output memory is constructed by the weighted sum:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "o = \u2211 i p i c i",
"eq_num": "(2)"
}
],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
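The single-hop read described by Equations 1 and 2 can be written in a few lines. The sketch below uses a toy vocabulary, random embedding matrices and a bag-of-words Φ, which are our own simplifying assumptions rather than the paper's trained parameters.

```python
# Illustrative single-hop MemN2N read (Eq. 1 and Eq. 2), a sketch under our own
# simplified assumptions: bag-of-words Φ, random embeddings A, B, C.
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(
    "i want cheap italian food in the north what price range is".split())}
V, d = len(vocab), 20

def bow(sentence):
    # Φ(·): map a sentence to a bag-of-words vector of dimension |V|
    v = np.zeros(V)
    for w in sentence.lower().split():
        if w in vocab:
            v[vocab[w]] += 1
    return v

A, B, C = (rng.normal(scale=0.1, size=(d, V)) for _ in range(3))
utterances = ["i want cheap italian food", "in the north"]
question = "what is the price range"

m = np.stack([A @ bow(x) for x in utterances])   # input memories m_i = A Φ(x_i)
c = np.stack([C @ bow(x) for x in utterances])   # output memories c_i = C Φ(x_i)
u = B @ bow(question)                            # question embedding u = B Φ(q)

p = np.exp(m @ u); p /= p.sum()                  # attention p_i = softmax(uᵀ m_i)  (Eq. 1)
o = p @ c                                        # memory readout o = Σ_i p_i c_i   (Eq. 2)
print(p.round(3), o.shape)
```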
{
"text": "Other models of parametric encoding for the question and the document have been proposed in (Kumar et al., 2015) . For the purpose of this presentation, we will keep with definition of \u03a6. For more difficult tasks requiring multiple supporting memories, the model can be extended to include more than one set of input/output memories by stacking a number of memory layers. In this setting, each memory layer is named a hop and the (k + 1) th hop takes as input the output of the k th hop:",
"cite_spans": [
{
"start": 92,
"end": 112,
"text": "(Kumar et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u k+1 = o k + u k",
"eq_num": "(3)"
}
],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "Lastly, the final step, the prediction of the answer to the question q, is performed b\u0177",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a = softmax(W (o K + u K ))",
"eq_num": "(4)"
}
],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "where\u00e2 is the predicted answer distribution, W \u2208 R |V |\u00d7d is a parameter matrix for the model to learn and K the total number of hops. Two weight tying schemes of the embedding matrices have been introduced in (Weston et al., 2015):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
{
"text": "1. Adjacent: the output embedding matrix in the k th hop is shared with the input embedding matrix in the (k + 1) th hop, i.e., A k+1 = C k for k \u2208 {1, K \u2212 1}. Also, the weight matrix W in Equation 4is shared with the output embedding matrix in the last memory hop such that W = C K . 2. Layer-wise: all the weight matrices A k and C k are shared across different hops, i.e., A 1 = A 2 = . . . = A K and C 1 = C 2 = . . . = C K . In the next section, we show how the task of dialog state tracking can be formalized as machine reading task and solved using such memory enhanced model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-End Memory Networks",
"sec_num": "3.2"
},
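Equations 3 and 4, together with the adjacent weight-tying scheme (A_{k+1} = C_k, W = C_K), compose into the multi-hop forward pass sketched below. Shapes, random bag-of-words inputs and the number of hops are our own toy choices; the sketch only illustrates the wiring of the hops, not the trained model.

```python
# Sketch of the multi-hop forward pass (Eq. 3 and Eq. 4) with adjacent weight tying
# (A_{k+1} = C_k, W = C_K); all shapes and embeddings are our own toy choices.
import numpy as np

rng = np.random.default_rng(0)
V, d, K, n_mem = 30, 20, 3, 5                     # vocab size, embedding size, hops, memories

x_bows = rng.integers(0, 2, size=(n_mem, V)).astype(float)   # Φ(x_i) for each memory
q_bow = rng.integers(0, 2, size=V).astype(float)             # Φ(q)

B = rng.normal(scale=0.1, size=(d, V))
# Adjacent tying: K+1 embedding matrices, with A_k = E[k-1] and C_k = E[k]
E = [rng.normal(scale=0.1, size=(d, V)) for _ in range(K + 1)]

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

u = B @ q_bow
for k in range(K):
    A_k, C_k = E[k], E[k + 1]
    m = x_bows @ A_k.T                            # m_i = A_k Φ(x_i)
    c = x_bows @ C_k.T                            # c_i = C_k Φ(x_i)
    p = softmax(m @ u)                            # Eq. 1
    o = p @ c                                     # Eq. 2
    u = o + u                                     # Eq. 3: u^{k+1} = o^k + u^k

# W = C_K (adjacent scheme); stored here as (d, V), so the paper's W is its transpose.
W = E[K]
a_hat = softmax(W.T @ u)                          # Eq. 4: â = softmax(W(o^K + u^K))
print(a_hat.shape, a_hat.sum().round(3))
```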
{
"text": "In this section, we formalize dialog state tracking using the paradigm of machine reading. As far as our knowledge goes, it is the first attempt to apply this approach and develop a specific dataset format, detailed in Section 4, from an existing and publicly available dialog state tracking challenge dataset to fulfill this task. Assuming (1) a dyadic dialog d composed of a list of utterances, (2) a state composed with (2a) a set of variables v i with i = {1, . . . , n}and (2b) a set of corresponding assigned values l i . One can define a question q v that corresponds to the specific querying of a variable in the context of a dialog",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "p(l i |q v i , d).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "In such context, a dialog state tracking task consists in determining for each variable",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "v, l * = arg max l i \u2208L p(l i |q v i , d), with L the specific domain of expression of a variable v i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "In addition to the actual dataset, we propose to investigate four general reasoning tasks using DSTC-2 dataset as a starting point. In such way, we leverage the dataset of DSTC-2 to create more complex reasoning task than the ones present in the original dialogs of the dataset by performing rule-based modification over the corpus. Obviously, the goal is to develop resolution algorithms that are not dedicated to a specific reasoning task but inference models that will be as generic as possible. In the rest of the section, each of the reasoning tasks associated with dialog state tracking are described and the generation protocol is explained with examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
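Under this formulation, tracking reduces to asking one question per tracked variable and taking the arg max over its domain. The sketch below shows that outer loop; memn2n_answer is a hypothetical stand-in for the reader described above and returns placeholder scores.

```python
# Sketch of the per-slot question-answering formulation of tracking:
# l* = argmax_{l ∈ L} p(l | q_v, d). `memn2n_answer` is a hypothetical interface,
# not a real library call.
def memn2n_answer(dialog_utterances, question, candidate_values):
    # Placeholder scorer: returns a uniform distribution; a real model would run
    # the multi-hop forward pass and read p(l | q_v, d) off its output softmax.
    uniform = 1.0 / len(candidate_values)
    return {value: uniform for value in candidate_values}

dialog = ["i need a cheap restaurant", "what about the north part of town"]
slot_domains = {
    "area": ["north", "south", "east", "west", "dontcare"],
    "pricerange": ["cheap", "moderate", "expensive", "dontcare"],
}

state = {}
for slot, domain in slot_domains.items():
    question = f"what is the requested {slot} ?"       # q_v: one question per tracked variable
    scores = memn2n_answer(dialog, question, domain)
    state[slot] = max(scores, key=scores.get)          # l* = argmax_l p(l | q_v, d)
print(state)
```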
{
"text": "Factoid Questions : This first task corresponds to the current formulation of dialog state tracking. It consists of questions where a previously given a set of supporting facts, potentially amongst a set of other irrelevant facts, provides the answer. This kind of task was already employed in (Weston et al., 2014) in the context of a virtual world. In that sense, the result obtained to such task are comparable with the state of the art approaches.",
"cite_spans": [
{
"start": 294,
"end": 315,
"text": "(Weston et al., 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "Yes/No Questions : This task tests the ability of a model to answer true/false type questions like \"Is the food italian ?\". The conversion of a dialog to such format is deterministic regarding the fact that the utterances and corresponding true states are known at each utterance of a given dialog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "Indefinite Knowledge : This task tests a more complex natural language construction. It tests if statements can be models in order to describe possibilities rather than certainties, as proposed in (Weston et al., 2014) . In our case, the answer will be \"maybe\" to the question \"Is the price-range required moderate ?\" if the slot hasn't been mentioned yet throughout the current dialog. In the case of state tracking, it will allow to seamlessly deal with unknown information about the dialog state. Concretely, this set of questions and answers are generated has a super-set of the Yes-No Questions set. First, sub-dialog starting from the first utterance of a given dialog are extracted under the condition that a given slot is not informed in the corresponding annotation. Then, a questionanswering question is generated.",
"cite_spans": [
{
"start": 197,
"end": 218,
"text": "(Weston et al., 2014)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "Counting and Lists/Sets : This last task tests the capacity of the model to perform simple counting operations, by asking about the number of objects with a certain property, e.g. \"How many area are requested ?\". Similarly, the ability to produce a set of single word answers in the form of a list, e.g. \"What are the area requested ?\" is investigated. Table 1 give an example of each of the question type presented below on a dialog sample of DSTC-2 corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 360,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "Inference procedure: Concretely, the current set of utterances of a dialog will be placed into the memory using sentence based encoding and the Figure 2 : Illustration of the proposed MemN2N based state dialog tracker model with 3 hops.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "{x i } Utterances Question q \u03a3 B A 1 C 1 u 1 o 1 \u03a3 A 2 C 2 u 2 o 2 \u03a3 A 3 C 3 u 3 o 3 \u0174 a Predicted Answer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "question will be encoded as the controller state at t = 1. The answer will be produced using a softmax operation over the answer vocabulary that is supposed fixed. We consider this hypothesis valid in the case of factoid and list questions because the set of value for a given variable is often considered known. In the cases of Yes/No and Indefinite knowledge question, {Yes, No, Maybe} are added to the output vocabulary. Following (Weston et al., 2014), a list-task answer will be considered as a single element in the answer set and the count question. A possible alternative would be to change the activation function used at the output of the MemN2N from softmax activation function to a logistic one and to use a categorical cross entropy loss. A drawback of such alternative would be the necessity of cross-validating a decision threshold in order to select a eligible answers. Concerning the individual numbers for the count question set, the numbers founded on the training set are added into the vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
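A small sketch of how such a fixed answer vocabulary could be assembled, under our reading of the description above (slot values, the {Yes, No, Maybe} labels, the counts found in training, and list answers treated as single elements); the concrete values are invented for illustration.

```python
# Sketch (our reading of the setup) of how the fixed answer vocabulary is assembled:
# slot values for factoid questions, {yes, no, maybe} for the boolean tasks,
# and the count values observed in the training set.
slot_values = {"north", "south", "east", "west", "italian", "chinese", "cheap", "moderate"}
boolean_answers = {"yes", "no", "maybe"}
training_counts = {"0", "1", "2"}          # counts seen in the (hypothetical) training data

answer_vocab = sorted(slot_values | boolean_answers | training_counts)
answer_index = {a: i for i, a in enumerate(answer_vocab)}

# The final layer of the reader is a softmax of size len(answer_vocab); a list answer
# such as "north and west" would be added to the vocabulary as one single element.
answer_vocab.append("north and west")
print(len(answer_vocab), answer_index["maybe"])
```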
{
"text": "We believe more reasoning capabilities need to be explore in the future, like spacial and temporal reasoning or deduction as suggested in . However, it will probably need the development of a new dedicated resource. Another alternative could be to develop a questionanswering annotation task based on a dialog corpus where reasoning task are present. The closest work to our proposal that can be cited is (Bordes and Weston, 2016) . In this paper, the authors defines a so-called End-to-End learnable dialog system to infer an answer from a finite set of eligible answers w.r.t the current list of utterances of the dialog. The authors generate 5 artificial tasks of dialog. However the reasoning capabilities are not explicitly addressed and the author explicitly claim that the resulting dialog system is not satisfactory yet. Indeed, we believe that having a proper dialog state tracker where a policy is built on top can guarantee dialog achievement by properly optimizing a reward function throughout a explicitly learnt dialog policy. In the case of proper end-toend systems, the objective function is still not explicitly defined (Serban et al., 2015) and the resulting systems tend to be used in the context of chatoriented and non-goal oriented dialog systems. In the next section, we present experimental details and results obtained on the basis of the DSTC-2 dataset and its conversion to the four mentioned reasoning tasks.",
"cite_spans": [
{
"start": 405,
"end": 430,
"text": "(Bordes and Weston, 2016)",
"ref_id": "BIBREF2"
},
{
"start": 1137,
"end": 1158,
"text": "(Serban et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Reading Model for State Tracking",
"sec_num": "3.3"
},
{
"text": "In the DSTC-2 dialog corpus, a user queries a database of local restaurants by interacting with a dialog system. A dialog proceeds as follows: first, the user specifies constraints concerning the restaurant. Then, the system offers the name of a restaurant that satisfies the constraints. Finally, the user accepts the offer and requests additional information about the accepted restaurant. In this context, the dialog state tracker should be able to track several types of information that compose the state like the geographic area, the food type and the price range slots. In order to make comparable experiments, sub-dialogs generated from the first utterance to each utterance of each dialog of the corpus have been generated. The corresponding question-answer pairs have been generated using the annotated state for each of the subdialog. In the case of factoid question, this setting allows for fair comparison at the utterance-level state tracking gains with the prior art. The same protocol has been adopted for the generated reasoning task. In that sense, the tracker task consists In order to exhibit reasoning capability of the proposed model in the context of dialog state tracking, three other dataset have been automatically generated from the dialog corpus in order to support 3 capabilities of reasoning described in Section 3.3. Dialog modification has been required for two reasoning tasks, List and Count. Two types of rules have been developed to automatically produce modified dialogs. On a first hand, string matching has been performed to determine the position of a slot values in a given utterance and an alternative statement has been produced as a substitution. For example, the utterance \"I'm looking for a chinese restaurant in the north\" can be replaced by \"I'm looking for a chinese restaurant in the north or the west of town\". A second type of modification has been performed in an inter-utterance fashion. For example, assuming a given value \"north\" has been informed in the current state of a given dialog, one can add lately in the dialog a remark like \"I would also accept a place east side of town\". This kind of statement tends to not affect the overall flow of the dialog and allows to add richer semantic to the dialog. In the future, we plan to develop a richer set of generation procedures to augment the dataset. Nevertheless, we believe this simple dialog augmentation strategy allows to exhibit the competency of the proposed model beyond factoid questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Data Preprocessing",
"sec_num": "4.1"
},
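The following sketch illustrates the two types of rule-based dialog modifications described above (intra-utterance substitution and inter-utterance addition). The exact rules used to build the released reasoning sets are not specified in the text, so the functions and patterns here are hypothetical.

```python
# Toy sketch of the two rule-based augmentation types described above
# (intra-utterance substitution and inter-utterance addition); illustrative only.
import re

def substitute_slot_value(utterance, slot_value, alternative):
    # Intra-utterance rule: string-match the informed value and extend it with an alternative,
    # e.g. "in the north" -> "in the north or the west of town".
    pattern = re.compile(rf"\b{re.escape(slot_value)}\b")
    return pattern.sub(f"{slot_value} or the {alternative} of town", utterance, count=1)

def add_late_remark(dialog_utterances, alternative):
    # Inter-utterance rule: append a later remark that widens an already informed slot.
    return dialog_utterances + [f"i would also accept a place {alternative} side of town"]

dialog = ["i'm looking for a chinese restaurant in the north"]
dialog[0] = substitute_slot_value(dialog[0], "north", "west")
dialog = add_late_remark(dialog, "east")
print(dialog)
```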
{
"text": "As suggested in (Sukhbaatar et al., 2015) , 10% of the set was held-out to form a validation set for hyperparameter tuning. Concerning the utterance encoding, we use the so-called Temporal Encoding technique. In fact, reading tasks require some notion of temporal context. To enable the model to address them, the memory vector is modified as such m i = \u2211 j Ax i j + T A (i), where T A (i) is the i th row of a dedicated matrix T A that encodes temporal information. The output embedding is augmented in the same way with a matrix T c (e.g. c i = \u2211 j Cx i j + T C (i)). Both T A and T C are learned during training in an end-to-end fashion. They are also subject to the same sharing constraints as A and C. The embedding matrix A and B are initialized using GoogleNews word2vec embedding model (Mikolov et al., 2013) . Also suggested on (Sukhbaatar et al., 2015) , utterances are indexed in reverse order, reflecting their relative distance from the question so that x 1 is the last sentence of the dialog. Furthermore, adjacent weight tying schema has been adopted. Learning rate \u03b7 is initially assigned a value of 0.005 with exponential decay applied every 25 epochs by \u03b7/2 until 100 epochs are reached. Then, linear start is used in all our experiments as proposed by (Sukhbaatar et al., 2015) . More precisely, the softmax function in each memory layer is removed and re-inserted after 20 epochs. Batch size is set to 16 and gradients with an L 2 norm larger than 40 are divided by a scalar to have norm 40. All weights are initialized randomly from a Gaussian distribution with zero mean and \u03c3 = 0.1. In all our experiments, we have tested a set of the embedding size d \u2208 {20, 40, 60}.",
"cite_spans": [
{
"start": 16,
"end": 41,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 794,
"end": 816,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 837,
"end": 862,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 1271,
"end": 1296,
"text": "(Sukhbaatar et al., 2015)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
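For reference, the optimization schedule described in this section can be summarized in a short configuration sketch. The helper names below are ours, and only the stated hyperparameters (learning rate, decay, linear start, batch size, clipping, initialization, embedding sizes) are taken from the text.

```python
# Sketch of the optimization schedule described above (our paraphrase of the stated
# hyperparameters, not released training code): lr 0.005 halved every 25 epochs,
# linear start for 20 epochs, batch size 16, gradient norm clipping at 40,
# Gaussian init with sigma = 0.1.
import numpy as np

def learning_rate(epoch, base_lr=0.005, decay_every=25, max_epochs=100):
    # Exponential decay: divide by 2 every `decay_every` epochs until `max_epochs`.
    return base_lr / (2 ** (min(epoch, max_epochs) // decay_every))

def clip_gradient(grad, max_norm=40.0):
    # Rescale gradients whose L2 norm exceeds 40 so that the norm equals 40.
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

def use_softmax_in_memory_layers(epoch, linear_start_epochs=20):
    # Linear start: the memory-layer softmax is removed, then re-inserted after 20 epochs.
    return epoch >= linear_start_epochs

config = {
    "batch_size": 16,
    "embedding_sizes": [20, 40, 60],       # values validated in the experiments
    "init": lambda shape: np.random.normal(0.0, 0.1, size=shape),
    "weight_tying": "adjacent",
}
print(learning_rate(0), learning_rate(26), use_softmax_in_memory_layers(5))
```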
{
"text": "After validation, each model uses a 5-hops depth configuration. Table 3 presents tracking accuracy obtained for three variables of the DSTC2 dataset formulated as Factoid Question task. We compare with two established utterance-level discriminative neural trackers, a Recurrent Neural Network (RNN) model (Henderson et al., 2014a) and the Neural Belief Tracker . As suggested in this last work, the first RNN baseline model uses no semantic (i.e. synonym) dictionary, while Table 4 presents the performance obtained for the four reasoning tasks. The obtained results lead us to think that MemN2N are a competitive alternative for the task dialog state tracking but also increase the spectrum of definition of the general dialog state tracking task to machine reading and reasoning. In the future, we believe new reasoning capabilities like spacial and temporal reasoning and deduction should be exploited on the basis of a specifically designed dataset.",
"cite_spans": [
{
"start": 305,
"end": 330,
"text": "(Henderson et al., 2014a)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 64,
"end": 71,
"text": "Table 3",
"ref_id": "TABREF4"
},
{
"start": 474,
"end": 481,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.2"
},
{
"text": "This paper describes a novel method of dialog state tracking based on the paradigm of machine reading and solved using MemN2N, a memoryenhanced neural network architecture. In this context, a dataset format inspired from the current datasets of machine reading tasks has been developed for this task. It is the first attempt to solve this classic sub-problem of dialog management in Table 4 : Reasoning tasks : Acc. on DSTC2 reasoning datasets such way. Beyond the experimental results presented in the experimental section, the proposed approach offers several advantages compared to state of the art methods of tracking. First, the proposed method allows to perform tracking on the basis of segment-dialog-level annotation instead of utterance-level one that is commonly admitted in academic datasets but tedious to produce in a large scale industrial environment. Second, we propose to develop dialog corpus requiring reasoning capabilities to exhibit the potential of the proposed model. In future work, we plan to address more complex tasks like spatial and temporal reasoning, sorting or deduction and experiment with other memory enhanced inference models. Indeed, we plan to experiment and compare the same approach with Stacked-Augmented Recurrent Neural Network (Joulin and Mikolov, 2015) and Neural Turing Machine (Graves et al., 2014) that sounds also promising for these family of reasoning tasks.",
"cite_spans": [
{
"start": 1272,
"end": 1298,
"text": "(Joulin and Mikolov, 2015)",
"ref_id": "BIBREF14"
},
{
"start": 1325,
"end": 1346,
"text": "(Graves et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 383,
"end": 390,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion and Further Work",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Modeling biological processes for reading comprehension",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Clark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, Peter Clark, and Christopher D. Manning. 2014. Modeling biological processes for reading comprehension. In Alessandro Moschitti, Bo Pang, and Walter Daelemans, editors, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25- 29, 2014, Doha, Qatar, A meeting of SIGDAT, a Spe- cial Interest Group of the ACL. ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning end-to-end goal-oriented dialog",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes and Jason Weston. 2016. Learning end-to-end goal-oriented dialog. CoRR.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards an ISO standard for dialogue act annotation",
"authors": [
{
"first": "Harry",
"middle": [],
"last": "Bunt",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Alexandersson",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "Jae-Woong",
"middle": [],
"last": "Choe",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"Chengyu"
],
"last": "Fang",
"suffix": ""
},
{
"first": "Koiti",
"middle": [],
"last": "Hasida",
"suffix": ""
},
{
"first": "Kiyong",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Volha",
"middle": [],
"last": "Petukhova",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Soria",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Traum",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10). European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harry Bunt, Jan Alexandersson, Jean Carletta, Jae- Woong Choe, Alex Chengyu Fang, Koiti Hasida, Kiyong Lee, Volha Petukhova, Andrei Popescu- Belis, Laurent Romary, Claudia Soria, and David Traum. 2010. Towards an ISO standard for dia- logue act annotation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10). European Language Re- sources Association (ELRA), may.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Machine reading",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"J"
],
"last": "Cafarella",
"suffix": ""
}
],
"year": 2007,
"venue": "AAAI Spring Symposium: Machine Reading. AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni, Michele Banko, and Michael J. Cafarella. 2007. Machine reading. In AAAI Spring Sympo- sium: Machine Reading. AAAI.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Effective handling of dialogue state in the hidden information state POMDP-based dialogue manager",
"authors": [
{
"first": "Milica",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2011,
"venue": "TSLP",
"volume": "7",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Milica Gasic and Steve Young. 2011. Effective handling of dialogue state in the hidden informa- tion state POMDP-based dialogue manager. TSLP, 7(3):4.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Factorial hidden Markov models",
"authors": [
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 1997,
"venue": "Machine Learning",
"volume": "29",
"issue": "",
"pages": "245--273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zoubin Ghahramani and Michael I. Jordan. 1997. Fac- torial hidden Markov models. Machine Learning, 29(2-3):245-273.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Neural turing machines. CoRR",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wayne",
"suffix": ""
},
{
"first": "Ivo",
"middle": [],
"last": "Danihelka",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. CoRR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Proceedings of the SIGDIAL 2013",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Blaise Thomson, and Steve Young, 2013. Proceedings of the SIGDIAL 2013",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Conference, chapter Deep Neural Network Approach for the Dialog State Tracking Challenge",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "467--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference, chapter Deep Neural Network Ap- proach for the Dialog State Tracking Challenge, pages 467-471. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Robust dialosg state tracking using delexicalised recurrent neural networks and unsupervised adaptation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of IEEE Spoken Language Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Henderson, B. Thomson, and S. J. Young. 2014a. Robust dialosg state tracking using delexicalised re- current neural networks and unsupervised adapta- tion. In Proceedings of IEEE Spoken Language Technology.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The third dialog state tracking challenge",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "SLT",
"volume": "",
"issue": "",
"pages": "324--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014b. The third dialog state tracking challenge. In SLT, pages 324-329. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Word-based dialog state tracking with recurrent neural networks",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SIGDial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Henderson, Blaise Thomson, and Steve Young. 2014c. Word-based dialog state tracking with recurrent neural networks. In Proceedings of SIGDial.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Kocisk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1s Kocisk\u00fd, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Su- leyman, and Phil Blunsom. 2015. Teaching ma- chines to read and comprehend. CoRR.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Inferring algorithmic patterns with stack-augmented recurrent nets",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "190--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin and Tomas Mikolov. 2015. Infer- ring algorithmic patterns with stack-augmented re- current nets. In Corinna Cortes, Neil D. Lawrence, Daniel D. Lee, Masashi Sugiyama, and Roman Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7- 12, 2015, Montreal, Quebec, Canada, pages 190- 198.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Ask me anything: Dynamic memory networks for natural language processing",
"authors": [
{
"first": "Ankit",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "English",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Ondruska",
"suffix": ""
},
{
"first": "Ishaan",
"middle": [],
"last": "Gulrajani",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankit Kumar, Ozan Irsoy, Jonathan Su, James Brad- bury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. 2015. Ask me anything: Dynamic memory networks for natu- ral language processing. CoRR.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Discriminative state tracking for spoken dialog systems",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Metallinou",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Bohus",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2013,
"venue": "The Association for Computer Linguistics",
"volume": "",
"issue": "",
"pages": "466--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Metallinou, Dan Bohus, and Jason Williams. 2013. Discriminative state tracking for spoken dia- log systems. In Association for Computer Linguis- tics, pages 466-475. The Association for Computer Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed rep- resentations of words and phrases and their compo- sitionality. In Christopher J. C. Burges, L\u00e9on Bot- tou, Zoubin Ghahramani, and Kilian Q. Weinberger, editors, Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural In- formation Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111-3119.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Prosodic and accentual information for automatic speech recognition",
"authors": [
{
"first": "H",
"middle": [],
"last": "Diego",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"J"
],
"last": "Milone",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rubio",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "11",
"issue": "4",
"pages": "321--333",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diego H. Milone and Antonio J. Rubio. 2003. Prosodic and accentual information for automatic speech recognition. IEEE Transactions on Speech and Audio Processing, 11(4):321-333.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural belief tracker: Data-driven dialogue state tracking",
"authors": [
{
"first": "Nikola",
"middle": [],
"last": "Mrksic",
"suffix": ""
},
{
"first": "Diarmuid\u00f3",
"middle": [],
"last": "S\u00e9aghdha",
"suffix": ""
},
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikola Mrksic, Diarmuid\u00d3 S\u00e9aghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve J. Young. 2016. Neural belief tracker: Data-driven dialogue state tracking. CoRR, abs/1606.03777.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Machine reading: A \"killer app\" for statistical relational AI",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Pedro",
"middle": [
"M"
],
"last": "Domingos",
"suffix": ""
}
],
"year": 2010,
"venue": "Statistical Relational Artificial Intelligence, volume WS-10-06 of AAAI Workshops",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon and Pedro M. Domingos. 2010. Ma- chine reading: A \"killer app\" for statistical relational AI. In Statistical Relational Artificial Intelligence, volume WS-10-06 of AAAI Workshops. AAAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Efficient probabilistic tracking of user goal and dialog history for spoken dialog systems",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Raux",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2011,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "801--804",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Raux and Yi Ma. 2011. Efficient probabilistic tracking of user goal and dialog history for spoken dialog systems. In INTERSPEECH, pages 801-804. ISCA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "MCTest: A challenge dataset for the open-domain machine comprehension of text",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"J",
"C"
],
"last": "Burges",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Renshaw",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "193--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, pages 193-203. ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A survey of available corpora for building data-driven dialogue systems",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Vlad Serban",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Charlin",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. 2015. A survey of available corpora for building data-driven dialogue systems. CoRR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "End-to-end memory networks",
"authors": [
{
"first": "Sainbayar",
"middle": [],
"last": "Sukhbaatar",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Fergus",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory net- works. In Corinna Cortes, Neil D. Lawrence,",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems",
"authors": [
{
"first": "Daniel",
"middle": [
"D"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Sugiyama",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Garnett",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "2440--2448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel D. Lee, Masashi Sugiyama, and Roman Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7- 12, 2015, Montreal, Quebec, Canada, pages 2440- 2448.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems",
"authors": [
{
"first": "Blaise",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2010,
"venue": "Computer Speech & Language",
"volume": "24",
"issue": "4",
"pages": "562--588",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blaise Thomson and Steve Young. 2010. Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems. Computer Speech & Lan- guage, 24(4):562-588.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Visualizing data using t-SNE",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, November.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A network-based end-to-end trainable task-oriented dialogue system",
"authors": [
{
"first": "Tsung-Hsien",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Milica",
"middle": [],
"last": "Gasic",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Mrksic",
"suffix": ""
},
{
"first": "Lina",
"middle": [
"Maria"
],
"last": "Rojas-Barahona",
"suffix": ""
},
{
"first": "Pei-Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Ultes",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vandyke",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve J. Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. CoRR.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Towards AI-complete question answering: A set of prerequisite toy tasks",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards AI-complete ques- tion answering: A set of prerequisite toy tasks. CoRR.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Factored partially observable markov decision processes for dialogue management",
"authors": [
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Poupart",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2005,
"venue": "4th Workshop on Knowledge and Reasoning in Practical Dialog Systems",
"volume": "",
"issue": "",
"pages": "76--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason D. Williams, Pascal Poupart, and Steve Young. 2005. Factored partially observable markov deci- sion processes for dialogue management. In In 4th Workshop on Knowledge and Reasoning in Practi- cal Dialog Systems, pages 76-82.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The dialog state tracking challenge series: A review",
"authors": [
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Raux",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Henderson",
"suffix": ""
}
],
"year": 2016,
"venue": "D&D",
"volume": "7",
"issue": "3",
"pages": "4--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason D. Williams, Antoine Raux, and Matthew Hen- derson. 2016. The dialog state tracking challenge series: A review. D&D, 7(3):4-33.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Web-style ranking and slu combination for dialog state tracking",
"authors": [
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of SIGDIAL. ACL Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason D. Williams. 2014. Web-style ranking and slu combination for dialog state tracking. In Proceed- ings of SIGDIAL. ACL Association for Computa- tional Linguistics, June.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A speechdriven second screen application for TV program discovery",
"authors": [
{
"first": "Peter",
"middle": [
"Z"
],
"last": "Yeh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Douglas",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Jarrold",
"suffix": ""
},
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
},
{
"first": "Deepak",
"middle": [],
"last": "Ramachandran",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"F"
],
"last": "Patel-Schneider",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Laverty",
"suffix": ""
},
{
"first": "Nirvana",
"middle": [],
"last": "Tikku",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Mendel",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3010--3016",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Z. Yeh, Benjamin Douglas, William Jarrold, Ad- wait Ratnaparkhi, Deepak Ramachandran, Peter F. Patel-Schneider, Stephen Laverty, Nirvana Tikku, Sean Brown, and Jeremy Mendel. 2014. A speech- driven second screen application for TV program discovery. In Carla E. Brodley and Peter Stone, ed- itors, Proceedings of the Twenty-Eighth AAAI Con- ference on Artificial Intelligence, July 27 -31, 2014, Qu\u00e9bec City, Qu\u00e9bec, Canada, pages 3010-3016. AAAI Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "T-SNE transformation of the final state of DSTC-2 train set.",
"num": null
},
"TABREF0": {
"text": "a cheap restaurant in the west or east part of town. 2 Agent Thanh Binh is a nice restaurant in the west of town in the cheap price range.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">Index Actor Utterance</td></tr><tr><td colspan=\"3\">1 Im looking for 3 Cust Cust What is the address and post code.</td></tr><tr><td>4</td><td colspan=\"2\">Agent Thanh Binh is on magdalene street city centre.</td></tr><tr><td>5</td><td>Cust</td><td>Thank you goodbye.</td></tr><tr><td>6</td><td colspan=\"2\">Factoid Question What is the pricerange ? Answer: {Cheap}</td></tr><tr><td>7</td><td colspan=\"2\">Yes/No Question Is the Pricerange Expensive ? Answer: {No}</td></tr><tr><td>8</td><td colspan=\"2\">Indefinite Knowledge Is the FoodType chinese ? Answer: {Maybe}</td></tr><tr><td>8</td><td colspan=\"2\">Listing task What are the areas ? Answer: {West,East}</td></tr></table>"
},
"TABREF1": {
"text": "",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>: : Dialog state tracking question-answering examples from DSTC2 dataset</td></tr><tr><td>in finding the value l * as defined in Section 3.3. In</td></tr><tr><td>the overall dialog corpus, Area slot counts 5 pos-</td></tr><tr><td>sible values, Food slot counts 91 possible values</td></tr><tr><td>and Pricerange slot counts 3 possible values.</td></tr></table>"
},
"TABREF3": {
"text": "Attention shifting example for the Area slot from DSTC2 dataset, the values corresponds the p i values affected to each memory block m i at each hop of the MemN2N the improved baseline uses a hand-crafted semantic dictionary designed for the DSTC2 ontology. In this context, a MemN2N model allows to obtain competitive results with the most close, nonmemory enhanced, state of the art approach of recurrent neural network with word embedding as prior knowledge.",
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>Model</td><td>Area Food Price Joint</td></tr><tr><td>RNN -no dict.</td><td>0.92 0.86 0.86 0.69</td></tr><tr><td>RNN + sem. dict.</td><td>0.91 0.86 0.93 0.73</td></tr><tr><td>NBT-DNN</td><td>0.90 0.84 0.94 0.72</td></tr><tr><td>NBT-CNN</td><td>0.90 0.83 0.93 0.72</td></tr><tr><td colspan=\"2\">MemN2N(d = 40) 0.89 0.88 0.95 0.74</td></tr></table>"
},
"TABREF4": {
"text": "One supporting fact task : Acc. obtained on DSTC2 test set As a second result,",
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>"
}
}
}
}