{
"paper_id": "W08-0119",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:40:27.067152Z"
},
"title": "Training and Evaluation of the HIS POMDP Dialogue System in Noise",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ga\u0161i\u0107",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University United Kingdom",
"location": {}
},
"email": ""
},
{
"first": "S",
"middle": [],
"last": "Keizer",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University United Kingdom",
"location": {}
},
"email": ""
},
{
"first": "F",
"middle": [],
"last": "Mairesse",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University United Kingdom",
"location": {}
},
"email": ""
},
{
"first": "J",
"middle": [],
"last": "Schatzmann",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University United Kingdom",
"location": {}
},
"email": ""
},
{
"first": "B",
"middle": [],
"last": "Thomson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University United Kingdom",
"location": {}
},
"email": ""
},
{
"first": "K",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University United Kingdom",
"location": {}
},
"email": ""
},
{
"first": "S",
"middle": [],
"last": "Young",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cambridge University United Kingdom",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper investigates the claim that a dialogue manager modelled as a Partially Observable Markov Decision Process (POMDP) can achieve improved robustness to noise compared to conventional state-based dialogue managers. Using the Hidden Information State (HIS) POMDP dialogue manager as an exemplar, and an MDP-based dialogue manager as a baseline, evaluation results are presented for both simulated and real dialogues in a Tourist Information Domain. The results on the simulated data show that the inherent ability to model uncertainty, allows the POMDP model to exploit alternative hypotheses from the speech understanding system. The results obtained from a user trial show that the HIS system with a trained policy performed significantly better than the MDP baseline.",
"pdf_parse": {
"paper_id": "W08-0119",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper investigates the claim that a dialogue manager modelled as a Partially Observable Markov Decision Process (POMDP) can achieve improved robustness to noise compared to conventional state-based dialogue managers. Using the Hidden Information State (HIS) POMDP dialogue manager as an exemplar, and an MDP-based dialogue manager as a baseline, evaluation results are presented for both simulated and real dialogues in a Tourist Information Domain. The results on the simulated data show that the inherent ability to model uncertainty, allows the POMDP model to exploit alternative hypotheses from the speech understanding system. The results obtained from a user trial show that the HIS system with a trained policy performed significantly better than the MDP baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Conventional spoken dialogue systems operate by finding the most likely interpretation of each user input, updating some internal representation of the dialogue state and then outputting an appropriate response. Error tolerance depends on using confidence thresholds and where they fail, the dialogue manager must resort to quite complex recovery procedures. Such a system has no explicit mechanisms for representing the inevitable uncertainties associated with speech understanding or the ambiguities which naturally arise in interpreting a user's intentions. The result is a system that is inherently fragile, especially in noisy conditions or where the user is unsure of how to use the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It has been suggested that Partially Observable Markov Decision Processes (POMDPs) offer a natural framework for building spoken dialogue systems which can both model these uncertainties and support policies which are robust to their effects (Young, 2002; Williams and Young, 2007a) . The key idea of the POMDP is that the underlying dialogue state is hidden and dialogue management policies must therefore be based not on a single state estimate but on a distribution over all states.",
"cite_spans": [
{
"start": 242,
"end": 255,
"text": "(Young, 2002;",
"ref_id": "BIBREF10"
},
{
"start": 256,
"end": 282,
"text": "Williams and Young, 2007a)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Whilst POMDPs are attractive theoretically, in practice, they are notoriously intractable for anything other than small state/action spaces. Hence, practical examples of their use were initially restricted to very simple domains (Roy et al., 2000; Zhang et al., 2001 ). More recently, however, a number of techniques have been suggested which do allow POMDPs to be scaled to handle real world tasks. The two generic mechanisms which facilitate this scaling are factoring the state space and performing policy optimisation in a reduced summary state space (Williams and Young, 2007a ; Williams and Young, 2007b) .",
"cite_spans": [
{
"start": 229,
"end": 247,
"text": "(Roy et al., 2000;",
"ref_id": "BIBREF3"
},
{
"start": 248,
"end": 266,
"text": "Zhang et al., 2001",
"ref_id": "BIBREF11"
},
{
"start": 555,
"end": 581,
"text": "(Williams and Young, 2007a",
"ref_id": "BIBREF8"
},
{
"start": 584,
"end": 610,
"text": "Williams and Young, 2007b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Based on these ideas, a number of real-world POMDP-based systems have recently emerged. The most complex entity which must be represented in the state space is the user's goal. In the Bayesian Update of Dialogue State (BUDS) system, the user's goal is further factored into conditionally independent slots. The resulting system is then modelled as a dynamic Bayesian network (Thomson et al., 2008) . A similar approach is also developed in (Bui et al., 2007a; Bui et al., 2007b ). An alternative approach taken in the Hidden Information State (HIS) system is to retain a complete representation of the user's goal, but partition states into equivalence classes and prune away very low probability partitions Williams and Young, 2007b) .",
"cite_spans": [
{
"start": 375,
"end": 397,
"text": "(Thomson et al., 2008)",
"ref_id": "BIBREF7"
},
{
"start": 440,
"end": 459,
"text": "(Bui et al., 2007a;",
"ref_id": "BIBREF0"
},
{
"start": 460,
"end": 477,
"text": "Bui et al., 2007b",
"ref_id": "BIBREF1"
},
{
"start": 708,
"end": 734,
"text": "Williams and Young, 2007b)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Whichever approach is taken, a key issue in a real POMDP-based dialogue system is its ability to be robust to noise and that is the issue that is addressed in this paper. Using the HIS system as an exemplar, evaluation results are presented for a real-world tourist information task using both simulated and real users. The results show that a POMDP system can learn noise robust policies and that N-best outputs from the speech understanding component can be exploited to further improve robustness.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows. Firstly, in Section 2 a brief overview of the HIS system is given. Then in Section 3, various POMDP training regimes are described and evaluated using a simulated user at differing noise levels. Section 4 then presents results from a trial in which users conducted various tasks over a range of noise levels. Finally, in Section 5, we discuss our results and present our conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A POMDP-based dialogue system is shown in Figure 1 where s m denotes the (unobserved or hidden) machine state which is factored into three components: the last user act a u , the user's goal s u and the dialogue history s d . Since s m is unknown, at each time-step the system computes a belief state such that the probability of being in state s m given belief state b is b(s m ). Based on this current belief state b, the machine selects an action a m , receives a reward r(s m , a m ), and transitions to a new (unobserved) state s \u2032 m , where s \u2032 m depends only on s m and a m . The machine then receives an observation o \u2032 consisting of an N-best list of hypothesised user actions. Finally, the belief distribution b is updated based on o \u2032 and a m as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 50,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Basic Principles",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "b \u2032 (s \u2032 m ) = kP (o \u2032 |s \u2032 m , a m ) sm\u2208Sm P (s \u2032 m |a m , s m )b(s m )",
"eq_num": "(1)"
}
],
"section": "Basic Principles",
"sec_num": "2.1"
},
{
"text": "where k is a normalisation constant (Kaelbling et al., 1998) . The first term on the RHS of (1) is called the observation model and the term inside the summation is called the transition model. Maintaining this belief state as the dialogue evolves is called belief monitoring. At each time step t, the machine receives a reward r(b t , a m,t ) based on the current belief state b t and the selected action a m,t . Each action a m,t is determined by a policy \u03c0(b t ) and building a POMDP system involves finding the policy \u03c0 * which maximises the discounted sum R of the rewards",
"cite_spans": [
{
"start": 36,
"end": 60,
"text": "(Kaelbling et al., 1998)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Principles",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R = \u221e t=0 \u03bb t r(b t , a m,t )",
"eq_num": "(2)"
}
],
"section": "Basic Principles",
"sec_num": "2.1"
},
{
"text": "where \u03bb t is a discount coefficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Principles",
"sec_num": "2.1"
},
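{
"text": "To make (1) and (2) concrete, the following is a minimal sketch of exact belief monitoring over a small enumerable state set, together with the discounted return; the array layout and all names are illustrative assumptions, not the HIS implementation (which never enumerates states explicitly):\n\nimport numpy as np\n\ndef belief_update(b, T, O_col):\n    # b: current belief over states, shape (S,)\n    # T: transition matrix for the chosen action a_m, T[s, s2] = P(s2 | a_m, s)\n    # O_col: observation likelihoods P(o' | s2, a_m) of the observed o', shape (S,)\n    predicted = T.T @ b           # sum_s P(s2 | a_m, s) b(s)\n    unnorm = O_col * predicted    # scale by the observation model\n    return unnorm / unnorm.sum() # the constant k renormalises to a distribution\n\ndef discounted_return(rewards, lam=0.95):\n    # equation (2): R = sum_t lam^t r(b_t, a_{m,t})\n    return sum(lam ** t * r for t, r in enumerate(rewards))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Principles",
"sec_num": "2.1"
},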
{
"text": "In ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Models",
"sec_num": "2.2"
},
{
"text": "b \u2032 (p \u2032 , a \u2032 u , s \u2032 d ) = k \u2022 P (o \u2032 |a \u2032 u ) observation model P (a \u2032 u |p \u2032 , a m ) user action model \u2022 s d P (s \u2032 d |p \u2032 , a \u2032 u , s d , a m ) dialogue model P (p \u2032 |p)b(p, s d ) partition splitting (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Models",
"sec_num": "2.2"
},
{
"text": "where p is the parent of p \u2032 . In this equation, the observation model is approximated by the normalised distribution of confidence measures output by the speech recognition system. The user action model allows the observation probability that is conditioned on a \u2032 u to be scaled by the probability that the user would speak a \u2032 u given the partition p \u2032 and the last system prompt a m . In the current implementation of the HIS system, user dialogue acts take the form act(a = v) where act is the dialogue type, a is an attribute and v is its value [for example, request(food=Chinese)]. The user action model is then approximated by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Models",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (a \u2032 u |p \u2032 , a m ) \u2248 P (T (a \u2032 u )|T (a m ))P (M(a \u2032 u )|p \u2032 )",
"eq_num": "(4)"
}
],
"section": "Probability Models",
"sec_num": "2.2"
},
{
"text": "where T (\u2022) denotes the type of the dialogue act and M(\u2022) denotes whether or not the dialogue act matches the current partition p \u2032 . The dialogue model is a deterministic encoding based on a simple grounding model. It yields probability one when the updated dialogue hypothesis (i.e., a specific combination of p \u2032 , a \u2032 u , s d and a m ) is consistent with the history and zero otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Models",
"sec_num": "2.2"
},
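{
"text": "As an illustration of (4), the sketch below scores a candidate user act against a partition; the act-type bigram table, the match probability and the dict-based act encoding are invented for the example and are not the trained HIS models:\n\n# illustrative bigram P(T(a'_u) | T(a_m)); unseen pairs receive a small floor\nACT_TYPE_BIGRAM = {('confirm', 'affirm'): 0.6,\n                   ('confirm', 'negate'): 0.3,\n                   ('request', 'inform'): 0.7}\n\ndef matches(user_act, partition):\n    # M(.): does the act's attribute/value agree with the partition's goal?\n    return partition.get(user_act['attr']) == user_act['value']\n\ndef user_action_prob(user_act, system_act, partition, p_match=0.9):\n    # equation (4): P(a'_u | p', a_m) ~ P(T(a'_u) | T(a_m)) P(M(a'_u) | p')\n    type_prob = ACT_TYPE_BIGRAM.get((system_act['type'], user_act['type']), 0.05)\n    match_prob = p_match if matches(user_act, partition) else 1.0 - p_match\n    return type_prob * match_prob",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probability Models",
"sec_num": "2.2"
},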
{
"text": "Policy representation in POMDP-systems is nontrivial since each action depends on a complex probability distribution. One of the simplest approaches to dealing with this problem is to discretise the state space and then associate an action with each discrete grid point. To reduce quantisation errors, the HIS model first maps belief distributions into a reduced summary space before quantising. This summary space consists of the probability of the top two hypotheses plus some status variables and the user act type associated with the top distribution. Quantisation is then performed using a simple distance metric to find the nearest grid point. Actions in summary space refer specifically to the top two hypotheses, and unlike actions in master space, they are limited to a small finite set: greet, ask, explicit confirm, implicit confirm, select confirm, offer, inform, find alternative, query more, goodbye. A simple heuristic is then used to map the selected next system action back into the full master belief space. The dialogue manager is able to support negations, denials and requests for alternatives. When the selected summary action is to offer the user a venue, the summary-to-master space mapping heuristics will normally offer a venue consistent with the most likely user goal hypothesis. If this hypothesis is then rejected its belief is substantially reduced and it will no longer be the top-ranking hypothesis. If the next system action is to make an alternative offer, then the new top-ranking hypothesis may not be appropriate. For example, if an expensive French restaurant near the river had been offered and the user asks for one nearer the centre of town, any alternative offered should still include the user's confirmed desire for an expensive French restaurant. To ensure this, all of the grounded features from the rejected hypothesis are extracted and all user goal hypotheses are scanned starting at the most likely until an alternative is found that matches the grounded features. For the current turn only, the summary-tomaster space heuristics then treat this hypothesis as if it was the top-ranking one. If the system then offers a venue based on this hypothesis, and the user accepts it, then, since system outputs are appended to user inputs for the purpose of belief updating, the alternative hypothesis will move to the top, or near the top, of the ranked hypothesis list. The dialogue then typically continues with its focus on the newly offered alternative venue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Representation",
"sec_num": "2.3"
},
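{
"text": "A minimal sketch of the grid-based lookup just described; the summary-feature layout, the Euclidean metric and the list-based grid are assumptions for illustration:\n\nimport numpy as np\n\ndef nearest_action(b_summary, grid):\n    # b_summary: array of summary features (top-2 beliefs, status variables, ...)\n    # grid: list of (point, summary_action) pairs accumulated during training\n    dists = [np.linalg.norm(b_summary - point) for point, _ in grid]\n    point, action = grid[int(np.argmin(dists))]\n    return action  # one of the small summary action set, e.g. 'explicit confirm'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Representation",
"sec_num": "2.3"
},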
{
"text": "1 Observation From User Ontology Rules 2 N u a m a ~ From System 1 1 2 2 2 1 d s 2 d s 1 d s 2 d s 3 d s 1 u p 2 u p 3 u p POMDP Policy 2 h 3 h 4 h 5 h 1 h 1 2 2 2 1 p 2 p 3 p ~ a u ~ a u ~ a u ~ a u ~ a u ~ a u ~ a u Belief State Application Database Action Refinement (heuristic) m a ^",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy Representation",
"sec_num": "2.3"
},
{
"text": "To summarise, the overall processing performed by the HIS system in a single dialogue turn (i.e. one cycle of system output and user response) is as shown in Figure 2 . Each user utterance is decoded into an N-best list of dialogue acts. Each incoming act plus the previous system act are matched against the forest of user goals and partitions are split as needed. Each user act a u is then duplicated and bound to each partition p. Each partition will also have a set of dialogue histories s d associated with it. The combination of each p, a u and updated s d forms a new dialogue hypothesis h k whose beliefs are evaluated using (3). Once all dialogue hypotheses have been evaluated and any duplicates merged, the master belief state b is mapped into summary spaceb and the nearest policy belief point is found. The associated summary space machine action\u00e2 m is then heuristically mapped back to master space and the machine's actual response a m is output. The cycle then repeats until the user's goal is satisfied.",
"cite_spans": [],
"ref_spans": [
{
"start": 158,
"end": 166,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Summary of Operation",
"sec_num": "2.4"
},
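{
"text": "Schematically, one such turn can be written as follows; every method on the his object is a hypothetical placeholder for the corresponding HIS component, not code from the system itself:\n\ndef dialogue_turn(obs_nbest, last_sys_act, hyps, his):\n    hyps = his.split_partitions(hyps, obs_nbest, last_sys_act)  # grow goal forest\n    hyps = his.evaluate_beliefs(hyps, obs_nbest, last_sys_act)  # equation (3)\n    hyps = his.merge_duplicates(hyps)\n    b_hat = his.to_summary_space(hyps)       # master belief -> summary belief\n    a_hat = his.policy(b_hat)                # nearest grid point's summary action\n    return his.to_master_space(a_hat, hyps)  # heuristic refinement to master action",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Operation",
"sec_num": "2.4"
},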
{
"text": "Policy optimisation is performed in the discrete summary space described in the previous section using on-line batch \u01eb-greedy policy iteration. Given an existing policy \u03c0, dialogs are executed and machine actions generated according to \u03c0 except that with probability \u01eb a random action is generated. The system maintains a set of belief points {b i }. At each turn in training, the nearest stored belief pointb k t\u00f4 b is located using a distance measure. If the distance is greater than some threshold,b is added to the set of stored belief points. The sequence of pointsb k traversed in each dialogue is stored in a list. Associated with eachb i is a function Q(b i ,\u00e2 m ) whose value is the expected total reward obtained by choosing summary action\u00e2 m from stateb i . At the end of each dialogue, the total reward is calculated and added to an accumulator for each point in the list, discounted by \u03bb at each step. On completion of a batch of dialogs, the Q values are updated according to the accumulated rewards, and the policy updated by choosing the action which maximises each Q value. The whole process is then repeated until the policy stabilises.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy optimisation",
"sec_num": "3.1"
},
{
"text": "In our experiments, \u01eb was fixed at 0.1 and \u03bb was fixed at 0.95. The reward function used attempted to encourage short successful dialogues by assigning +20 for a successful dialogue and \u22121 for each dialogue turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy optimisation",
"sec_num": "3.1"
},
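{
"text": "The batch \u03b5-greedy Monte Carlo update can be sketched compactly as below, with the paper's \u03b5 = 0.1 and \u03bb = 0.95; the simulate callable, which runs one dialogue under the supplied policy and returns the visited (grid point, action) pairs with the per-turn rewards, is an assumed stand-in for the simulator plus grid-point lookup:\n\nimport random\nfrom collections import defaultdict\n\nEPS, LAM = 0.1, 0.95  # exploration rate and discount used in the paper\n\ndef eps_greedy(Q, actions, point):\n    # Q: defaultdict(float) mapping (grid point, summary action) -> value\n    if random.random() < EPS:\n        return random.choice(actions)                 # explore\n    return max(actions, key=lambda a: Q[(point, a)])  # exploit\n\ndef run_batch(simulate, Q, actions, n_dialogues):\n    totals, counts = defaultdict(float), defaultdict(int)\n    for _ in range(n_dialogues):\n        steps, rewards = simulate(lambda pt: eps_greedy(Q, actions, pt))\n        G = 0.0  # discounted return, accumulated backwards through the dialogue\n        for step, r in zip(reversed(steps), reversed(rewards)):\n            G = r + LAM * G\n            totals[step] += G\n            counts[step] += 1\n    for key in totals:\n        Q[key] = totals[key] / counts[key]  # batch Monte Carlo estimate of Q",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Policy optimisation",
"sec_num": "3.1"
},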
{
"text": "To train a policy, a user simulator is used to generate responses to system actions. It has two main components: a User Goal and a User Agenda. At the start of each dialogue, the goal is randomly initialised with requests such as \"name\", \"addr\", \"phone\" and constraints such as \"type=restaurant\", \"food=Chinese\", etc. The agenda stores the dialogue acts needed to elicit this information in a stack-like structure which enables it to temporarily store actions when another action of higher priority needs to be issued first. This enables the simulator to refer to previous dialogue turns at a later point. To generate a wide spread of realistic dialogs, the simulator reacts wherever possible with varying levels of patience and arbitrariness. In addition, the simulator will relax its constraints when its initial goal cannot be satisfied. This allows the dialogue manager to learn negotiation-type dialogues where only an approximate solution to the user's goal exists. Speech understanding errors are simulated at the dialogue act level using confusion matrices trained on labelled dialogue data .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Simulation",
"sec_num": "3.2"
},
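{
"text": "The agenda's stack-like behaviour can be pictured with a toy example (the acts are invented for illustration):\n\n# top of the stack is the end of the list\nagenda = ['request(phone)', 'inform(food=Chinese)', 'inform(type=restaurant)']\nprint(agenda.pop())    # -> inform(type=restaurant)\n\n# a higher-priority act (e.g. answering a system confirmation) is pushed on\n# top, temporarily deferring the pending acts until a later turn\nagenda.append('affirm()')\nprint(agenda.pop())    # -> affirm()\nprint(agenda.pop())    # -> inform(food=Chinese)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "User Simulation",
"sec_num": "3.2"
},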
{
"text": "When training a system to operate robustly in noisy conditions, a variety of strategies are possible. For example, the system can be trained only on noisefree interactions, it can be trained on increasing levels of noise or it can be trained on a high noise level from the outset. A related issue concerns the generation of grid points and the number of training iterations to perform. For example, allowing a very large number of points leads to poor performance due to over-fitting of the training data. Conversely, having too few point leads to poor performance due to a lack of discrimination in its dialogue strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Evaluation",
"sec_num": "3.3"
},
{
"text": "After some experimentation, the following training schedule was adopted. Training starts in a noise free environment using a small number of grid points and it continues until the performance of the policy levels off. The resulting policy is then taken as an initial policy for the next stage where the noise level is increased, the number of grid points is expanded and the number of iterations is increased. This process is repeated until the highest noise level is reached. This approach was motivated by the observation that a key factor in effective reinforcement learning is the balance between exploration and exploitation. In POMDP policy optimisation which uses dynamically allocated grid points, maintaining this balance is crucial. In our case, the noise introduced by the simulator is used as an implicit mechanism for increasing the exploration. Each time exploration is increased, the areas of state-space that will be visited will also increase and hence the number of available grid points must also be increased. At the same time, the number of iterations must be increased to ensure that all points are visited a sufficient number of times. In practice we found that around 750 to 1000 grid points was sufficient and the total number of simulated dialogues needed for training was around 100,000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Evaluation",
"sec_num": "3.3"
},
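{
"text": "The schedule amounts to a loop over stages of increasing noise, grid size and iteration count. In the sketch below the stage parameters are purely illustrative placeholders (the paper reports only the final figures of roughly 750 to 1000 grid points and around 100,000 training dialogues), and train is an assumed wrapper around the batch policy iteration of Section 3.1 run against the simulator:\n\npolicy = None\nfor noise_level, n_grid_points, n_dialogues in [(0.00, 250, 15000),\n                                                (0.15, 500, 35000),\n                                                (0.30, 1000, 50000)]:\n    policy = train(policy, noise_level, n_grid_points, n_dialogues)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Evaluation",
"sec_num": "3.3"
},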
{
"text": "A second issue when training in noisy conditions is whether to train on just the 1-best output from the simulator or train on the N-best outputs. A limiting factor here is that the computation required for N-best training is significantly increased since the rate of partition generation in the HIS model increases exponentially with N. In preliminary tests, it was found that when training with 1-best outputs, there was little difference between policies trained entirely in no noise and policies trained on increasing noise as described above. However, policies trained on 2-best using the incremental strategy did exhibit increased robustness to noise. To illustrate this, Figures 3 and 4 show the average dialogue success rates and rewards for 3 different policies, all trained on 2-best: a hand-crafted policy (hdc), a policy trained on noise-free conditions (noise free) and a policy trained using the incremental scheme described above (increm). Each policy was tested using 2-best output from the simulator across a range of error rates. In addition, the noise-free policy was also tested on 1-best output. As can be seen, both the trained policies improve significantly on the hand-crafted policies. Furthermore, although the average rewards are all broadly similar, the success rate of the incrementally trained policy is significantly better at higher error rates. Hence, this latter policy was selected for the user trial described next.",
"cite_spans": [],
"ref_spans": [
{
"start": 677,
"end": 692,
"text": "Figures 3 and 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Training and Evaluation",
"sec_num": "3.3"
},
{
"text": "The HIS-POMDP policy (HIS-TRA) that was incrementally trained on the simulated user using 2-best lists was tested in a user trial together with a handcrafted HIS-POMDP policy (HIS-HDC). The strategy used by the latter was to first check the most likely hypothesis. If it contains sufficient grounded keys to match 1 to 3 database entities, then offer is selected. If any part of the hypothesis is inconsistent or the user has explicitly asked for another suggestion, then find alternative action is selected. If the user has asked for information about an offered entity then inform is selected. Otherwise, an ungrounded component of the top hypothesis is identified and depending on the belief, one of the confirm actions is selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation via a User Trial",
"sec_num": "4"
},
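{
"text": "Written as a decision list, the hand-crafted strategy looks roughly as follows; every predicate and helper here is a hypothetical placeholder, not code from the trial system:\n\ndef hdc_policy(top_hyp, db, asked_alternative, asked_info):\n    n = db.match_count(grounded_keys(top_hyp))  # entities matching grounded keys\n    if 1 <= n <= 3:\n        return 'offer'\n    if inconsistent(top_hyp) or asked_alternative:\n        return 'find alternative'\n    if asked_info:\n        return 'inform'\n    slot, belief = first_ungrounded(top_hyp)    # an ungrounded component\n    return confirm_action(slot, belief)         # explicit/implicit/select confirm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation via a User Trial",
"sec_num": "4"
},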
{
"text": "In addition, an MDP-based dialogue manager developed for earlier trials (Schatzmann, 2008) was also tested. Since considerable effort has been put in optimising this system, it serves as a strong baseline for comparison. Again, both a trained policy (MDP-TRA) and a hand-crafted policy (MDP-HDC) were tested.",
"cite_spans": [
{
"start": 72,
"end": 90,
"text": "(Schatzmann, 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation via a User Trial",
"sec_num": "4"
},
{
"text": "The dialogue system consisted of an ATK-based speech recogniser, a Phoenix-based semantic parser, the dialogue manager and a diphone based speech synthesiser. The semantic parser uses simple phrasal grammar rules to extract the dialogue act type and a list of attribute/value pairs from each utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System setup and confidence scoring",
"sec_num": "4.1"
},
{
"text": "In a POMDP-based dialogue system, accurate belief-updating is very sensitive to the confidence scores assigned to each user dialogue act. Ideally these should provide a measure of the probability of the decoded act given the true user act. In the evaluation system, the recogniser generates a 10-best list of hypotheses at each turn along with a compact confusion network which is used to compute the inference evidence for each hypothesis. The latter is defined as the sum of the log-likelihoods of each arc in the confusion network and when exponentiated and renormalised this gives a simple estimate of the probability of each hypothesised utterance. Each utterance in the 10-best list is passed to the semantic parser. Equivalent dialogue acts output by the parser are then grouped together and the dialogue act for each group is then assigned the sum of the sentencelevel probabilities as its confidence score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System setup and confidence scoring",
"sec_num": "4.1"
},
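{
"text": "A sketch of this confidence computation: exponentiate and renormalise the per-hypothesis log evidence, then pool probability mass over hypotheses that parse to the same dialogue act. The parse callable is an assumed stand-in for the Phoenix parser:\n\nimport math\nfrom collections import defaultdict\n\ndef act_confidences(nbest, parse):\n    # nbest: list of (utterance, log_evidence) pairs from the recogniser\n    # parse: utterance -> canonical dialogue act string (assumed callable)\n    max_log = max(log_e for _, log_e in nbest)  # subtract for numerical stability\n    weights = [math.exp(log_e - max_log) for _, log_e in nbest]\n    total = sum(weights)\n    conf = defaultdict(float)\n    for (utt, _), w in zip(nbest, weights):\n        conf[parse(utt)] += w / total  # group equivalent acts, sum probabilities\n    return dict(conf)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System setup and confidence scoring",
"sec_num": "4.1"
},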
{
"text": "For the trial itself, 36 subjects were recruited (all British native speakers, 18 male, 18 female). Each subject was asked to imagine himself to be a tourist in a fictitious town called Jasonville and try to find particular hotels, bars, or restaurants in that town. Each subject was asked to complete a set of predefined tasks where each task involved finding the name of a venue satisfying a set of constraints such as food type is Chinese, price-range is cheap, etc., and getting the value of one or more additional attributes of that venue such as the address or the phone number.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trial setup",
"sec_num": "4.2"
},
{
"text": "For each task, subjects were given a scenario to read and were then asked to solve the task via a dialogue with the system. The tasks set could either have one solution, several solutions, or no solution at all in the database. In cases where a subject found that there was no matching venue for the given task, he/she was allowed to try and find an alternative venue by relaxing one or more of the constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trial setup",
"sec_num": "4.2"
},
{
"text": "In addition, subjects had to perform each task at one of three possible noise levels. These levels correspond to signal/noise ratios (SNRs) of 35.3 dB (low noise), 10.2 dB (medium noise), or 3.3 dB (high noise). The noise was artificially generated and mixed with the microphone signal, in addition it was fed into the subject's headphones so that they were aware of the noisy conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trial setup",
"sec_num": "4.2"
},
{
"text": "An instructor was present at all times to indicate to the subject which task description to follow, and to start the right system with the appropriate noiselevel. Each subject performed an equal number of tasks for each system (3 tasks), noise level (6 tasks) and solution type (6 tasks for each of the types 0, 1, or multiple solutions). Also, each subject performed one task for all combinations of system and noise level. Overall, each combination of system, noise level, and solution type was used in an equal number of dialogues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trial setup",
"sec_num": "4.2"
},
{
"text": "In Table 1 , some general statistics of the corpus resulting from the trial are given. The semantic error rate is based on substitutions, insertions and deletions errors on semantic items. When tested after the trial on the transcribed user utterances, the semantic error rate was 4.1% whereas the semantic error rate on the ASR input was 25.2%. This means that 84% of the error rate was due to the ASR.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Tables 2 and 3 present success rates (Succ.) and average performance scores (Perf.), comparing the two HIS dialogue managers with the two MDP base- line systems. For the success rates, also the standard deviation (std.dev) is given, assuming a binomial distribution. The success rate is the percentage of successfully completed dialogues. A task is considered to be fully completed when the user is able to find the venue he is looking for and get all the additional information he asked for; if the task has no solution and the system indicates to the user no venue could be found, this also counts as full completion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "A task is considered to be partially completed when only the correct venue has been given. The results on partial completion are given in Table 2 , and the results on full completion in Table 3 . To mirror the reward function used in training, the performance for each dialogue is computed by assigning a reward of 20 points for full completion and subtracting 1 point for the number of turns up until a successful recommendation (i.e., partial completion). The results show that the trained HIS dialogue manager significantly outperforms both MDP based dialogue managers. For success rate on partial completion, both HIS systems perform better than the MDP systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 145,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 186,
"end": 193,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
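{
"text": "In symbols, the per-dialogue performance score used here is perf = 20 \u00b7 1[full completion] \u2212 N_turns, where N_turns counts the turns up to the successful recommendation (partial completion); for example, a fully completed dialogue whose correct venue was reached after 6 turns scores 20 \u2212 6 = 14.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},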
{
"text": "In the user trial, the subjects were also asked for a subjective judgement of the systems. After completing each task, the subjects were asked whether they had found the information they were looking for (yes/no). They were also asked to give a score on a scale from 1 to 5 (best) on how natural/intuitive they thought the dialogue was. Table 4 shows the results for the 4 systems used. The performance of the HIS systems is similar to the MDP systems, with a slightly higher success rate for the trained one and a slightly lower score for the handcrafted one.",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 344,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Subjective Results",
"sec_num": "4.3.1"
},
{
"text": "This paper has described recent work in training a POMDP-based dialogue manager to exploit the additional information available from a speech understanding system which can generate ranked lists of hypotheses. Following a brief overview of the Hidden Information State dialogue manager and policy optimisation using a user simulator, results have been given for both simulated user and real user dialogues conducted at a variety of noise levels. The user simulation results have shown that although the rewards are similar, training with 2-best rather than 1-best outputs from the user simulator yields better success rates at high noise levels. In view of this result, we would have liked to investigate training on longer N-best lists, but currently computational constraints prevent this. We hope in the future to address this issue by developing more efficient state partitioning strategies for the HIS system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "The overall results on real data collected from the user trial clearly indicate increased robustness by the HIS system. We would have liked to be able to plot performance and success scores as a function of noise level or speech understanding error rate, but there is great variability in these kinds of complex real-world dialogues and it transpired that the trial data was insufficient to enable any statistically meaningful presentation of this form. We estimate that we need at least an order of magnitude more trial data to properly investigate the behaviour of such systems as a function of noise level. The trial described here, including transcription and analysis consumed about 30 man-days of effort. Increasing this by a factor of 10 or more is not therefore an option for us, and clearly an alternative approach is needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "We have also reported results of subjective success rate and opinion scores based on data obtained from subjects after each trial. The results were only weakly correlated with the measured performance and success rates. We believe that this is partly due to confusion as to what constituted success in the minds of the subjects. This suggests that for subjective results to be meaningful, measurements such as these will only be really useful if made on live systems where users have a real rather than imagined information need. The use of live systems would also alleviate the data sparsity problem noted earlier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Finally and in conclusion, we believe that despite the difficulties noted above, the results reported in this paper represent a first step towards establishing the POMDP as a viable framework for developing spoken dialogue systems which are significantly more robust to noisy operating conditions than conventional state-based systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This research was partly funded by the UK EPSRC under grant agreement EP/F013930/1 and by the EU FP7 Programme under grant agreement 216594 (CLASSIC project: www.classic-project.org).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A tractable DDN-POMDP Approach to Affective Dialogue Modeling for General Probabilistic Frame-based Dialogue Systems",
"authors": [
{
"first": "T",
"middle": [
"H"
],
"last": "Bui",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Poel",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nijholt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zwiers",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc 5th Workshop on Knowledge and Reasoning in Practical Dialogue Systems",
"volume": "",
"issue": "",
"pages": "34--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "TH Bui, M Poel, A Nijholt, and J Zwiers. 2007a. A tractable DDN-POMDP Approach to Affective Dia- logue Modeling for General Probabilistic Frame-based Dialogue Systems. In Proc 5th Workshop on Knowl- edge and Reasoning in Practical Dialogue Systems, pages 34-57.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Practical dialogue manager development using POMDPs",
"authors": [
{
"first": "T",
"middle": [
"H"
],
"last": "Bui",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Van Schooten",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hofs",
"suffix": ""
}
],
"year": 2007,
"venue": "8th SIGdial Workshop on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "TH Bui, B van Schooten, and D Hofs. 2007b. Practi- cal dialogue manager development using POMDPs . In 8th SIGdial Workshop on Discourse and Dialogue, Antwerp.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Planning and Acting in Partially Observable Stochastic Domains",
"authors": [
{
"first": "",
"middle": [],
"last": "Lp Kaelbling",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ml Littman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cassandra",
"suffix": ""
}
],
"year": 1998,
"venue": "Artificial Intelligence",
"volume": "101",
"issue": "",
"pages": "99--134",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LP Kaelbling, ML Littman, and AR Cassandra. 1998. Planning and Acting in Partially Observable Stochastic Domains. Artificial Intelligence, 101:99-134.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Spoken Dialogue Management Using Probabilistic Reasoning",
"authors": [
{
"first": "N",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thrun",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N Roy, J Pineau, and S Thrun. 2000. Spoken Dialogue Management Using Probabilistic Reasoning. In Proc ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Error Simulation for Training Statistical Dialogue Systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2007,
"venue": "ASRU 07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Schatzmann, B Thomson, and SJ Young. 2007. Error Simulation for Training Statistical Dialogue Systems. In ASRU 07, Kyoto, Japan.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Statistical User and Error Modelling for Spoken Dialogue Systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Schatzmann",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Schatzmann. 2008. Statistical User and Error Mod- elling for Spoken Dialogue Systems. Ph.D. thesis, Uni- versity of Cambridge.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Training a real-world POMDP-based Dialog System",
"authors": [
{
"first": "B",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Weilhammer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2007,
"venue": "HLT/NAACL Workshop \"Bridging the Gap: Academic and Industrial Research in Dialog Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B Thomson, J Schatzmann, K Weilhammer, H Ye, and SJ Young. 2007. Training a real-world POMDP-based Dialog System. In HLT/NAACL Workshop \"Bridging the Gap: Academic and Industrial Research in Dialog Technologies\", Rochester.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Bayesian Update of Dialogue State for Robust Dialogue Systems",
"authors": [
{
"first": "B",
"middle": [],
"last": "Thomson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2008,
"venue": "Int Conf Acoustics Speech and Signal Processing ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B Thomson, J Schatzmann, and SJ Young. 2008. Bayesian Update of Dialogue State for Robust Dia- logue Systems. In Int Conf Acoustics Speech and Sig- nal Processing ICASSP, Las Vegas.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Partially Observable Markov Decision Processes for Spoken Dialog Systems",
"authors": [
{
"first": "J",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2007,
"venue": "Computer Speech and Language",
"volume": "21",
"issue": "2",
"pages": "393--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "JD Williams and SJ Young. 2007a. Partially Observable Markov Decision Processes for Spoken Dialog Sys- tems. Computer Speech and Language, 21(2):393- 422.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Scaling POMDPs for Spoken Dialog Management",
"authors": [
{
"first": "J",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Audio, Speech and Language Processing",
"volume": "15",
"issue": "",
"pages": "2116--2129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "JD Williams and SJ Young. 2007b. Scaling POMDPs for Spoken Dialog Management. IEEE Audio, Speech and Language Processing, 15(7):2116-2129.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Hidden Information State Approach to Dialog Management",
"authors": [
{
"first": "",
"middle": [],
"last": "Sj Young",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schatzmann",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Weilhammer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ye",
"suffix": ""
}
],
"year": 2002,
"venue": "Int Conf Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "SJ Young, J Schatzmann, K Weilhammer, and H Ye. 2007. The Hidden Information State Approach to Dia- log Management. In ICASSP 2007, Honolulu, Hawaii. SJ Young. 2002. Talking to Machines (Statistically Speaking). In Int Conf Spoken Language Processing, Denver, Colorado.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Spoken Dialogue Management as Planning and Acting under Uncertainty",
"authors": [
{
"first": "B",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B Zhang, Q Cai, J Mao, E Chang, and B Guo. 2001. Spoken Dialogue Management as Planning and Acting under Uncertainty. In Eurospeech, Aalborg, Denmark.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Abstract view of a POMDP-based spoken dialogue system"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Overview of the HIS system dialogue cycle"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Average simulated dialogue success rate as a function of error rate for a hand-crafted (hdc), noise-free and incrementally trained (increm) policy."
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Average simulated dialogue reward as a function of error rate for a hand-crafted (hdc), noise-free and incrementally trained (increm) policy."
},
"TABREF3": {
"html": null,
"text": "Success rates and performance results on partial completion.",
"num": null,
"content": "<table><tr><td colspan=\"2\">Full Task Completion statistics</td><td/><td/></tr><tr><td>System</td><td colspan=\"3\">Succ. (std.dev) #turns Perf.</td></tr><tr><td>MDP-HDC</td><td>64.81 (4.96)</td><td>5.86</td><td>7.10</td></tr><tr><td>MDP-TRA</td><td>65.74 (4.93)</td><td>6.18</td><td>6.97</td></tr><tr><td>HIS-HDC</td><td>63.89 (4.99)</td><td>8.57</td><td>4.20</td></tr><tr><td>HIS-TRA</td><td>78.70 (4.25)</td><td>6.36</td><td>9.38</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "Success rates and performance results on full completion.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"html": null,
"text": "Subjective performance results from the user trial.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}