|
{ |
|
"paper_id": "2007", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:49:03.238207Z" |
|
}, |
|
"title": "Dialogue Policy Learning for combinations of Noise and User Simulation: transfer results", |
|
"authors": [ |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Edinburgh University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Xingkun", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Edinburgh University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Once a dialogue strategy has been learned for a particular set of conditions, we need to know how well it will perform when deployed in different conditions to those it was specifically trained for, i.e. how robust it is in transfer to different conditions. We first present novel learning results for different ASR noise models combined with different user simulations. We then show that policies trained in high-noise conditions perform significantly better than those trained for lownoise conditions, even when deployed in low-noise environments.", |
|
"pdf_parse": { |
|
"paper_id": "2007", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Once a dialogue strategy has been learned for a particular set of conditions, we need to know how well it will perform when deployed in different conditions to those it was specifically trained for, i.e. how robust it is in transfer to different conditions. We first present novel learning results for different ASR noise models combined with different user simulations. We then show that policies trained in high-noise conditions perform significantly better than those trained for lownoise conditions, even when deployed in low-noise environments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "For any dialogue system, a major development effort is in designing the dialogue policy of the system, that is, which dialogue actions (e.g. ask(destination city) or explict confirm) the system should perform. Machine-learning approaches to dialogue policies have been proposed by several authors, for example (Levin et al., 2000; Young, 2000; Henderson et al., 2005) . These approaches are very attractive because of their potential in efficient development and automatic optimization of dialogue systems.", |
|
"cite_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 330, |
|
"text": "(Levin et al., 2000;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 331, |
|
"end": 343, |
|
"text": "Young, 2000;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 367, |
|
"text": "Henderson et al., 2005)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We will address the issue of whether policies trained for one dialogue situation can be used successfully in other dialogue situations (Paek, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 147, |
|
"text": "(Paek, 2006)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For example, perhaps you have trained an optimal policy for an operating environment where the word-error rate (WER) is 5%, but you want to deploy this policy for a new application where you are not sure what the average WER is. So, you want to know how well the policy transfers between operating situations. Likewise, perhaps you have trained a policy on a data set of cooperative users, but you want to know how that policy will behave in contact with less co-operative users. So, you want to know how useful the policy is with different users.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "These transfer issues are important because when deploying a real dialogue application we will not know these parameters exactly in advance, so we cannot train for the exact operating situation, but we want to be able to learn robust dialogue policies which are transferable to different noise/user/timepenalty situations, which we do not know about precisely before deployment.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The issue of policy transfer has been partially explored before as part of recent work on types of user simulations (Schatzmann et al., 2005) . Here, the authors explore how well policies trained on different types of user simulation perform when tested with others. They train and test on three approaches to user simulation: a bigram model (Eckert et al., 1997) , the Pietquin model (Pietquin, 2004) , and the Levin model (Levin et al., 2000) . They show that strategies learned with a \"poor\" user model can appear to perform well when tested with the same user model, but perform badly when tested on a \"better\" user model. However, the focus of (Schatzmann et al., 2005) is on the quality of the user simulation techniques themselves, rather than robustness of the learned dialogue policies. We will focus on one type of stochastic user simulation but different types of users and on different environmental conditions. (Frampton and Lemon, 2006) train a policy for 4-gram stochastic user simulation and test it on a 5gram simulation, and vice-versa, showing that the learned policy works well for the 2 different simulations. However, these simulations are trained on the same dataset (Walker et al., 2001 ) and thus do not simulate different types of user or noise conditions. Similarly (Henderson et al., 2005) test and train on different segments of the COMMUNICATOR data, so the results presented there do not deal with the issue of policy transfer. show that a single policy trained on a human-machine dialogue corpus also performs well with real users of a dialogue system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 141, |
|
"text": "(Schatzmann et al., 2005)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 363, |
|
"text": "(Eckert et al., 1997)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 401, |
|
"text": "(Pietquin, 2004)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 424, |
|
"end": 444, |
|
"text": "(Levin et al., 2000)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 649, |
|
"end": 674, |
|
"text": "(Schatzmann et al., 2005)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 924, |
|
"end": 950, |
|
"text": "(Frampton and Lemon, 2006)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 1190, |
|
"end": 1210, |
|
"text": "(Walker et al., 2001", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1293, |
|
"end": 1317, |
|
"text": "(Henderson et al., 2005)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "We experiment with a 3-slot information-seeking system, resulting in 8 binary state variables (1 for whether each slot is filled, 1 for whether each slot is confirmed, 2 for whether the last user move was \"yes\" or \"no\"), resulting in 256 distinct dialogue states. There are 5 possible system actions (e.g. implicit-confirm, greet, present-info).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The experimental set-up", |
|
"sec_num": "2" |
|
}, |
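{

"text": "To make the state encoding concrete, the following sketch (ours, not from the paper; the names are illustrative and the full 5-action set is an assumption beyond the three example actions given above) packs the 8 binary variables into a single index in the range 0-255:\n\n# Illustrative sketch only: one possible encoding of the 8 binary state variables.\nfrom dataclasses import dataclass\n\n@dataclass\nclass DialogueState:\n    filled: tuple      # (slot1_filled, slot2_filled, slot3_filled)\n    confirmed: tuple   # (slot1_confirmed, slot2_confirmed, slot3_confirmed)\n    last_user_yes: bool\n    last_user_no: bool\n\n    def index(self) -> int:\n        # Pack the 8 booleans into an integer in [0, 255].\n        bits = list(self.filled) + list(self.confirmed) + [self.last_user_yes, self.last_user_no]\n        return sum(int(b) << i for i, b in enumerate(bits))\n\n# 5 system actions; only the first three are named in the text, the rest are assumed.\nSYSTEM_ACTIONS = ['implicit_confirm', 'greet', 'present_info', 'ask_slot', 'explicit_confirm']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The experimental set-up",

"sec_num": "2"

},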
|
{ |
|
"text": "We use the SHARSHA Hierarchical Reinforcement Learning algorithm of REALL (Shapiro and Langley, 2002) to learn over the policy space for obtaining 3 information slots. For all combinations of Turn Penalty, noise, and user models we train each policy on 32,000 iterations (approx. 8000 dialogues). We then test each policy (including the hand-coded policies) over 1000 dialogues in the conditions for which they were trained. Statistical significance is measured by independent samples ttests, over 1000 test dialogues.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 101, |
|
"text": "(Shapiro and Langley, 2002)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The experimental set-up", |
|
"sec_num": "2" |
|
}, |
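{

"text": "As an illustration of the significance-testing step, the following sketch (ours, under our own assumptions; the reward samples are random placeholders, not the paper's data) runs an independent-samples t-test over per-dialogue rewards:\n\n# Illustrative sketch: independent-samples t-test over per-dialogue rewards.\nimport numpy as np\nfrom scipy import stats\n\nrng = np.random.default_rng(0)\nlearned_rewards = rng.normal(loc=50.0, scale=15.0, size=1000)    # placeholder data\nhandcoded_rewards = rng.normal(loc=36.0, scale=15.0, size=1000)  # placeholder data\n\nt_stat, p_value = stats.ttest_ind(learned_rewards, handcoded_rewards)\nprint('t =', round(t_stat, 2), 'p =', round(p_value, 4))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The experimental set-up",

"sec_num": "2"

},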
|
{ |
|
"text": "We use the hierarchical structure of REALL (Shapiro and Langley, 2002) programs to encode commonsense constraints on the dialogue problem, while still leaving many options for learning. The hierarchical plans encode obvious decisions such as: \" never confirm already confirmed slots\".", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 70, |
|
"text": "(Shapiro and Langley, 2002)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The experimental set-up", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We use a reward function which incorporates noise modelling, as in (Rieser and Lemon, 2007) . For each dialogue we have, as is now commonly used:", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 91, |
|
"text": "(Rieser and Lemon, 2007)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reward function", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "reward = completionValue -dialogueLength * TurnPenalty", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reward function", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "However, for our noise modelling, the completionValue of a dialogue is defined as the percentage probability that the user goal is in the actual result set that they are presented with. See (Rieser and Lemon, 2007) for full details. In our experiments Low Noise (LN) means that there is a 100% chance of confirmed slots being correct and an 80% chance of filled (but not confirmed) slots being correct. In a real application domain we will not know these probabilities exactly, but we want to be able to learn dialogue policies which are transferrable to different noise situations, which we do not know about precisely before deployment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 214, |
|
"text": "(Rieser and Lemon, 2007)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reward function", |
|
"sec_num": "2.1" |
|
}, |
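{

"text": "A minimal sketch of this reward, under our reading of the description above (how the per-slot correctness probabilities combine into the completion value is our assumption, not something the paper specifies):\n\n# Illustrative sketch: per-dialogue reward with a noise-dependent completion value.\ndef completion_value(slot_states, p_confirmed=1.0, p_filled=0.8):\n    # slot_states: one of 'confirmed', 'filled' or 'empty' for each of the 3 slots.\n    # Low Noise: confirmed slots correct with prob. 1.0, filled-only slots with prob. 0.8.\n    prob = 1.0\n    for s in slot_states:\n        if s == 'confirmed':\n            prob *= p_confirmed\n        elif s == 'filled':\n            prob *= p_filled\n        else:\n            return 0.0   # an empty slot means the user goal cannot be in the result set\n    return 100.0 * prob  # percentage probability that the user goal is in the result set\n\ndef reward(slot_states, dialogue_length, turn_penalty):\n    return completion_value(slot_states) - dialogue_length * turn_penalty\n\n# e.g. reward(['confirmed', 'confirmed', 'filled'], dialogue_length=8, turn_penalty=5) -> 40.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Reward function",

"sec_num": "2.1"

},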
|
{ |
|
"text": "We use 2 probabilistic user simulations: \"Cooperative\" (C) and \"Uncooperative\" (U). Each simulated user produces a response to the previous system dialogue move, with a particular probablility distribution conditioned on the previous system move. For example, if the system asks for slot1 (e.g. \"what type of food do you want?\") the cooperative user responds to this according to the a probability distribution over dialogue acts estimated from the COM-MUNICATOR corpus (Walker et al., 2001) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 470, |
|
"end": 491, |
|
"text": "(Walker et al., 2001)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simulated users", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In contrast, the \"Uncooperative\" user simply has a flat probability distribution over the all the possible dialogue acts: it is just as likely to be silent as it is to supply information. This is not intended to be a particularly realistic user simulation, but it provides us with behaviour that is useful as one end of a spectrum of possible behaviours.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Simulated users", |
|
"sec_num": "2.2" |
|
}, |
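{

"text": "A sketch of how both simulated users could be sampled (ours; the act names and probability values are invented placeholders standing in for the distributions estimated from the COMMUNICATOR corpus):\n\n# Illustrative sketch: both users map the previous system move to a distribution\n# over user dialogue acts; only the distributions differ.\nimport random\n\nUSER_ACTS = ['provide_info', 'yes', 'no', 'silent']\n\n# Cooperative user: conditioned on the system move (placeholder probabilities).\nCOOPERATIVE = {\n    'ask_slot':         {'provide_info': 0.8, 'silent': 0.1, 'no': 0.1},\n    'explicit_confirm': {'yes': 0.7, 'no': 0.2, 'silent': 0.1},\n}\n\n# Uncooperative user: flat distribution over all possible acts, whatever the system did.\nUNCOOPERATIVE = {move: {act: 1.0 / len(USER_ACTS) for act in USER_ACTS}\n                 for move in COOPERATIVE}\n\ndef simulate_user(system_move, model):\n    dist = model[system_move]\n    acts = list(dist)\n    weights = [dist[a] for a in acts]\n    return random.choices(acts, weights=weights, k=1)[0]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Simulated users",

"sec_num": "2.2"

},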
|
{ |
|
"text": "The hand-coded dialogue policies obey the same commonsense constraints as mentioned above but they also try to confirm all slots implicitly or explicitly (based on standard rules) and then close the dialogue, except for cases where particular dialogue length thresholds are surpassed. For example, if the current dialogue length is greater than 10 the handcoded policy will immediately provide information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Baseline hand-coded policies", |
|
"sec_num": "2.3" |
|
}, |
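{

"text": "A sketch of the baseline as we read it (the confirmation rules are only described as 'standard' in the text, so the fill-then-explicitly-confirm ordering below is our simplification):\n\n# Illustrative sketch of the hand-coded baseline: fill, then confirm, then close,\n# unless the dialogue length threshold (10) has been exceeded.\ndef handcoded_policy(filled, confirmed, dialogue_length, length_threshold=10):\n    # filled, confirmed: lists of 3 booleans, one per information slot.\n    if dialogue_length > length_threshold:\n        return 'present_info'               # threshold exceeded: answer immediately\n    for i in range(3):\n        if not filled[i]:\n            return ('ask_slot', i)          # ask for missing slots first\n    for i in range(3):\n        if not confirmed[i]:\n            return ('explicit_confirm', i)  # never confirm already-confirmed slots\n    return 'present_info'                   # all slots confirmed: close the dialogue",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Baseline hand-coded policies",

"sec_num": "2.3"

},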
|
{ |
|
"text": "In general, learning takes about 500 dialogues before a policy of confirming as many slots as possible in the shortest time is discovered. Early in the training runs the learner experiments with very short dialogues (smaller length penalties), but usually receives less completion reward for them and so learns how to conduct the dialogue so as to trade-off between turn penalties (TP) and completion value. For example, in the High Noise, Cooperative user, turn penalty 5 case, after a policy is discovered, testing the learned policy in the same situation (but with learning and exploration turned off), the average dialogue reward is 49.94 (see figure 1, plotting average reward every 50 test dialogues, and table 1). Contrast this now with the performance of the hand-coded policy in the same situation (high noise, cooperative user, TP=5), over 1000 test dialogues, also shown in figure 1. The average reward for the hand-coded policy is 36.43 in these conditions, which means that the learned policy provides a relative increase in average reward of 37% in this case. This result is significant at p < .01. Table 1 shows all results for the High Noise, Cooperative user case, for turn penalties (TP) ranging from 0 to 20. Here we can see that the learner is able to develop policies which are significantly better than the hand-coded policy. The exception is the TP=10 case, where the learned policy is not significantly better than the handcoded one (p = .25). For the significant results, the average relative increase in reward for learned policies is 28.4%", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1113, |
|
"end": 1120, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results versus hand-coded policies", |
|
"sec_num": "3" |
|
}, |
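{

"text": "For reference, the 37% figure follows directly from the two average rewards quoted above:\n\n# Relative increase in average reward of the learned over the hand-coded policy.\nlearned, handcoded = 49.94, 36.43\nprint(round(100 * (learned - handcoded) / handcoded, 1))   # -> 37.1 (reported as 37%)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Results versus hand-coded policies",

"sec_num": "3"

},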
|
{ |
|
"text": "Considering the average dialogue lengths in each case, note that the hand-coded policy is able to complete the dialogues in, on average, fewer than 7 moves, which is less than the hand-coded length threshold (10).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results versus hand-coded policies", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The learned policies, on the other hand, are able to discover their own local length/completion value trade-offs, and we see that, as expected, average dialogue length decreases as Turn Penalty increases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results versus hand-coded policies", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Hand Similar results hold for the other combinations of Noise, User type, and Turn Penalty.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Learned Policy", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the following experiments we chose to investigate the representative TP=5 case. We thus have 2 degrees of variation: user type (Cooperative/Uncooperative, C/U), and noise conditions (High/Low, H/L). Testing all combinations of these learned policies, for 1000 dialogues each, we obtained the results shown in However, taking the same trained policy (C,L 1st column) and testing it with a Uncooperative user in High Noise conditions (row 4) results only in an average reward of 9.99. We would expect that the lead-ing diagonal of this table should contain the highest values (i.e. that the best policy for certain conditions is the one trained on those conditions), but surprisingly, this is not the case. For example, training a C,H policy and testing it for C,L gives better results than training for C,L (and testing for C,L). This is significant at p < .05. This shows that a C,H policy in fact transfers well to C, L conditions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer results", |
|
"sec_num": "4" |
|
}, |
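{

"text": "The 'Average' row of table 2 is the column mean over the four test conditions; for instance, for the C,L-trained policy:\n\n# Column average for the C,L-trained policy in table 2.\nrewards = [73.66, 49.64, 23.67, 9.99]          # test conditions C,L / C,H / U,L / U,H\nprint(round(sum(rewards) / len(rewards), 2))   # -> 39.24",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Transfer results",

"sec_num": "4"

},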
|
{ |
|
"text": "Looking at the 4 policies C,L, C,H, U,L, and U,H we can see that C,H has the best transfer properties. Interestingly, C,H is the best policy for all of the testing conditions C,L, C,H, and U,H. But should we then train only in High noise conditions? Consider the following set of results (highlighted in bold font in This indeed shows that it is better to train in High noise conditions than low noise, no matter what conditions you deploy in. These results are all significant at p < .05 except for the case \"train C,H and test C,H > train C,L and test C,H\" (p = .37). This means that for cooperative users, training in High noise is as good as training in Low noise. These results show that, when training a policy for an operating environment for which you don't have much data (i.e. the developer does not yet know the noise and user characteristics) it is better to train and deploy a High noise policy, than to deploy a policy trained for Low noise conditions. Similar results show that policies trained on uncooperative users perform well when tested on cooperative users but not vice versa.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transfer results", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We addressed the robust of learned strategies in transfer to different conditions. We provided transfer results for dialogue policy learning and are the first to present results for different ASR noise models combined with different user models. We first showed that our learned policies for a range of environmental conditions (Noise, Users, Turn Penalties) significantly outperform hand-coded dialogue policies (e.g average 28% relative reward increase for cooperative users in high noise). We then compared different learned policies in terms of their transfer properties. We showed that policies trained in highnoise conditions perform significantly better than those trained for low-noise conditions, even when deployed in low-noise environments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Acknowledgements This work is funded by the EPSRC (grant number EP/E019501/1) and by Scottish Enterprise under the Edinburgh-Stanford Link.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "User modelling for spoken dialogue system evaluation", |
|
"authors": [ |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Eckert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Levin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pieraccini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of ASRU", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--87", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "W. Eckert, E. Levin, and R. Pieraccini. 1997. User mod- elling for spoken dialogue system evaluation. In Pro- ceedings of ASRU, pages 80-87.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning more effective dialogue strategies using limited dialogue move features", |
|
"authors": [ |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Frampton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oliver", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew Frampton and Oliver Lemon. 2006. Learning more effective dialogue strategies using limited dia- logue move features. In Proceedings of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Hybrid Reinforcement/Supervised Learning for Dialogue Policies from COMMUNICATOR data", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Georgila", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IJCAI workshop on Dialogue Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Henderson, O. Lemon, and K. Georgila. 2005. Hy- brid Reinforcement/Supervised Learning for Dialogue Policies from COMMUNICATOR data. In IJCAI workshop on Dialogue Systems.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Evaluating Effectiveness and Portability of Reinforcement Learned Dialogue Strategies with real users: the TALK TownInfo Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Georgila", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Henderson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proc. ACL/IEEE SLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "O. Lemon, K. Georgila, and J. Henderson. 2006. Evaluating Effectiveness and Portability of Reinforce- ment Learned Dialogue Strategies with real users: the TALK TownInfo Evaluation. In Proc. ACL/IEEE SLT.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "A stochastic model of human-machine interaction for learning dialog strategies", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Levin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Pieraccini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Eckert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "IEEE Transactions on Speech and Audio Processing", |
|
"volume": "8", |
|
"issue": "1", |
|
"pages": "11--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "E. Levin, R. Pieraccini, and W. Eckert. 2000. A stochas- tic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11-23.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Reinforcement learning for spoken dialogue systems: Comparing strengths and weaknesses for practical deployment", |
|
"authors": [ |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Paek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Dialogue on Dialogues. Interspeech2006 -ICSLP Satellite Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tim Paek. 2006. Reinforcement learning for spoken di- alogue systems: Comparing strengths and weaknesses for practical deployment. In Dialogue on Dialogues. Interspeech2006 -ICSLP Satellite Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "A Framework for Unsupervised Learning of Dialogue Strategies", |
|
"authors": [ |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Pietquin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olivier Pietquin. 2004. A Framework for Unsupervised Learning of Dialogue Strategies. Presses Universi- taires de Louvain, SIMILAR Collection.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Learning dialogue strategies for interactive database search", |
|
"authors": [ |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Rieser", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "V. Rieser and O. Lemon. 2007. Learning dialogue strate- gies for interactive database search. In Interspeech.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Effects of the user model on simulation-based learning of dialogue strategies", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Schatzmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"N" |
|
], |
|
"last": "Stuttle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Weilhammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "IEEE ASRU Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Schatzmann, M. N. Stuttle, K. Weilhammer, and S. Young. 2005. Effects of the user model on simulation-based learning of dialogue strategies. In IEEE ASRU Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Separating skills from preference: using learning to program by reward", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Shapiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Langley", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Intl. Conf. on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Shapiro and P. Langley. 2002. Separating skills from preference: using learning to program by reward. In Intl. Conf. on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Quantitative and qualitative evaluation of DARPA Communicator spoken dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Walker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Passonneau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Boland", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Walker, R. Passonneau, and J.Boland. 2001. Quan- titative and qualitative evaluation of DARPA Commu- nicator spoken dialogue systems. In Proc. ACL.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Probabilistic methods in spoken dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "Steve", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "Philosophical Transactions of the Royal Society (Series A)", |
|
"volume": "358", |
|
"issue": "", |
|
"pages": "1389--1402", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steve Young. 2000. Probabilistic methods in spoken dialogue systems. Philosophical Transactions of the Royal Society (Series A), 358(1769):1389-1402.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"text": "Testing: High noise, cooperative user, TP 5: Learned versus Hand-coded policy", |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td colspan=\"2\">Training</td><td/></tr><tr><td>Testing</td><td>C,L</td><td>C,H</td><td>U,L</td><td>U,H</td></tr><tr><td>C,L</td><td colspan=\"4\">73.66 74.72 54.86 54.48</td></tr><tr><td>C,H</td><td colspan=\"4\">49.64 50.08 21.07 25.36</td></tr><tr><td>U,L</td><td colspan=\"4\">23.67 27.84 37.62 39.37</td></tr><tr><td>U,H</td><td colspan=\"4\">09.99 14.40 08.93 10.22</td></tr><tr><td colspan=\"5\">Average: 39.24 41.76 30.62 32.36</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "Transfer results for learned policies Looking at table 2, we can see, for example, that training with a Cooperative user in Low noise (1st column) and testing with the same conditions (1st row) results in an average dialogue reward of 73.66.", |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}
|
} |
|
} |
|
} |