{
"paper_id": "W00-0304",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:34:45.915181Z"
},
"title": "NJFun: A Reinforcement Learning Spoken Dialogue System",
"authors": [
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs --Research",
"location": {
"addrLine": "180 Park Avenue Florham Park",
"postCode": "07932",
"region": "NJ",
"country": "USA"
}
},
"email": ""
},
{
"first": "Satinder",
"middle": [],
"last": "Singh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs --Research",
"location": {
"addrLine": "180 Park Avenue Florham Park",
"postCode": "07932",
"region": "NJ",
"country": "USA"
}
},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Kearns",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs --Research",
"location": {
"addrLine": "180 Park Avenue Florham Park",
"postCode": "07932",
"region": "NJ",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Marilyn",
"middle": [],
"last": "Walker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs --Research",
"location": {
"addrLine": "180 Park Avenue Florham Park",
"postCode": "07932",
"region": "NJ",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes NJFun, a real-time spoken dialogue systemthat-provides users with information about things to d~ in New Jersey. NJFun automatically optimizes its dialogue strategy over time, by using a methodology for applying reinforcement learning to a working dialogue system with human users.",
"pdf_parse": {
"paper_id": "W00-0304",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes NJFun, a real-time spoken dialogue systemthat-provides users with information about things to d~ in New Jersey. NJFun automatically optimizes its dialogue strategy over time, by using a methodology for applying reinforcement learning to a working dialogue system with human users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Using the formalism of Markov decision processes (MDPs) and the algorithms of reinforcement learning (RL) has become a standard approach to many AI problems that involve an agent learning to optimize reward by interaction with its environment (Sutton and Barto, 1998) . We have adapted the methods of RL to the problem of automatically learning a good dialogue strategy in a fielded spoken dialogue system. Here is a summary of our proposed methodology for developing and evaluating spoken dialogue systems using R.L:",
"cite_spans": [
{
"start": 243,
"end": 267,
"text": "(Sutton and Barto, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Choose an appropriate reward measure for dialogues, and an appropriate representation for dialogue states.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Build an initial state-based training system that creates an exploratory data set. Despite being exploratory, this system should provide the desired basic functionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Use these training dialogues to build an empirical MDP model on the state space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Compute the optimal dialogue policy according to this MDF, using RL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Reimplement the system using the learned dialogue policy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
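{
"text": "Editorial note, not part of the original paper: the five-step methodology above can be realized, at least schematically, by estimating an empirical MDP from logged exploratory dialogues and then solving it for a greedy policy. The following Python sketch uses hypothetical names (estimate_mdp, solve) and assumes each logged dialogue is a list of (state, action, reward, next_state) tuples; it illustrates the general technique, not NJFun's actual code.\n\nfrom collections import defaultdict\n\ndef estimate_mdp(dialogues):\n    # dialogues: list of trajectories, each a list of (state, action, reward, next_state)\n    counts = defaultdict(lambda: defaultdict(int))\n    reward_sum = defaultdict(float)\n    for trajectory in dialogues:\n        for s, a, r, s_next in trajectory:\n            counts[(s, a)][s_next] += 1\n            reward_sum[(s, a)] += r\n    trans = {sa: {s2: n / sum(succ.values()) for s2, n in succ.items()} for sa, succ in counts.items()}\n    mean_r = {sa: reward_sum[sa] / sum(succ.values()) for sa, succ in counts.items()}\n    return trans, mean_r\n\ndef solve(trans, mean_r, gamma=1.0, sweeps=200):\n    # Tabular value iteration over the empirical MDP; unseen or terminal states keep value 0.\n    actions = defaultdict(set)\n    for s, a in trans:\n        actions[s].add(a)\n    value = defaultdict(float)\n    for _ in range(sweeps):\n        for s, acts in actions.items():\n            value[s] = max(mean_r[(s, a)] + gamma * sum(p * value[s2] for s2, p in trans[(s, a)].items()) for a in acts)\n    # Return the greedy policy with respect to the converged values.\n    return {s: max(acts, key=lambda a: mean_r[(s, a)] + gamma * sum(p * value[s2] for s2, p in trans[(s, a)].items())) for s, acts in actions.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},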
{
"text": "In this demonstration session paper, we briefly describe our system, present some sample dialogues, and summarize our main contributions and limitations. Full details of our work (e.g. our reinforcement learning methodology, analysis establishing the veracity of the MDP we learn, a description of an experimental evaluation of NJFun, analysis of our learned dialogue strategy) can be found in two forthcoming technical papers .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The NJFun System NJFun is a reM-time spoken dialogue system that provides users with information about things to do in New Jersey. 1 An example dialogue with NJFun is shown in Figure 1 . NJFun is built using an internal platform for spoken dialogue systems. NJFun uses a speech recognizer with stochastic language models trained from example user utterances, and a TTS system based on concatenative diphone synthesis. Its database is populated from the nj. online webpage to contain information about activities. NJFun indexes this database using three attributes: activity type, location, and time of day.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "Informally, the NJFun dialogue manager sequentially queries the user regarding the activity, location and time attributes, respectively. NJFun first asks the user for the current attribute (and possibly the other attributes, depending on the initiative). If the current attribute's value is not obtained, NJFun asks for the attribute (and possibly the later attributes) again. If NJFun still does not obtain a value, N J-Fun moves on to the next attribute(s). Whenever NJFun successfully obtains a value, it can confirm the value, or move on and attempt to obtain the next attribute(s)? When NJFun has finished asking about the attributes, it queries the database (using a wildcard for each unobtained attribute value).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
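{
"text": "Editorial note, not part of the original paper: the attribute loop just described can be summarized in a short control-flow sketch. The names ask_user, confirm_value, and query_database are hypothetical callbacks standing in for NJFun's prompting, confirmation, and database components, and the limit of one re-ask per attribute is an assumption made for illustration.\n\nATTRIBUTES = ['activity', 'location', 'time']\nMAX_ASKS = 2  # assumed: one ask plus one re-ask per attribute\n\ndef run_dialogue(ask_user, confirm_value, query_database):\n    values = {}\n    for attr in ATTRIBUTES:\n        for attempt in range(MAX_ASKS):\n            value = ask_user(attr, attempt)  # returns None if no value was obtained\n            if value is not None and confirm_value(attr, value):\n                values[attr] = value\n                break\n            # otherwise re-ask once, then give up and move on to the next attribute\n    # unobtained attributes become wildcards in the database query\n    return query_database({a: values.get(a, '*') for a in ATTRIBUTES})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},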
{
"text": "We use reinforcement learning (RL) to optimize dialogue strategy, lq.L requires that all potential actions for each state be specified. Note that at some states it is easy for a human to make the correct action choice. We made obvious dialogue strategy choices in advance, and used learning only to optimize the difficult choices. In NJFun, we restricted the action choices to 1) the type of initiative to use 2Note that it is possible for users to specify multiple attributes, in any order, in a single utterance. However, NJFun will always process multiple attributes using its predefined sequential ordering. when asking or reasking for an attribute, and 2) whether to confirm an attribute value once obtained. The optimal actions may vary with dialogue state, and are subject to active debate in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "The examples in Figure 2 shows that NJFun can ask the user about the first 2 attributes 3 using three types of initiative, based on the combination of the wording of the system prompt (open versus directive), and the type of grammar NJFun uses during ASR (restrictive versus non-restrictive). If NJFun uses an open question with an unrestricted grammar, it is using user initiative (e.g., GreetU). If N J-Fun instead uses a directive prompt with a restricted grammar, the system is using system initiative (e.g.,",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "2",
"sec_num": null
},
{
"text": "). If NJFun uses a directive question with a non-restrictive grammar, it is using mixed initiative, because it is giving the user an opportunity to take the initiative by supplying extra information (e.g., ReAsklM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},
{
"text": "NJFun can also vary the strategy used to confirm each attribute. If NJFun asks the user to explicitly verify an attribute, it is using explicit confirmation (e.g., ExpConf2 for the location, exemplified by $2 in Figure 1 ). If NJFun does not generate any confirmation prompt, it is using no confirmation (an action we call NoConf).",
"cite_spans": [],
"ref_spans": [
{
"start": 212,
"end": 220,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},
{
"text": "Solely for the purposes of controlling its operation (as opposed to the learning, which we discuss in a moment), NJFun internally maintains an operations vector of 14 variables. 2 variables track whether the system has greeted the user, and which attribute the system is currently attempting to obtain. For each of the 3 attributes, 4 variables track whether '~ \"Greet\" is equivalent to asking for the first attribute. N J-Fun always uses system initiative for the third attribute, because at that point the user can only provide the time of day. the system has obtained the attribute's value, the system's confidence in the value (if obtained), the number of times the system has asked the user about the attribute, and the type of ASR grammar most recently used to ask for the attribute.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},
{
"text": "The formal state space S maintained by NJFun for the purposes of learning is much simpler than the operations vector, due to data sparsity concerns. The dialogue state space $ contains only 7 variables, which are summarized in Figure 3 , and is easily computed from the operations vector. The \"greet\" variable tracks whether the system has greeted the user or not (no=0, yes=l). \"Attr\" specifies which attribute NJFun is currently attempting to obtain or verify (activity=l, location=2, time=3, done with attributes=4). \"Conf\" represents the confidence that NJFun has after obtaining a value for an attribute. The values 0, 1, and 2 represent low, medium and high ASR confidence. The values 3 and 4 are set when ASR hears \"yes\" or \"no\" after a confirmation question. \"Val\" tracks whether NJFun has obtained a value for the attribute (no=0, yes=l). \"Times\" tracks the number of times that NJFun has asked the user about the attribute. \"Gram\" tracks the type of grammar most recently used to obtain the attribute (0=non-restrictive, l=restrictive). Finally, \"history\" represents whether NJFun had trouble understanding the user in the earlier part of the conversation (bad=0, good=l). We omit the full definition, but as an example, when NJFun is working on the second attribute (location), the history variable is set to 0 if NJFun does not have an activity, has an activity but has no confidence in the value, or needed two queries to obtain the activity.",
"cite_spans": [],
"ref_spans": [
{
"start": 227,
"end": 235,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},
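{
"text": "Editorial note, not part of the original paper: the seven learning-state variables just described can be written down directly as a small record type. The class below is a hypothetical illustration; the value ranges follow Figure 3 and the definitions above.\n\nfrom dataclasses import dataclass\n\n@dataclass(frozen=True)\nclass DialogueState:\n    greet: int    # 0 = not yet greeted, 1 = greeted\n    attr: int     # 1 = activity, 2 = location, 3 = time, 4 = done with attributes\n    conf: int     # 0/1/2 = low/medium/high ASR confidence, 3/4 = heard yes/no after a confirmation\n    val: int      # 0 = no value obtained for the current attribute, 1 = value obtained\n    times: int    # 0, 1, or 2: times the user has been asked about the attribute\n    gram: int     # 0 = non-restrictive grammar, 1 = restrictive grammar\n    history: int  # 0 = trouble earlier in the dialogue (bad), 1 = good history\n\n# The initial state of Figure 5: not yet greeted, working on the activity attribute.\ninitial_state = DialogueState(greet=0, attr=1, conf=0, val=0, times=0, gram=0, history=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},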
{
"text": "In order to apply RL with a limited amount of training data, we need to design a small state space I greet attr conf val times gram history [ 0,1 1,2,3,4 0,1,2,3,4 0,1 0,1,2 0, that makes enough critical distinctions to support learning. The use of S yields a state space of size 62. The state space that we utilize here, although minimal, allows us to make initiative decisions based on the success of earlier exchanges, and confirmation decisions based on ASR confidence scores and grammars.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 189,
"text": "I greet attr conf val times gram history [ 0,1 1,2,3,4 0,1,2,3,4 0,1 0,1,2 0,",
"ref_id": null
}
],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},
{
"text": "In order to learn a good dialogue strategy via RL we have to explore the state action space. The state/action mapping representing NJFun's initial exploratory dialog@ strategy EIC (Exploratory for Initiative and Confirmation) is given in Figure 4 . Only the exploratory portion of the strategy is shown, namely all those states for which NJFun has an action choice. For each such state, we list the two choices of actions available. (The action choices in boldface are the ones eventually identified as optimal by the learning process.) The EIC strategy chooses randomly between these two actions when in the indicated state, in order to maximize exploration and minimize data sparseness when constructing our model. Since there are 42 states with 2 choices each, there is a search space of 242 potential dialogue strategies; the goal of the RL is to identify an apparently optimal strategy from this large search space. Note that due to the randomization of the EIC strategy, the prompts are designed to ensure the coherence of all possible action sequences. Figure 5 illustrates how the dialogue strategy in Figure 4 generates the dialogue in Figure 1 . Each row indicates the state that NJFun is in, the action executed in this state, the corresponding turn in Figure 1 , and the reward received. The initial state represents that NJFun will first attempt to obtain attribute 1. NJFun executes GreetU (although as shown in Figure 4 , Greets is also possible), generating the first utterance in Figure 1 . After the user's response, the next state represents that N J-Fun has now greeted the user and obtained the activity value with high confidence, by using a nonrestrictive grammar. NJFun chooses not to confirm the activity, which causes the state to change but no prompt to be generated. The third state represents that NJFun is now working on the second attribute (location), that it already has this value with high confidence (location was obtained with activity after the user's first utterance), and that the dialogue history is good. This time NJFun chooses to confirm the attribute with the second NJFun utterance, and the state changes again. The processing of time is similar to that of location, which leads NJFun to the final state, where it performs the action \"Tell\" (cor- t g 0 1 0 0 0 0 0 1 1 0 0 1 0 0 1 1 0 1 0 0 0 1 1 0 1 0 1 0 1 1 1 1 0 0 0 1 1 1 1 0 1 0 1 1 2 I 0 0 0 1 1 2 1 0 1 0 1 1 4 0 0 1 3 0 1 0 0 1 3 0 1 0 0 1 3 0 1 0 1 1 3 0 1 0 1 1 3 1 1 0 0 1 3 1 1 0 0 1 3 1 1 0 1 i 3 1 I 0 i 1 3 2 1 0 0 1 3 2 1 0 0 1 3 2 1 0 1 i 3 2 1 0 ",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1060,
"end": 1068,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1110,
"end": 1118,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1145,
"end": 1153,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1264,
"end": 1272,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1426,
"end": 1434,
"text": "Figure 4",
"ref_id": null
},
{
"start": 1497,
"end": 1505,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},
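{
"text": "Editorial note, not part of the original paper: operationally, the EIC strategy flips a fair coin between the two admissible actions in each of the 42 exploratory states and defers to fixed choices elsewhere. The sketch below is a hypothetical illustration; the two table entries shown are the first state of Figure 5 (GreetS vs. GreetU) and the following state 1 1 2 1 0 0 0 (NoConf vs. ExpConf1).\n\nimport random\n\n# State tuples are (greet, attr, conf, val, times, gram, history).\nEXPLORATORY_CHOICES = {\n    (0, 1, 0, 0, 0, 0, 0): ('GreetS', 'GreetU'),\n    (1, 1, 2, 1, 0, 0, 0): ('NoConf', 'ExpConf1'),\n    # ... remaining exploratory states from Figure 4\n}\n\ndef eic_action(state, fixed_policy, rng=random):\n    # fixed_policy holds the obvious choices made in advance for non-exploratory states.\n    if state in EXPLORATORY_CHOICES:\n        return rng.choice(EXPLORATORY_CHOICES[state])\n    return fixed_policy[state]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},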
{
"text": "Figure 4: Exploratory portion of EIC strategy. Turn Reward gaevtgh 0100000 GreetU S1 0 I 121000 NoConf 0 1 2 2 1 0 0 1 ExpConf2 $2 0 1 3 2 1 0 0 1 ExpConf3 $3 0 1 4 0 0 0 0 0 Tell S4 1 Figure 5 : Generating the dialogue in Figure 1. responding to querying the database, presenting the results to the user, and asking the user to provide a reward). Note that in NJFun, the reward is always No. $11:",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 196,
"text": "Figure 5",
"ref_id": null
},
{
"start": 226,
"end": 235,
"text": "Figure 1.",
"ref_id": null
}
],
"eq_spans": [],
"section": "GreetS",
"sec_num": null
},
{
"text": "Thank~ou for using the system. Please give me feedback by saying 'good', 'so-so', or 'bad'. UII:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Action",
"sec_num": null
},
{
"text": "The main contribution of this work is that we have developed and empirically validated a practical methodology for using RL to build a real dialogue system that optimizes its behavior from dialogue data. Unlike traditional approaches to learning dialogue strategy from data, which are limited to searching a handful of policies, our RL approach is able to search many tens of thousands of dialogue strategies. In particular, the traditional approach is to pick a handful of strategies that experts intuitively feel are good, implement each policy as a separate system, collect data from representative human users for each system, and then use standard statistical tests on that data to pick the best system, e.g. (Danieli and Gerbino, 1995) . In contrast, our use of RL allowed us to explore 242 strategies that were left in our search space after we excluded strategies that were clearly suboptimal. An empirical validation of our approach is detailed in two forthcoming technical papers . We obtained 311 dialogues with the exploratory (i.e., training) version of NJFun, constructed an MDP from this training data, used RL to compute the optimal dialogue strategy in this MDP, reimplemented NJFun such that it used this learned dialogue strategy, and obtained 124 more dialogues. Our main result was that task completion improved from 52% to 64% from training to test data. Furthermore, analysis of our MDP showed that the learned strategy was not only better than EIC, but also better than other fixed choices proposed in the literature .",
"cite_spans": [
{
"start": 714,
"end": 741,
"text": "(Danieli and Gerbino, 1995)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions",
"sec_num": "4"
},
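{
"text": "Editorial note, not part of the original paper: once the empirical MDP has been estimated, any fixed strategy (EIC, or a strategy proposed in the literature) can be scored inside that model by standard policy evaluation and compared with the learned strategy. The sketch below uses hypothetical names and assumes every (state, action) pair chosen by the policy was observed in the training data.\n\nfrom collections import defaultdict\n\ndef evaluate_policy(policy, trans, mean_r, gamma=1.0, sweeps=200):\n    # policy: state -> action; trans and mean_r are the empirical MDP estimates.\n    value = defaultdict(float)  # terminal or unseen states keep value 0\n    for _ in range(sweeps):\n        for s, a in policy.items():\n            value[s] = mean_r[(s, a)] + gamma * sum(p * value[s2] for s2, p in trans[(s, a)].items())\n    return value",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contributions",
"sec_num": "4"
},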
{
"text": "The main limitation of this effort to automate the design of a good dialogue strategy is that our current framework has nothing to say about how to choose the reward measure, or how to best represent dialogue state. In NJFun we carefully but manually designed the state space of the dialogue. In the future, we hope to develop a learning methodology to automate the choice of state space for dialogue systems. With respect to the reward function, our empirical evaluation investigated the impact of using a number of reward measures (e.g., user feedback such as U4 in Figure 1 , task completion rate, ASR accuracy), and found that some rewards worked better than others. We would like to better understand these differences among the reward measures, investigate the use of a learned reward function, and explore the use of non-terminal rewards.",
"cite_spans": [],
"ref_spans": [
{
"start": 568,
"end": 576,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Limitations",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Metrics for evaluating dialogue strategies in a spoken language system",
"authors": [
{
"first": "M",
"middle": [],
"last": "Danieli",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gerbino",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 1995 AAA1 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation",
"volume": "",
"issue": "",
"pages": "34--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Danieli and E. Gerbino. 1995. Metrics for eval- uating dialogue strategies in a spoken language system. In Proceedings of the 1995 AAA1 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, pages 34-39.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic optimization of dialogue management",
"authors": [
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kearns",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Litman, M. Kearns, S. Singh, and M. Walker. 2000. Automatic optimization of dialogue man- agement. Manuscript submitted for publication.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Empirical evaluation of a reinforcement learning spoken dialogue system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Kearns",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Singh, M. Kearns, D. Litman, and M. Walker. 2000. Empirical evaluation of a reinforcement learning spoken dialogue system. In Proceedings of AAAI 2000.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Reinforcement Learning: An Introduction",
"authors": [
{
"first": "R",
"middle": [
"S"
],
"last": "Sutton",
"suffix": ""
},
{
"first": "A",
"middle": [
"G"
],
"last": "Barto",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. S. Sutton and A. G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"text": "Sample initiative strategy choices.",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "State features and values.",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "present two dialogues that were generated using different dialogue strategies in the EIC search space. For example, although NJFun begins both dialogues in the first state inFigure 4, NJFun executes GreetU in one dialogue and GreetS in the other. Similarly, although the next state of NJFun",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "An example training dialogue with NJFun.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td/><td>SI:</td><td>Welcome to NJFun. How may i help you?</td><td>._</td></tr><tr><td/><td>UI:</td><td>I'd like to find um winetasting in Lambertville in the morning.</td></tr><tr><td/><td/><td colspan=\"2\">(ASR output: I'd like to find out wineries the in the Lambertville in the morning.)</td></tr><tr><td/><td>$2:</td><td>Did you say you are interested in Lambertville?</td></tr><tr><td/><td>U2:</td><td>Yes.</td></tr><tr><td/><td>$3:</td><td>Did you say you want to go in the morning?</td></tr><tr><td/><td>U3:</td><td>Yes.</td></tr><tr><td/><td>$4:</td><td>I found a winery near Lambertville that is open in the morning. It is [... ]</td></tr><tr><td/><td/><td>Please give me feedback by saying 'good', 'so-so', or 'bad'.</td></tr><tr><td/><td>U4:</td><td>Good.</td></tr><tr><td/><td colspan=\"3\">Figure 1: An example dialogue with NJFun (after optimization via RL).</td></tr><tr><td>Action</td><td>m</td><td/></tr><tr><td>GreetS</td><td colspan=\"3\">Welcome to NJFun. Please say an activity name or say 'list activities' for a list of activities I know about.</td></tr><tr><td>GreetU</td><td colspan=\"2\">Welcome to NJFun. How may I help you?</td></tr><tr><td>ReAsk 1 S</td><td colspan=\"3\">I know about amusement parks, aquariums, cruises, historic sites, museums, parks, theaters, wineries,</td></tr><tr><td/><td colspan=\"2\">and zoos.</td></tr><tr><td>ReAsk 1 M</td><td/><td/></tr><tr><td>Ask2S</td><td/><td/></tr><tr><td>Ask2U</td><td/><td/></tr><tr><td>ReAsk2S</td><td/><td/></tr><tr><td>ReAsk2~</td><td/><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "Please say an activity name from this list. Please tell me the activity type.You can also tell me the location and time. Please say the name of the town or city that you are interested in. Please give me more information. Please tell me the name of the town or city that you are interested in. \"Please tell me the location that you are interested in. You can also tell me the time."
},
"TABREF1": {
"content": "<table><tr><td>SI:</td><td>Welcome to NJFun. Please say an activity name or say 'list activities' for a list of activities I know about.</td></tr><tr><td>UI:</td><td>I would like to visit a historic site in Stanhope this morning.</td></tr><tr><td/><td>(ASR output: I would like to the zoo historic sites in stanhope historic)</td></tr><tr><td>$2:</td><td>Did you say you are interested in going to a zoo?</td></tr><tr><td>U2:</td><td>No.</td></tr><tr><td colspan=\"2\">$3: I know about $4: Did you say you are interested in going to a historic site?</td></tr><tr><td>U4:</td><td>Yes.</td></tr><tr><td>$5:</td><td>Please give me more information.</td></tr><tr><td>U5:</td><td>(user says nothing)</td></tr><tr><td>S6:</td><td>Please tell me the location that you are interested in. You can also tell me the time.</td></tr><tr><td>U6:</td><td>Visit a historic. (ASR output: What historic)</td></tr><tr><td>$7:</td><td>What time of the day do you want to go?</td></tr><tr><td>U7:</td><td>Stanhope. (no ASR output)</td></tr><tr><td>$8:</td><td>Do you want to go in the morning, in the afternoon, or in the evening?</td></tr><tr><td>US:</td><td>Morning.</td></tr><tr><td>S9:</td><td>Did you say you want to go in the morning?</td></tr><tr><td>U9:</td><td>Yes.</td></tr><tr><td>Sl0:</td><td>I found 11 historic sites in New Jersey that are open in the morning. The first 3 [... ] Would you like to hear more?</td></tr><tr><td>UiO:</td><td/></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "amusement parks, aquariums, cruises, historic sites, museums, parks, theaters, wineries, and zoos. Please say an activity name from this list. U3:I would like to visit a historic site. (ASR output: I would like to visit historic sites)"
}
}
}
}