{
"paper_id": "W08-0118",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:39:53.784663Z"
},
"title": "Optimal Dialog in Consumer-Rating Systems using a POMDP Framework",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21218",
"region": "MD",
"country": "USA"
}
},
"email": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Nguyen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {
"addrLine": "1 Microsoft Way",
"postCode": "98052",
"settlement": "Redmond",
"region": "WA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Corporation",
"location": {
"addrLine": "1 Microsoft Way",
"postCode": "98052",
"settlement": "Redmond",
"region": "WA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Voice-Rate is an experimental dialog system through which a user can call to get product information. In this paper, we describe an optimal dialog management algorithm for Voice-Rate. Our algorithm uses a POMDP framework, which is probabilistic and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Simulation results show that the POMDP system performs significantly better than a deterministic baseline system in terms of both dialog failure rate and dialog interaction time. To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate.",
"pdf_parse": {
"paper_id": "W08-0118",
"_pdf_hash": "",
"abstract": [
{
"text": "Voice-Rate is an experimental dialog system through which a user can call to get product information. In this paper, we describe an optimal dialog management algorithm for Voice-Rate. Our algorithm uses a POMDP framework, which is probabilistic and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Simulation results show that the POMDP system performs significantly better than a deterministic baseline system in terms of both dialog failure rate and dialog interaction time. To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, web-based shopping and rating systems have provided a valuable service to consumers by allowing them to shop products and share their assessments of products online. The use of these systems, however, requires access to a web interface, typically through a laptop or desktop computer, and this restricts their usefulness. While mobile phones also provide some web access, their small screens make them inconvenient to use. Therefore, there arises great interests in having a spoken dialog interface through which a user can call to get product information (e.g., price, rating, review, etc.) on the fly. Voice-Rate (Zweig et al., 2007) is such a system. Here is a typical scenario under which shows the usefulness of the Voice-Rate system. A user enters a store and finds that a digital camera he has not planned to buy is on sale. Before he decides to buy the camera, he takes out his cell phone and calls Voice-Rate to see whether the price is really a bargain and what other people have said about the camera. This helps him to make a wise decision. The Voice-Rate system (Zweig et al., 2007) involves many techniques, e.g., information retrieval, review summarization, speech recognition, speech synthesis, dialog management, etc. In this paper, we mainly focus on the dialog management component.",
"cite_spans": [
{
"start": 632,
"end": 652,
"text": "(Zweig et al., 2007)",
"ref_id": "BIBREF6"
},
{
"start": 1092,
"end": 1112,
"text": "(Zweig et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When a user calls Voice-Rate for the information of a specific product, the system needs to identify, from a database containing millions of products, the exact product the user intends. To achieve this, the system first solicits the user for the product name. Using the product name as a query, the system then retrieves from its database a list of products related to the query. Ideally, the highest-ranked product should be the one intended by the user. In reality, this is often not the case due to various reasons. For example, there might be a speech recognition error or an information retrieval ranking error. Moreover, the product name is usually very ambiguous in identifying an exact product. The product name that the user says may not be exactly the same as the name in the product database. For example, while the user says \"Canon Powershot SD750\", the exact name in the product database may be \"Canon Powershot SD750 Digital Camera\". Even the user says the exact name, it is possible that the same name may be corresponding to different products in different categories, for instance books and movies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to the above reasons, whenever the Voice-Rate system finds multiple products matching the user's initial speech query, it initiates a dialog procedure to identify the intended product by asking questions about the products. In the product database, many attributes can be used to identify a product. For example, a digital camera has the product name, category, brand, resolution, zoom, etc. Given a list of products, different attributes may have different ability to distinguish the products. For example, if the products belong to many categories, the category attribute is very useful to distinguish the products. In contrast, if all the products belong to a single category, it makes no sense to ask a question on the category. In addition to the variability in distinguishing products, different attributes may require different knowledge from the user in order for them to answer questions about these attributes. For example, while most users can easily answer a question on category, they may not be able to answer a question on the part number of a product, though the part number is unique and perfect to distinguish products. Other variabilities are in the difficulty that the attributes impose on speech recognition and speech synthesis. Clearly, given a list of products and a set of attributes, what questions and in what order to ask is essential to make the dialog successful. Our goal is to dynamically find such important attributes at each stage/turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The baseline system (Zweig et al., 2007) asks questions only on product name and category. The order of questions is fixed: first ask questions on product category, and then on name. Moreover, it is deterministic and does not model uncertainly in speech recognition and user knowledge. Partially observable Markov decision process (POMDP) has been shown to be a general framework to capture the uncertainty in spoken dialog systems. In this paper, we present a POMDP-based probabilistic system, which utilizes rich product information and captures uncertainty in speech recognition and user knowledge. We propose a novel method to learn a user knowledge model from a review database. Our simulation results show that the POMDP-based system improves the baseline significantly.",
"cite_spans": [
{
"start": 20,
"end": 40,
"text": "(Zweig et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, our work is the first to show that a POMDP can be successfully used for disambiguation in a complex voice search domain like Voice-Rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Voice-Rate Dialog System Overview Figure 1 shows the main flow of the Voice-Rate system, simplified. When a user calls Voice-Rate for the information of a specific product, the system first solicits the user for the product name. Treating the user input as a query and the product names in the product database as documents, the system retrieves a list of products that match the user input based on a TF-IDF measure. Then, the dialog manager dynamically generates questions to identify the specific intended product. Once the product is found, the system plays back its rating information. In this paper, we mainly focus on the dialog manager component.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Table 1: Baseline Dialog Manager Algorithm. Step-1: remove products that do not match the user action. Step-2: is there any category question to ask? If yes, ask the question and return; if no, go to Step-3. Step-3: ask a product name question.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Baseline Dialog Manager: Table 1 shows the baseline dialog manager. In Step-1, it removes all the products that are not consistent with the user response. For example, if the user answers \"camera\" when given a question on category, the system removes all the products that do not belong to category \"camera\". In Step-2 and Step-3, the baseline system asks questions about product name and product category, and product category has a higher priority.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3 Overview of POMDP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A Partially Observable Markov Decision Process (POMDP) is a general framework to handle uncertainty in a spoken dialog system. Following nota-tions in Williams and Young (2007) , a POMDP is defined as a tuple {S, A, T, R, O, Z, \u03bb, b 0 } where S is a set of states s describing the environment; A is a set of machine actions a operating on the environment; T defines a transition probability P (s |s, a); R defines a reward function r(s, a); O is a set of observations o, and an observation can be thought as a corrupted version of a user action; Z defines an observation probability P (o |s , a); \u03bb is a geometric discount factor; and b 0 is an initial belief vector.",
"cite_spans": [
{
"start": 151,
"end": 176,
"text": "Williams and Young (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Definitions",
"sec_num": "3.1"
},
{
"text": "The POMDP operates as follows. At each timestep (a.k.a. stage), the environment is in some unobserved state s. Since s is not known exactly, a distribution (called a belief vector b) over possible states is maintained where b(s) indicates the probability of being in a particular state s. Based on the current belief vector b, an optimal action selection algorithm selects a machine action a, receives a reward r, and the environment transits to a new unobserved state s . The environment then generates an observation o (i.e., a user action), after which the system update the belief vector b. We call the process of adjusting the belief vector b at each stage \"belief update\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Definitions",
"sec_num": "3.1"
},
{
"text": "As mentioned in Williams and Young (2007) , it is not trivial to apply the POMDP framework to a specific application. To achieve this, one normally needs to design the following three components:",
"cite_spans": [
{
"start": 16,
"end": 41,
"text": "Williams and Young (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applying POMDP in Practice",
"sec_num": "3.2"
},
{
"text": "\u2022 State Diagram Modeling \u2022 Belief Update \u2022 Optimal Action Selection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying POMDP in Practice",
"sec_num": "3.2"
},
{
"text": "The state diagram defines the topology of the graph, which contains three kinds of elements: system state, machine action, and user action. To drive the transitions, one also needs to define a set of models (e.g., user goal model, user action model, etc.). The modeling assumptions are applicationdependent. The state diagram, together with the models, determines the dynamics of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying POMDP in Practice",
"sec_num": "3.2"
},
{
"text": "In general, the belief update depends on the observation probability and the transition probability, while the transition probability itself depends on the modeling assumptions the system makes. Thus, the exact belief update formula is application-specific.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying POMDP in Practice",
"sec_num": "3.2"
},
{
"text": "Optimal action selection is essentially an optimization algorithm, which can be defined as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Applying POMDP in Practice",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a * = arg max a\u2208A G(P (a)),",
"eq_num": "(1)"
}
],
"section": "Applying POMDP in Practice",
"sec_num": "3.2"
},
{
"text": "where A refers to a set of machine actions a. Clearly, the optimal action selection requires three sub-components: a goodness measure function G, a prediction algorithm P , and a search algorithm (i.e., the argmax operator). The prediction algorithm is used to predict the behavior of the system in the future if a given machine action a was taken. The search algorithm can use an exhaustive linear search or an approximated greedy search depending on the size of A (Murphy, 2000; Spaan and Vlassis, 2005) .",
"cite_spans": [
{
"start": 466,
"end": 480,
"text": "(Murphy, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 481,
"end": 505,
"text": "Spaan and Vlassis, 2005)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Applying POMDP in Practice",
"sec_num": "3.2"
},
{
"text": "In this section, we present our instantiation of POMDP in the Voice-Rate system. Table 2 summarizes the main design choices in the state diagram for our application, i.e., identifying the intended product from a large list of products.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "POMDP Framework in Voice-Rate",
"sec_num": "4"
},
{
"text": "As in Williams and Young (2007) , we incorporate both the user goal (i.e., the intended product) and the user action in the system state. Moreover, to efficiently update belief vector and compute optimal action, the state space is dynamically generated and pruned. In particular, instead of listing all the possible combinations between the products and the user actions, at each stage, we only generate states containing the products and the user actions that are relevant to the last machine action. Moreover, at each stage, if the belief probability of a product is smaller than a threshold, we prune out this product and all its associated system states. Note that the intended product may be pruned away due to an overly large threshold. In the simulation, we will use a development set to tune this threshold.",
"cite_spans": [
{
"start": 6,
"end": 31,
"text": "Williams and Young (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "State Diagram Design",
"sec_num": "4.1.1"
},
{
"text": "As shown in Table 2 , five kinds of machine actions are defined. The questions on product names are usually long, imposing difficulty in speech synthesis/recgonition and user input. Thus, short questions (e.g., questions on category or simple attributes) are preferable. This partly motivate us to exploit rich product information to help the dialog.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "State Diagram Design",
"sec_num": "4.1.1"
},
{
"text": "Seven kinds of user actions are defined as shown in Table 2 . Among them, the user actions \"others\", \"not related\", and \"not known\" are special. Specifically, to limit the question length and to ensure the human is able to memorize all the options, we restrict the number of options in a single question to a threshold N (e.g., 5). Clearly, given a list of products and a question, there might be more than N possible options. In such a case, we need to merge some options into the \"others\" class. The third example in Table 2 shows an example with the \"others\" option. One may exploit a clustering algorithm (e.g., an iterative greedy search algorithm) to find an optimal merge. In our system, we simply take the top-(N -1) options (ranked by the belief probabilities) and treat all the remaining options as \"others\".",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 519,
"end": 526,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "State Diagram Design",
"sec_num": "4.1.1"
},
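The top-(N - 1) merging heuristic just described can be sketched as follows. This is a minimal illustration, not the authors' code; the option names and belief values are hypothetical.

```python
def merge_options(options, n_max=5):
    """Keep the (n_max - 1) options with the highest belief probability and
    fold the remaining probability mass into a single "others" option.

    `options` maps option name -> belief probability.
    """
    ranked = sorted(options.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) <= n_max:
        return dict(ranked)
    kept = dict(ranked[: n_max - 1])
    kept["others"] = sum(p for _, p in ranked[n_max - 1:])
    return kept

# Hypothetical belief over camera brands; six options exceed n_max = 5,
# so the two least probable brands are folded into "others" (mass 0.1).
beliefs = {"Canon": 0.4, "Nikon": 0.25, "Sony": 0.15,
           "Kodak": 0.1, "Fuji": 0.05, "Pentax": 0.05}
merged = merge_options(beliefs, n_max=5)
```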
{
"text": "The \"not related\" option is required when some candidate products are irrelevant to the question. For example, when the system asks a question regarding the attribute \"cpu speed\" while the products contain both books and computers, the \"not related\" option is required in case the intended product is a book.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Diagram Design",
"sec_num": "4.1.1"
},
{
"text": "Lastly, while some attributes are very useful to distinguish the products, a user may not have enough knowledge to answer a question on these attributes. For example, while there is a unique part number for each product, however, the user may not know the exact part number for the intended product. Thus, \"not known\" option is required whenever the system expects the user is unable to answer the question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "State Diagram Design",
"sec_num": "4.1.1"
},
{
"text": "We assume that the user does not change his goal (i.e., the intended product) along the dialog. We also assume that the user rationally answers the question to achieve his goal. Additionally, we assume that the speech synthesis is good enough such that the user always gets the right information that the system intends to convey. The two main models that we consider include an observation model that captures speech recognition uncertainty, and a user knowledge model that captures the variability of user knowledge required for answering questions on different attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1.2"
},
{
"text": "Observation Model: Since the speech recognition engine we are using returns only a one-best and its confidence value C \u2208 [0, 1]. We define the observation function as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P ( a u |a u ) = C if a u = a u , 1\u2212C |A u |\u22121 otherwise.",
"eq_num": "(2)"
}
],
"section": "Models",
"sec_num": "4.1.2"
},
{
"text": "where a u is the true user action, a u is the speech recognition output (i.e., corrupted user action), and A u is the set of user actions related to the last machine action. User Knowledge Model: In most of the applications (Roy et al., 2000; Williams, 2007) where the POMDP framework got applied, it is normally assumed that the user needs only common sense to answer the questions asked by the dialog system. Our application is more complex as the product information is very rich. A user may have different difficulty in answering different questions. For example, while a user can easily answer a question on category, he may not be able to answer a question on the part number. Thus, we define a user knowledge model to capture such uncertainty. Specifically, given a question (say a m ) and an intended product (say g u ) in the user's mind, we want to know how likely the user has required knowledge to answer the question. Formally, the user knowledge model is,",
"cite_spans": [
{
"start": 224,
"end": 242,
"text": "(Roy et al., 2000;",
"ref_id": "BIBREF2"
},
{
"start": 243,
"end": 258,
"text": "Williams, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1.2"
},
{
"text": "P(a_u|g_u, a_m) = P(unk|g_u, a_m) if a_u = unk; 1 \u2212 P(unk|g_u, a_m) if a_u = truth; 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1.2"
},
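Equation 3 is small enough to state directly in code. The sketch below is illustrative only; the function name and the string labels "unk"/truth are our own, not the paper's notation.

```python
def user_action_prob(a_u, truth, p_unk):
    """User knowledge model (Equation 3): probability of user action a_u,
    given the single correct answer `truth` to the asked question and
    p_unk = P(unk | g_u, a_m), the chance the user lacks the knowledge."""
    if a_u == "unk":
        return p_unk                # user admits not knowing
    if a_u == truth:
        return 1.0 - p_unk          # user gives the correct answer
    return 0.0                      # rational users give no wrong answers

# Hypothetical part-number question whose correct answer is "C1234";
# the user is assumed to lack the knowledge with probability 0.8.
p = user_action_prob("unk", "C1234", 0.8)
```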
{
"text": "(3) where unk represents the user action \"not known\". Clearly, given a specific product g u and a specific question a m , there is exactly one correct user action (represented by truth in Equation 3), and its probability is 1 \u2212 P (unk|g u , a m ). Now, to obtain a user knowledge model, we only need to obtain P (unk|g u , a m ). As shown in Table 2 , there are four kinds of question-type machine actions a m . We assume that the user always has knowledge to answer a question regarding the category and product name, and thus P (unk|g u , a m ) for these types of machine actions are zero regardless of what the specific product g u is. Therefore, we only need to consider P (unk|g u , a m ) when a m is a question about an attribute (say attr). Moreover, since there are millions of products, to deal with the data sparsity issue, we assume P (unk|g u , a m ) does not depends on a specific product g u , instead it depends on only the category (say cat) of the product g u . Therefore,",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 349,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (unk|g u , a m ) \u2248 P (unk|cat,attr).",
"eq_num": "(4)"
}
],
"section": "Models",
"sec_num": "4.1.2"
},
{
"text": "Now, we only need to get the probability P (unk|cat,attr) for each attribute attr in each category cat. To learn P (unk|cat,attr), one may collect data from human, which is very expensive. Instead, we learn this model from a database of online reviews for the products. Our method is based on the following intuition: if a user cares/knows about an attribute of a product, he will mention either the attribute name, or the attribute value, or both in his review of this product. With this intuition, the occurrence frequency of a given attr in a given category cat is collected from the review database, followed by proper weighting, scaling and normalization, and thus P (unk|cat,attr) is obtained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1.2"
},
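The review-mining intuition can be sketched roughly as below. This is an assumption-heavy illustration: the paper does not specify its exact weighting, scaling, and normalization, so the simple per-review mention fraction used here is our own stand-in.

```python
def learn_unk_model(reviews_by_cat, attrs):
    """Estimate P(unk | cat, attr) from a review corpus: the more often an
    attribute (its name or one of its values) is mentioned in reviews of a
    category, the more likely users know it.  Sketch only."""
    model = {}
    for cat, reviews in reviews_by_cat.items():
        text = " ".join(reviews).lower()
        for attr, values in attrs.items():
            mentions = text.count(attr.lower())
            mentions += sum(text.count(v.lower()) for v in values)
            frac = min(1.0, mentions / max(1, len(reviews)))
            model[(cat, attr)] = 1.0 - frac  # frequent mention -> low P(unk)
    return model

# Hypothetical toy corpus: "zoom" is mentioned often, "part number" never,
# so users are modeled as knowing the former and not the latter.
reviews = {"camera": ["great zoom", "the zoom is weak", "nice color"]}
attrs = {"zoom": ["3x", "10x"], "part number": []}
m = learn_unk_model(reviews, attrs)
```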
{
"text": "Based on the model assumptions in Section 4.1.2, the belief update formula for the state (g u , a u ) is,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief Update",
"sec_num": "4.2"
},
{
"text": "b(g u , a u ) = (5) k \u00d7 P ( a u |a u )P (a u |g u , a m ) au\u2208A(gu) b(g u , a u )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief Update",
"sec_num": "4.2"
},
{
"text": "where k is a normalization constant. The P ( a u |a u ) is the observation function as defined in Equation 2, while P (a u |g u , a m ) is the user knowledge model as defined in Equation 3. The A(g u ) represents the set of user actions a u related to the system states for which the intended product is g u . In our state representation, a single product g u is associated with several states which differ in the user action a u , and the belief probability of g u is the sum of the probabilities of these states. Therefore, even there is a speech recognition error or an unintentional user mistake, the true product still gets a non-zero belief probability (though the true/ideal user action a u gets a zero probability). Moreover, the probability of the true product will get promoted through later iterations. Therefore, our system has error-handling capability, which is one of the major advantages over the deterministic baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Belief Update",
"sec_num": "4.2"
},
{
"text": "As mentioned in Section 3.2, the optimal action selection involves three sub-components: a prediction algorithm, a goodness measure, and a search algorithm. Ideally, in our application, we should minimize the time required to successfully identify the intended product. Clearly, this is too difficult as it needs to predict the infinite future and needs to encode the time into a reward function. Therefore, for simplicity, we predict only one-step forward, and use the entropy as a goodness measure 1 . Formally, the optimization function is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimal Action Selection",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a * = arg min a\u2208A H(Products | a),",
"eq_num": "(6)"
}
],
"section": "Optimal Action Selection",
"sec_num": "4.3"
},
{
"text": "where H(Products | a) is the entropy over the belief probabilities of the products if the machine action a was taken. When predicting the belief vector using Equation 5, we consider only the user knowledge model and ignore the observation function 2 . In the above, we consider only the question-type machine actions. We also need to decide when to take the play rating action such that the dialog will terminate. Specifically, we take the play rating action whenever the belief probability of the most probable product is greater than a threshold. Moreover, the threshold should depend on the number of surviving products. For example, if there are fifty surviving products and the most probable product has a belief probability greater than 0.3, it is reasonable to take the play rating action. This is not true if there are only four surviving products. Also note that if we set the thresholds to too small values, the system may play the rating for a wrong product. We will use a development set to tune these thresholds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Optimal Action Selection",
"sec_num": "4.3"
},
{
"text": "We use an exhaustive linear search for the operator argmin in Equation 6. However, additional filtering during the search is required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Action Filtering during Search",
"sec_num": "4.3.1"
},
{
"text": "Repeated Question: Since the speech response from the user to a question is probabilistic, it is quite possible that the system will choose the same question that has been asked in previous stages 3 . Since our product information is very rich, many different questions have the similar capability to reduce entropy. Therefore, during the search, we simply ignore all the questions asked in previous stages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Action Filtering during Search",
"sec_num": "4.3.1"
},
{
"text": "\"Not Related\" Option: While reducing entropy helps to reduce the confusion at the machine side, it does not measure the \"weirdness\" of a question to the human. For example, when the intended product is a book and the candidate products contain both books and computers, it is quite possible that the optimal action, based solely on entropy reduction, is a question on the attribute \"cpu speed\". Clearly, such a question is very weird to the human as he is looking for a book that has nothing related to \"cpu speed\". Though the user may be able to choose the \"not related\" option correctly after thinking for a while, it degrades the dialog quality. Therefore, for a given question, whenever the system predicts that the user will have to choose the \"not related\" option with a probability greater than a threshold, we simply ignore such questions in the search. Clearly, if we set the threshold as zero, we essentially eliminates the \"not related\" option. That is, at each stage, we generate questions only on attributes that apply to all the candidate products. Since we dynamically remove products whose probability is smaller than a threshold at each stage, the valid question set dynamically expands. Specifically, at the beginning, only very general questions (e.g., questions on category) are valid, then more refined questions become valid (e.g., questions on product brand), and finally very specific questions are valid (e.g, questions on product model). This leads to very natural behavior in identifying a product, i.e., coarse to fine 4 . It also makes the system adapt to the user knowledge. Specifically, as the user demonstrates deeper knowledge of the products by answering the questions correctly, it makes sense to ask more refined questions about the products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Action Filtering during Search",
"sec_num": "4.3.1"
},
{
"text": "To evaluate system performance, ideally one should ask people to call the system, and manually collect the performance data. This is very expensive. Alternatively, we develop a simulation method, which is automatic and thus allow fast evaluation of the system during development 5 . In fact, many design choices in Section 4 are inspired by the simulation. Figure 2 illustrates the general framework for the simulation. The process is very similar to that in Figure 1 except Figure 2 : Flow Chart in Simulation recognizer are replaced with a simulated component, and that the simulated user has access to a user knowledge model. In particular, we generate the user action and its corrupted version using random number generators by following the models defined in Equations 3 and 2, respectively. We use a fixed value (e.g., 0.9) for C in Equation 2. Clearly, our goal here is not to evaluate the goodness of the user knowledge model or the speech recognizer. Instead, we want to see how the probabilistic dialog manger (i.e., POMDP) performs compared with the deterministic baseline dialog manager, and to see whether the richer attribute information helps to reduce the dialog interaction time.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 2",
"ref_id": null
},
{
"start": 459,
"end": 474,
"text": "Figure 1 except",
"ref_id": "FIGREF0"
},
{
"start": 475,
"end": 483,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Simulation Results",
"sec_num": "5"
},
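The simulated user/recognizer pair can be sketched by sampling from Equations 3 and 2 directly. The names below are hypothetical; the real simulator's details are not given in the paper.

```python
import random

def simulate_turn(truth, actions, p_unk, C, rng):
    """Simulate one dialog turn: draw the user action from the knowledge
    model (Equation 3), then corrupt it with the observation model
    (Equation 2), using a fixed recognizer confidence C (e.g., 0.9)."""
    # user action: "unk" with probability p_unk, otherwise the truth
    a_u = "unk" if rng.random() < p_unk else truth
    # recognizer output: correct with probability C, else a uniform error
    if rng.random() < C or len(actions) == 1:
        return a_u
    return rng.choice([a for a in actions if a != a_u])

rng = random.Random(0)
hits = sum(simulate_turn("yes", ["yes", "no", "unk"], 0.0, 0.9, rng) == "yes"
           for _ in range(10000))
# with p_unk = 0, roughly 90% of recognized actions should be "yes"
```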
{
"text": "In the system, we use three data resources: a product database, a review database, and a query-click database. The product database contains detailed information for 0.2 million electronics and computer related products. The review database is used for learning the user knowledge model. The queryclick database contains 2289 pairs in the format (text query, product clicked). One example pair is (Canon Powershot A700, Canon Powershot A700 6.2MP digital camera). We divide it into a development set (1308 pairs) and a test set (981 pairs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Resources",
"sec_num": "5.2"
},
{
"text": "For each initial query, the information retrieval (IR) engine returns a list of top-ranked products. Whether the intended product is in the returned list depends on the size of the list. If the intended product is in the list, the IR successfully recalled the product. Table 3 shows the correlation between the recall rate and the size of the returned list. Clearly, the larger the list size is, the larger the recall rate is. One may notice that the IR recall rate is low. This is because the query-click data set is very noisy, that is, the clicked product may be nothing to do with the query. For example, (msn shopping, Handspring Treo 270) is one of the pairs in our data set. ",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 276,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results on Information Retrieval",
"sec_num": "5.3"
},
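The recall-vs-list-size measurement in Table 3 can be sketched as a simple recall@k computation over the query-click pairs; the `retrieve` callable below is a hypothetical stand-in for the IR engine, which is not specified in code form in the paper.

```python
def recall_at_k(pairs, retrieve, k):
    """Fraction of (query, clicked_product) pairs whose clicked product
    appears in the top-k list returned by the retrieval engine.

    `retrieve(query, k)` stands in for the IR engine and must return a
    ranked list of at most k products.
    """
    hits = sum(1 for query, clicked in pairs if clicked in retrieve(query, k))
    return hits / len(pairs)
```

Noisy pairs such as (msn shopping, Handspring Treo 270) count as misses at every k, which is one way the noise in the query-click data depresses the measured recall.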
{
"text": "As mentioned in Section 4, several parameters in the system are configurable and tunable. Specifically, we set the max number of options in a question as 5, and the threshold for \"not related\" option as zero. We use the development set to tune the following parameters: the threshold of the belief probability below which the product is pruned, and the thresholds above which the most probable product is played. The parameters are tuned in a way such that no dialog error is made on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog System Configuration and Tuning",
"sec_num": "5.4"
},
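The thresholded control policy described above can be sketched as a single belief-maintenance step. This is an illustrative simplification under stated assumptions: the belief is a plain product-to-probability dictionary, and a single play threshold is shown where the paper tunes several.

```python
def dialog_step(belief, prune_threshold, play_threshold):
    """One step of the thresholded control policy.

    belief: dict mapping product -> probability. Products whose belief falls
    below prune_threshold are removed and the rest renormalized; if the most
    probable remaining product exceeds play_threshold, its rating is played,
    otherwise the system asks another question. Both thresholds are tuned on
    the development set so that no dialog error occurs there.
    """
    kept = {p: b for p, b in belief.items() if b >= prune_threshold}
    total = sum(kept.values())
    kept = {p: b / total for p, b in kept.items()}  # renormalize after pruning
    best, best_prob = max(kept.items(), key=lambda kv: kv[1])
    if best_prob >= play_threshold:
        return ("play_rating", best), kept
    return ("ask_question", None), kept
```

Pruning too aggressively discards the intended product (reason-1 in Section 5.5), while playing too eagerly rates the wrong product (reason-2), which is why both thresholds must be tuned jointly.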
{
"text": "Even the IR succeeds, the dialog system may not find the intended product successfully. In particular, the baseline system does not have error handling capability. Whenever the system makes a speech recognition error or the user mistakenly answers a question, the dialog system fails (either plays the rating for a wrong product or fails to find any product). On the contrary, our POMDP framework has error handling functionality due to its probabilistic nature. Table 5 compares the dialog error rate between the baseline and the POMDP systems. Clearly, the POMDP system performs much better to handle errors. Note that the POMDP system does not eliminate dialog failures on the test set because the thresholds are not perfect for the test set 6 . This is due to two reasons: the system may prune the intended product (reason-1), and the system may play the rating for a wrong product (reason-2). ",
"cite_spans": [],
"ref_spans": [
{
"start": 463,
"end": 470,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results on Error Handling",
"sec_num": "5.5"
},
{
"text": "It is quite difficult to measure the exact interaction time, so instead we measure it through the number of stages/characters/words required during the dialog process. Clearly, the number of characters is the one that matches most closely to the true time. Table 4 reports the average and maximum numbers. In general, the POMDP system performs much better than the baseline system. One may notice the difference in the number of stages between the baseline and the POMDP systems is not as significant as in the number of characters. This is because the POMDP system is able to exploit very short questions while the baseline system mainly uses the product name question, which is normally very long. The long question on product name also imposes difficulty in speech synthesis, user input, and speech recognition, though this is not reflected in the simulation.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 264,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Results on Interaction Time",
"sec_num": "5.6"
},
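The stage/word/character proxies for interaction time can be sketched as below; the function and its input format (a list of question prompt strings) are illustrative assumptions, not code from the paper.

```python
def interaction_cost(dialog_questions):
    """Proxies for interaction time over one dialog.

    dialog_questions: list of the prompt strings spoken during the dialog.
    Returns the number of stages (questions asked) and the total word and
    character counts of the prompts; character count tracks true time most
    closely.
    """
    stages = len(dialog_questions)
    words = sum(len(q.split()) for q in dialog_questions)
    chars = sum(len(q) for q in dialog_questions)
    return {"stages": stages, "words": words, "chars": chars}
```

Under these proxies, two dialogs with the same number of stages can still differ sharply in cost if one relies on long product-name prompts, which is exactly the gap Table 4 exposes between the baseline and the POMDP system.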
{
"text": "In this paper, we have applied the POMDP framework into Voice-Rate, a system through which a user can call to get product information (e.g., price, rating, review, etc.). We have proposed a novel method to learn a user knowledge model from a review database. Compared with a deterministic baseline system (Zweig et al., 2007) , the POMDP system is probabilistic and is able to handle speech recognition errors and user mistakes, in which case the de-terministic baseline system is doomed to fail. Moreover, the POMDP system exploits richer product information to reduce the interaction time required to complete a dialog. We have developed a simulation model, and shown that the POMDP system improves the baseline system significantly in terms of both dialog failure rate and dialog interaction time. We also implement our POMDP system into a speech demo and plan to carry out tests through humans.",
"cite_spans": [
{
"start": 305,
"end": 325,
"text": "(Zweig et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Due to this approximation, one may argue that our model is more like the greedy information theoretic model inPaek and Chickering (2005), instead of a POMDP model. However, we believe that our model follows the POMDP modeling framework in general, though it does not involve reinforcement learning currently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that we ignore the observation function only in the prediction, not in real belief update.3 In a regular decision tree, the answer to a question is deterministic. It never asks the same question as that does not lead to any additional reduction of entropy. This problem is also due to the fact we do not have an explicit reward function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While the baseline dialog manager achieves the similar behavior by manually enforcing the order of questions, the system here automatically discovers the order of questions and the question set is much more richer than that in the baseline.5 However, we agree that simulation is not without its limitations and the results may not precisely reflect real scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the POMDP system does not have dialog failures on the development set as we tune the system in this way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was conducted during the first author's internship at Microsoft Research; thanks to Dan Bohus, Ghinwa Choueiter, Yun-Cheng Ju, Xiao Li, Milind Mahajan, Tim Paek, Yeyi Wang, and Dong Yu for helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A survey of POMDP solution techniques",
"authors": [
{
"first": "K",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Murphy. 2000. A survey of POMDP solution tech- niques. Technical Report, U. C. Berkeley.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Markov assumption in spoken dialogue management",
"authors": [
{
"first": "T",
"middle": [],
"last": "Paek",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chickering",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc of SIGdial",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Paek and D. Chickering. 2005. The Markov assump- tion in spoken dialogue management. In Proc of SIG- dial 2005.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Spoken dialog management for robots",
"authors": [
{
"first": "N",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pineau",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Thrun",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Roy, J. Pineau, and S. Thrun. 2000. Spoken dialog management for robots. In Proc of ACL 2000.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Perseus: randomized point-based value iteration for POMDPs",
"authors": [
{
"first": "M",
"middle": [],
"last": "Spaan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Vlassis",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Artificial Intelligence Research",
"volume": "24",
"issue": "",
"pages": "195--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Spaan and N. Vlassis. 2005. Perseus: randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24:195-220.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Applying POMDPs to Dialog Systems in the Troubleshooting Domain",
"authors": [
{
"first": "J",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc HLT/NAACL Workshop on Bridging the Gap: Academic and Industrial Research in Dialog Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Williams. 2007. Applying POMDPs to Dialog Systems in the Troubleshooting Domain. In Proc HLT/NAACL Workshop on Bridging the Gap: Aca- demic and Industrial Research in Dialog Technology.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Partially Observable Markov Decision Processes for Spoken Dialog Systems",
"authors": [
{
"first": "J",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2007,
"venue": "Computer Speech and Language",
"volume": "21",
"issue": "2",
"pages": "231--422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Williams and S. Young. 2007. Partially Observable Markov Decision Processes for Spoken Dialog Sys- tems. Computer Speech and Language 21(2): 231- 422.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Voice-Rate Dialog System for Consumer Ratings",
"authors": [
{
"first": "G",
"middle": [],
"last": "Zweig",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Y",
"middle": [
"C"
],
"last": "Ju",
"suffix": ""
},
{
"first": "Y",
"middle": [
"Y"
],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Zweig, P. Nguyen, Y.C. Ju, Y.Y. Wang, D. Yu, and A. Acero. 2007. The Voice-Rate Dialog System for Consumer Ratings. In Proc of Interspeech 2007.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Flow Chart of Voice-Rate System",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "Category e.g., choose category: Electronics, Movie, Book Question on Product name e.g., choose product name: Canon SD750 digital camera, Canon Powershot A40 digital camera, Canon SD950 digital camera, Others",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Component</td><td>Design</td><td>Comments</td></tr><tr><td>System State</td><td>(Product, User action)</td><td>e.g., (HP Computer, Category: computer)</td></tr><tr><td>Machine Action</td><td>Question on Question on Attribute</td><td>e.g., choose memory size: 64M, 128M, 256M</td></tr><tr><td/><td>Confirmation question</td><td>e.g., you want Canon SD750 camera, yes or no?</td></tr><tr><td/><td>Play Rating</td><td>e.g., I think you want Canon SD750 digital camera,</td></tr><tr><td/><td/><td>here is the rating!</td></tr><tr><td>User Action</td><td>Category</td><td>e.g., Movie</td></tr><tr><td/><td>Product name</td><td>e.g., Canon SD750 digital camera</td></tr><tr><td/><td>Attribute value</td><td>e.g., memory size: 64M</td></tr><tr><td/><td>Others</td><td>used when a question has too many possible options</td></tr><tr><td/><td>Yes/No</td><td>used for a confirmation question</td></tr><tr><td/><td>Not related</td><td>used if the intended product is unrelated to the question</td></tr><tr><td/><td>Not known</td><td>used if the user does not have required knowledge to</td></tr><tr><td/><td/><td>answer the question</td></tr></table>"
},
"TABREF2": {
"text": "",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"text": "that the human user and the speech",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Begin</td><td/></tr><tr><td>Initial Query</td><td/></tr><tr><td colspan=\"2\">Information Retrieval</td></tr><tr><td colspan=\"2\">List of Products</td></tr><tr><td colspan=\"2\">Corrupted User Action</td></tr><tr><td>Dialog Manager \u2022 Baseline \u2022 POMDP</td><td>Simulated Speech Recognizer User Action</td></tr><tr><td/><td>Simulated User</td></tr><tr><td>Found</td><td>Question</td></tr><tr><td>product?</td><td>No</td></tr><tr><td>Yes Play Rating</td><td>\u2022 Intended product \u2022 User knowledge model</td></tr><tr><td>End</td><td/></tr></table>"
},
"TABREF5": {
"text": "Information Retrieval Recall Rates on Test set",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
},
"TABREF7": {
"text": "Interaction Time Results on Test Set",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td>Size</td><td>Baseline (%)</td><td colspan=\"3\">POMDP (%) Total Reason-1 Reason-2</td></tr><tr><td>50</td><td>13.8</td><td>8.2</td><td>4.2</td><td>4.0</td></tr><tr><td>100</td><td>17.7</td><td>2.7</td><td>1.2</td><td>1.5</td></tr><tr><td>150</td><td>19.3</td><td>4.7</td><td>0.7</td><td>4.0</td></tr></table>"
},
"TABREF8": {
"text": "Dialog Failure Rate on Test Set",
"num": null,
"type_str": "table",
"html": null,
"content": "<table/>"
}
}
}
}