|
{ |
|
"paper_id": "U05-1032", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:08:23.628572Z" |
|
}, |
|
"title": "Combining Confidence Scores with Contextual Features for Robust Multi-Device Dialogue *", |
|
"authors": [ |
|
{ |
|
"first": "Lawrence", |
|
"middle": [], |
|
"last": "Cavedon", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Victoria Research Lab and CS&IT", |
|
"institution": "RMIT University Melbourne VIC", |
|
"location": { |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Matthew", |
|
"middle": [], |
|
"last": "Purver", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stanford University Cordura Hall", |
|
"location": { |
|
"addrLine": "210 Panama St. Stanford", |
|
"postCode": "94305", |
|
"region": "CA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Florin", |
|
"middle": [], |
|
"last": "Ratiu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Stanford University Cordura Hall", |
|
"location": { |
|
"addrLine": "210 Panama St. Stanford", |
|
"postCode": "94305", |
|
"region": "CA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We present an approach to multi-device dialogue that evaluates and selects amongst candidate dialogue moves based on features at multiple levels. Multiple sources of information can be combined, multiple speech recognition and parsing hypotheses tested, and multiple devices and moves considered to choose the highest scoring hypothesis overall. The approach has the added benefit of potentially reordering n-best lists of inputs, effectively correcting errors in speech recognition or parsing. A current application includes conversational interaction with a collection of in-car devices.", |
|
"pdf_parse": { |
|
"paper_id": "U05-1032", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We present an approach to multi-device dialogue that evaluates and selects amongst candidate dialogue moves based on features at multiple levels. Multiple sources of information can be combined, multiple speech recognition and parsing hypotheses tested, and multiple devices and moves considered to choose the highest scoring hypothesis overall. The approach has the added benefit of potentially reordering n-best lists of inputs, effectively correcting errors in speech recognition or parsing. A current application includes conversational interaction with a collection of in-car devices.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In this paper, we describe recent enhancements to the CSLI Dialogue Manager (CDM) infrastructure to increase robustness, in particular in (but not exclusive to) multi-device settings. Dialogue contributions may be processed using multiple information sources (e.g. deep syntactic parsing and shallow topic classification), scored at multiple levels (e.g. acoustic, semantic and context-based), and bid for by multiple agents, with the overall highest-confidence bid chosen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The CDM provides a multi-device infrastructure, with customization to new applications and addition of plug-and-play devices eased by a declarative dialogue-move scripting language (Mirkovic and Cavedon, 2005) . However, deciding which device an utterance is directed at is not always straightforward. One of our current application areas is a conversational interface to in-car devices, including entertainment, restaurant recommendation, navigation and telematic systems (Weng et al., 2004) ; in such an environment, a request such as \"Play X\" might be directed at an MP3 player or a DVD player. Eye-gaze (useful in multi-human dialogue) is not available, and we cannot rely on explicit device naming. One option is to use the resolution of NP arguments as disambiguating information (in our \"Play X\" example, whether X is a song or a movie). However, the NPresolution process itself is often device-specific (see below), preventing NPs from being properly resolved until device has been determined.", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 209, |
|
"text": "(Mirkovic and Cavedon, 2005)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 492, |
|
"text": "(Weng et al., 2004)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our proposed solution, inspired by approaches to multi-agent task allocation such as Contract Net (Smith, 1980) , is to allow all devices to perform shallow processing of the incoming utterance, each producing multiple possible candidate dialogue moves. Potential device-move combinations are then scored against a number of features, including speechrecognition and parse confidence, discourse context, current device-under-discussion, and NP argument analysis. The device associated with the highest-scoring dialogue move is given first option to process the utterance. A disambiguation question may be generated if no device is a clear winner, or a confirmation question if the winning bid is not scored high enough.", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 111, |
|
"text": "(Smith, 1980)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Device choice, move choice, and selection of best ASR/parser hypothesis are thereby made simultaneously, rather than being treated as independent processes. As well as allowing for principled device identification, this has the benefit of scoring hypotheses on the basis of multiple information sources, including context. The highest scoring result overall may not correspond to the highest-confidence result from the ASR or parser n-best list alone, but n-best lists are effectively re-ordered based on device and dialogue context, allowing parsing errors such as incorrect PP-attachment to be automatically corrected. Confirmation and clarification behaviour can also be governed not only by ASR or parse confidence, but by the overall score. Rayner et al. (1994) combine speech recognition confidence scores with various intra-utterance linguistic features to re-order n-best hypotheses; Chotimongkol and Rudnicky (2001) also include move bigram statistics. Walker et al. (2000) use similar feature combination to identify misrecognised utterances. More recently, Gabsdil and Lemon (2004) also include pragmatic information such as NP resolution, and simultaneously choose from an n-best list while identifying misrecognition. They also divide misrecognised utterances into two overall confidence ranges, one for outright rejection and one for confirmation/clarification. Similarly Gabsdil and Bos (2003) combine acoustic confidences with semantic information, and Schlangen (2004) with bridging reference resolution, in order to allow clarification on an integrated basis. All of these approaches assume a single-device setting and hence no ambiguity of move type once the correct word string or parse has been identified. Here we extend these approaches to allow a principled choice of move/device pairing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 746, |
|
"end": 766, |
|
"text": "Rayner et al. (1994)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 892, |
|
"end": 924, |
|
"text": "Chotimongkol and Rudnicky (2001)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 962, |
|
"end": 982, |
|
"text": "Walker et al. (2000)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1068, |
|
"end": 1092, |
|
"text": "Gabsdil and Lemon (2004)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1386, |
|
"end": 1408, |
|
"text": "Gabsdil and Bos (2003)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1469, |
|
"end": 1485, |
|
"text": "Schlangen (2004)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our focus is on activity-oriented dialogue, discussing tasks or activities that are jointly performed by a human and one or more intelligent devices or agents. By \"joint activity\", we mean that the human participates in specifying the activity, clarifying requests, interpreting observations, and otherwise supporting the agent in the performance of the activity. Systems engaging in such dialogue characteristically require deep knowledge about the task domain and the devices/agents they provide access to, in order to know what information is critical to the tasks, and know what information about task performance is appropriate to provide to the user. CSLI has been developing activityoriented dialogue systems for a number of years, for applications such as multimodal control of robotic devices (Lemon et al., 2002) , speechenabled tutoring systems (Fry et al., 2001) , and conversational interaction with in-car devices (Weng et al., 2004 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 802, |
|
"end": 822, |
|
"text": "(Lemon et al., 2002)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 856, |
|
"end": 874, |
|
"text": "(Fry et al., 2001)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 928, |
|
"end": 946, |
|
"text": "(Weng et al., 2004", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Manager Architecture", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The dialogue system architecture ( Figure 1 ) centers around the CSLI Dialogue Manager, which can be used with various different external components: speech-recognizer, NL parser, NL generation, speech-synthesizer, as well as connections to external application-specific components such as ontologies or knowledge bases, and the dialogue-enabled devices themselves. Clean interfaces and representationneutral processes enable the CDM to be used relatively seamlessly with different NL components, while interaction with external devices is mediated by Activity Models, declarative specifications of device capabilities and their relationships to linguistic processes.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 43, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dialogue Manager Architecture", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The CDM uses the information-state update (ISU) approach to dialogue management (Larsson and Traum, 2000) . The ISU approach extends the more traditional finite-state-based approaches used for simple dialogues (in which dialogue context is represented as one of a finite number of states, and each dialogue move results in a state transition), maintaining a richer representation of information-state. This includes the dialogue context as well as e.g. device and activity status, together with a set of update rules defining the effect of dialogue moves on the state (e.g. adding new information and referents for anaphora resolution, and triggering new tasks, activities and system responses). This approach allows more complex dialogue types with advanced strategies for context-dependent utterance interpretation (including fragments and revisions), NP resolution, issue tracking and improved speech-recognizer performance (Lemon and Gruenstein, 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 105, |
|
"text": "(Larsson and Traum, 2000)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 927, |
|
"end": 955, |
|
"text": "(Lemon and Gruenstein, 2004)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Manager Architecture", |
|
"sec_num": "2.1" |
|
}, |
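{

"text": "As a concrete illustration of the update-rule idea, the following minimal sketch is written in Java (the CDM's implementation language); the class and method names here are our own illustrative assumptions, not the CDM's actual API.\n\nimport java.util.ArrayDeque;\nimport java.util.Deque;\n\npublic class IsuSketch {\n  enum MoveType { WH_QUESTION, WH_ANSWER }\n  record Move(MoveType type, String content) {}\n\n  // A deliberately tiny information state: open issues plus a salience list.\n  static class InformationState {\n    Deque<Move> openIssues = new ArrayDeque<>();\n    Deque<String> salienceList = new ArrayDeque<>(); // referents for anaphora\n  }\n\n  interface UpdateRule {\n    boolean applies(InformationState is, Move m); // precondition\n    void apply(InformationState is, Move m);      // effect on the state\n  }\n\n  // Example rule: a WhAnswer resolves the most recent open WhQuestion\n  // and makes its referent salient for later anaphora resolution.\n  static final UpdateRule WH_ANSWER_RULE = new UpdateRule() {\n    public boolean applies(InformationState is, Move m) {\n      return m.type() == MoveType.WH_ANSWER\n          && !is.openIssues.isEmpty()\n          && is.openIssues.peek().type() == MoveType.WH_QUESTION;\n    }\n    public void apply(InformationState is, Move m) {\n      is.openIssues.pop();               // the question is now answered\n      is.salienceList.push(m.content()); // its referent becomes salient\n    }\n  };\n\n  public static void main(String[] args) {\n    InformationState is = new InformationState();\n    is.openIssues.push(new Move(MoveType.WH_QUESTION, \"which song?\"));\n    Move answer = new Move(MoveType.WH_ANSWER, \"Believe by Cher\");\n    if (WH_ANSWER_RULE.applies(is, answer)) WH_ANSWER_RULE.apply(is, answer);\n    System.out.println(is.salienceList); // prints [Believe by Cher]\n  }\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dialogue Manager Architecture",

"sec_num": "2.1"

},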
|
{ |
|
"text": "Generic ISU toolkits (e.g. TrindiKit (Traum et al., 1999) , DIPPER ) provide general data structures for representing state and a language for specifying update rules, but the specific state and rules used are left to the individual application. The CDM is a specific implementation of an ISU dialoguemanagement system, providing data structures and processes for update specifically designed as suitable to activity-oriented dialogue, but adaptible to different applications and domains.", |
|
"cite_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 57, |
|
"text": "(Traum et al., 1999)", |
|
"ref_id": "BIBREF14" |
|
},

{

"start": 66,

"end": 84,

"text": "(Bos et al., 2003)",

"ref_id": "BIBREF0"

}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The CSLI Dialogue Manager", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The two central components of the CDM information state are the Dialogue Move Tree (DMT) and the Activity Tree. The DMT represents the dialogue context and history, with each dialogue move represented as a node in the tree, and incoming moves interpreted in context by attachment to an appropriate open parent node (for example, WhAnswer moves attach to their corresponding WhQuestion nodes). This tree structure specifically supports multithreaded, multi-topic conversations (Lemon et The DMT also serves as context for interpreting fragments, multi-utterance constructs, and revisions, and provides discourse structure for tasks such as NP-resolution. In tandem, the Activity Tree manages the underlying activities, fully instantiating new activities via their Activity Models (e.g. resolving NP referents or spawning subdialogues to fill missing arguments), editing existing ones as a result of revisions or corrections, and monitoring their execution (possibly generating system moves notifying completion or failure).", |
|
"cite_spans": [ |
|
{ |
|
"start": 476, |
|
"end": 485, |
|
"text": "(Lemon et", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The CSLI Dialogue Manager", |
|
"sec_num": "2.2" |
|
}, |
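{

"text": "The attachment behaviour just described can be sketched as follows (a simplification with our own names; real DMT nodes carry much more structure): an incoming move is offered to the open branches in recency order, and falls back to the root, opening a new conversation thread, if no branch accepts it.\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.function.BiPredicate;\n\npublic class DmtSketch {\n  static class Node {\n    final String moveType;\n    final List<Node> children = new ArrayList<>();\n    boolean open = true;\n    Node(String moveType) { this.moveType = moveType; }\n  }\n\n  // canAttach encodes pairings such as WhAnswer -> WhQuestion, e.g.\n  // (p, m) -> p.moveType.equals(\"WhQuestion\") && m.moveType.equals(\"WhAnswer\")\n  static Node attach(List<Node> activeNodes, Node root, Node move,\n                     BiPredicate<Node, Node> canAttach) {\n    for (Node parent : activeNodes) { // open branches, most recently active first\n      if (parent.open && canAttach.test(parent, move)) {\n        parent.children.add(move);    // continue (or resume) that thread\n        return parent;\n      }\n    }\n    root.children.add(move);          // otherwise open a new conversation thread\n    return root;\n  }\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The CSLI Dialogue Manager",

"sec_num": "2.2"

},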
|
{ |
|
"text": "Other data structures that are part of the information state include: the salience list (NPs and their referents for anaphora resolution); multimodal input buffers (semantic interpretations of GUI events); and the system agenda (potential system outputs scheduled by the dialogue manager). See (Lemon et al., 2002) for details.", |
|
"cite_spans": [ |
|
{ |
|
"start": 294, |
|
"end": 314, |
|
"text": "(Lemon et al., 2002)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The CSLI Dialogue Manager", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In early versions of the CDM, dialogue moves were coded completely programmatically (in Java). While libraries of general-purpose dialogue moves (e.g.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Scripting", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Command, WhQuestion, etc.) were re-used wherever possible, customization to new domains generally required significant programming effort in defining both new dialogue moves and their effects, and processes such as reference resolution. More recently, Mirkovic and Cavedon (2005) 4. dialogue move-specific specification of output to be generated, for disambiguation or requests for required information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 279, |
|
"text": "Mirkovic and Cavedon (2005)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Scripting", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Listing 1 shows the skeleton of a sample dialogue-move script for a play Command move for an MP3 player. The specific syntax of the Input and Output fields can be ignored for now: they simply match the interfaces of the parser and generator respectively. Variables in the dialogue move script correspond to variables in the Activity Model (AM) for the corresponding device. The AM for the MP3 device contains \u00a7 \u00a4 User Command : play { // inherits from generic Command dialogue move Description \" play something \" Input { // templates for matching parser output // full parse match : ' ' play / start X '' 1.0 SYN { s ( features ( mood ( imperative )) , predicate (# play / vb |# start / vb ) , ? arglist ( obj : _playable -object ,? sbj :*)) } // full parse match : ' ' I want to play / hear X '' 1. (Mirkovic and Cavedon, 2005) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 800, |
|
"end": 828, |
|
"text": "(Mirkovic and Cavedon, 2005)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Scripting", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "The CDM has also been extended to multidevice dialogue, with the scripting approach allowing easy dynamic plug-and-play specification of new \"dialogue-enabled\" devices. Note that this does not constitute multi-party dialogue: interaction is still mediated by a single dialogue manager, between a user and a Device Manager with which devices register themselves. However, the plug-and-play requirement (necessitated by the in-car application (Weng et al., 2004) ) has resulted in important extensions to the dialogue management infrastructure. Mirkovic and Cavedon (2005) describe a framework for encapsulating devices with information required to \"dialogue-enable\" them. Each device has associated with it the following components:", |
|
"cite_spans": [ |
|
{ |
|
"start": 441, |
|
"end": 460, |
|
"text": "(Weng et al., 2004)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 543, |
|
"end": 570, |
|
"text": "Mirkovic and Cavedon (2005)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Device Dialogue", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "1. a set of dialogue-move scripts;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Device Dialogue", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "2. an Activity Model describing any device functionality accessible by dialogue;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Device Dialogue", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "3. a device-specific ontology and knowledge base (KB); 4. rules for device-specific NP-resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Device Dialogue", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Any significantly different forms of interaction requiring device-specific dialogue management processes must still be specified as new Java classes (referred to as DM process extensions in Figure 1 ), but in general the above four components contain the device-specific information required for dialogue-enabling new devices.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 198, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-Device Dialogue", |
|
"sec_num": "2.4" |
|
}, |
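{

"text": "Conceptually, these four components form a single descriptor that a device hands over when it is dialogue-enabled. The sketch below groups them into one Java record; all names are hypothetical placeholders for the structures described above.\n\nimport java.util.List;\nimport java.util.Map;\n\npublic class DeviceSketch {\n  // Placeholder stand-ins for the four per-device components.\n  record DialogueMoveScript(String moveType, List<String> inputPatterns) {}\n  record ActivityModel(Map<String, String> activities) {} // activity name -> signature\n  interface KnowledgeBase { List<String> query(Map<String, String> constraints); }\n  interface NpResolver { List<String> resolve(String np, KnowledgeBase kb); }\n\n  // A dialogue-enabled device bundles scripts, an Activity Model, an\n  // ontology-backed KB, and device-specific NP-resolution rules.\n  record Device(String id,\n                List<DialogueMoveScript> scripts,\n                ActivityModel activityModel,\n                KnowledgeBase kb,\n                NpResolver npResolver) {}\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multi-Device Dialogue",

"sec_num": "2.4"

},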
|
{ |
|
"text": "Note that NP resolution rules are included in the device definition; while pronoun resolution tends to be domain-independent, resolving definite descriptions and demonstratives is often device-dependent, and resolving named referents often requires constructing appropriate queries to a device-specific knowledge-base.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Device Dialogue", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "Devices can now be added dynamically to the DMT, registering themselves with the Device Manager and becoming associated with their own nodes to which new conversation threads can attach; \"current device\" becomes part of the information-state and interpreting incoming utterances is performed in this context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Device Dialogue", |
|
"sec_num": "2.4" |
|
}, |
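{

"text": "A minimal sketch of the registration step (again with hypothetical names): each registered device receives its own DMT node, and the device most recently under discussion is kept in the information state as the current device.\n\nimport java.util.ArrayDeque;\nimport java.util.Deque;\nimport java.util.HashMap;\nimport java.util.Map;\n\npublic class DeviceManagerSketch {\n  final Map<String, Object> deviceNodes = new HashMap<>(); // device id -> DMT node\n  final Deque<String> deviceFocus = new ArrayDeque<>();    // most recent device first\n\n  // Plug-and-play: a newly registered device gets its own node on the DMT,\n  // to which new conversation threads for that device can attach.\n  void register(String deviceId, Object dmtNode) {\n    deviceNodes.put(deviceId, dmtNode);\n  }\n\n  // The current device is part of the information state and provides the\n  // context in which incoming utterances are interpreted.\n  void setCurrentDevice(String deviceId) {\n    deviceFocus.remove(deviceId);\n    deviceFocus.push(deviceId);\n  }\n\n  String currentDevice() { return deviceFocus.peek(); }\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multi-Device Dialogue",

"sec_num": "2.4"

},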
|
{ |
|
"text": "In this context, device selection-determining which device an utterance is associated withbecomes a further complication: an utterance may (on the surface) be potentially applicable to multiple devices: e.g. \"Play X\" could be applicable to either an MP3 player or a DVD player. Our original proposal was to create a dialogue move consistent with each such device and then score its applicability based on other factors, e.g. ability to resolve the object reference (the MP3 player would resolve a songname, the DVD player a movie name). The rest of the paper generalises this approach to a wider range of possible disambiguities, involving a greater number of scoring features, and results in more interesting behaviours than simple device-disambiguation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-Device Dialogue", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "The first new extension to the CDM described here is the use of multiple information sources in parallel to classify dialogue move type and produce an activity-specific representation. In most systems (and previous incarnations of the CDM) a single interpretation mechanism is chosen which is best suited to the application at hand, be it e.g. an open-domain statistical parser, a domain-specific constraint-based grammar, or keyword-spotting techniques. We extend this approach here to allow arbitrary multiple interpretation mechanisms, each producing its own (independent) interpretation hypothesis and associated confidence. In the current application, we use both a statistical parser producing relatively deep dependency structures, and a shallow maximum-entropy-based topic classifier.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple Interpretation Methods", |
|
"sec_num": "3" |
|
}, |
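{

"text": "Each interpretation mechanism can thus be viewed as a function from a recognised string to an interpretation carrying its own confidence. The following sketch (our own naming assumptions, not the CDM's API) shows the two mechanisms of the current application as interchangeable sources.\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.function.Function;\n\npublic class InterpretSketch {\n  enum Source { SYN, TOPIC } // deep statistical parse vs. shallow topic classification\n  record Interpretation(Source source, String content, double confidence) {}\n\n  // Run all interpreters over one ASR hypothesis; each produces an\n  // independent hypothesis (or none, if it fails to apply).\n  static List<Interpretation> interpretAll(\n      String utterance, List<Function<String, Interpretation>> interpreters) {\n    List<Interpretation> hypotheses = new ArrayList<>();\n    for (Function<String, Interpretation> interpret : interpreters) {\n      Interpretation i = interpret.apply(utterance);\n      if (i != null) hypotheses.add(i);\n    }\n    return hypotheses;\n  }\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multiple Interpretation Methods",

"sec_num": "3"

},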
|
{ |
|
"text": "Dialogue move scripts, such as the one sketched in Listing 1, are used to construct instantiations of candidate dialogue moves for a device, based on incoming user utterances (and planned system outputs, although we focus on the former here). This is governed by the Input field for each move type, which specifies a set of patterns: when an utterance representation matches an Input pattern, a candidate node of the appropriate type can be created. As Listing 1 shows, patterns can now be defined in terms of interpretation method as well as the interpreted form itself: SYN patterns match the output of the statistical parser, TOPIC patterns match the output of the topic classifier, while AND patterns match combinations of the two. Further general pattern types are available (e.g. LF for semantic logical forms, STRING for surface string keyword-matching) but are not used in the current application.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple Interpretation Methods", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Each pattern is associated with a weight, used in the overall move scoring function described in Section 4 below. This allows moves cre-ated from matches against deep structure to be scored highly (e.g. SYN patterns in which predicate and arguments are specified and matched against), shallow matches to be scored low (e.g. simple TOPIC matches), and combined matches to have intermediate scores (e.g. a combination of an appropriate TOPIC classification with a SYN parser output containing a suitable NP argument pattern). Depending on other elements of the scoring function (e.g. the ASR confidence associated with the hypothesised string being tested) and on competing move hypotheses, low scores may lead to clarification being required (and therefore clarification will be more likely when only low-scoring (shallow) patterns are matched). Behaviour can therefore be made more robust: when deep parsing fails, a shallow hypothesis can be used instead (clarifying/confirming this specific hypothesis as necessary depending on its confidence) rather than resorting to a rejection or general clarification. Scores are currently set manually and determined by testing on sample dialogues; future work will examine learning them from data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple Interpretation Methods", |
|
"sec_num": "3" |
|
}, |
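{

"text": "The role of these weights in the move score can be sketched as follows; the numeric values are invented purely for illustration (in the system they are set manually and tuned on sample dialogues, as noted above).\n\npublic class PatternWeightSketch {\n  enum PatternType { SYN, TOPIC, AND }\n\n  // Deep matches score high, shallow matches low, combinations in between.\n  static double patternWeight(PatternType t) {\n    switch (t) {\n      case SYN:   return 1.0; // full predicate-argument parse match\n      case AND:   return 0.7; // e.g. TOPIC match plus an NP argument pattern\n      case TOPIC: return 0.4; // shallow classification only\n      default:    return 0.0;\n    }\n  }\n\n  // One factor of the overall move score: the pattern weight scaled by the ASR\n  // confidence of the hypothesised string. Low values make confirmation or\n  // clarification more likely downstream.\n  static double matchScore(PatternType t, double asrConfidence) {\n    return patternWeight(t) * asrConfidence;\n  }\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multiple Interpretation Methods",

"sec_num": "3"

},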
|
{ |
|
"text": "In the general case, multiple possible candidate dialogue moves will be produced for a given user utterance, for a number of reasons:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "1. multiple hypotheses from ASR/parser output;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "2. multiple interpretation methods (deep parsing vs. shallow classification);", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "3. multiple possible move types for a candidate interpretation;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "4. multiple antecedent nodes (active dialogue threads), including multiple devices, for a particular move type.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Selection", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "These are not independent: it is important to consider all factors simultaneously, to allow an integrated scoring function for each candidate and thus consider the best overall. The skeleton algorithm for instantiating and selecting a dialogue move is therefore as follows: The interesting aspect of the above process is the scoring function. Dialogue-move candidates are scored using a number of weighted features, ranging from speech-recognizer confidence, through to pragmatic features such as the \"device in focus\" and recency of the DMT node the candidate would attach to. The full list of features currently considered is shown in Table 1 . Note the inclusion of features at many levels, from acoustic recognition confidences through syntactic parse confidence to semantic and pragmatic features.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 637, |
|
"end": 644, |
|
"text": "Table 1", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dialogue Move Selection", |
|
"sec_num": "4" |
|
}, |
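{

"text": "The selection loop can be sketched as follows, under our own simplifying assumptions (all names are hypothetical): device, move type, attachment node and n-best entry are chosen together, by scoring every candidate that actually matches. Only matching combinations are instantiated, so nothing like the full cross-product of candidates is created.\n\nimport java.util.ArrayList;\nimport java.util.Comparator;\nimport java.util.List;\n\npublic class MoveSelectionSketch {\n  record NBestEntry(String words, double asrConfidence) {}\n  record Candidate(String device, String moveType, NBestEntry input, double score) {}\n\n  interface ScriptEntry { // one Input pattern of one device's move script\n    String device();\n    String moveType();\n    boolean matches(NBestEntry entry, String openNode);\n    double score(NBestEntry entry, String openNode); // weighted feature combination\n  }\n\n  static Candidate select(List<NBestEntry> nBest,\n                          List<ScriptEntry> scripts,\n                          List<String> openNodes) {\n    List<Candidate> candidates = new ArrayList<>();\n    for (NBestEntry entry : nBest)                // every ASR/parse hypothesis\n      for (ScriptEntry s : scripts)               // every device's script patterns\n        for (String node : openNodes)             // every open DMT attachment point\n          if (s.matches(entry, node))             // only matching combinations survive\n            candidates.add(new Candidate(s.device(), s.moveType(), entry,\n                                         s.score(entry, node)));\n    return candidates.stream()\n                     .max(Comparator.comparingDouble(Candidate::score))\n                     .orElse(null);               // no match at all: reject\n  }\n}",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dialogue Move Selection",

"sec_num": "4"

},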
|
{ |
|
"text": "This integrated scoring mechanism therefore allows n-best list input to be re-ordered: dialoguemove candidates are potentially instantiated for each n-best list entry and the highest-scoring candidate chosen. While the n-best list rank and confidence are factors in the overall score, other features may outweigh them, resulting in an initially lower-ranked n-best entry becoming the highest-scoring dialogue move.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reordering n-best candidates", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Evaluation so far has been limited to initial testing on a manually constructed set of test inputs, using only a subset of the features: those shown italicised in Table 1 are not currently available due to either implementational issues (for full domain referent resolution and KB queries) or lack of domain data (for move bigram frequencies). Our test set includes 400 sentences, of which 300 have been used in training the statistical parser and 100 are unseen variations; it currently covers only utterances related to a single device (a restaurant recommendation system) and does not include speech recognition hypotheses (we are therefore testing parse n-best reordering only). We are currently working towards evaluation on a full set of features, with user-generated multi-device speech input.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 170, |
|
"text": "Table 1", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reordering n-best candidates", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "However, even with the restricted set of features, preliminary testing on this set shows encouraging results: the percentage of sentences for which the correct parse is chosen increases from 90% to 94%, a 41% reduction in error with several common parse errors being corrected. One example is incorrect PPattachment (a notoriously difficult challenge for statistical parsers). The example below (from a restaurant recommendation scenario), shows the top two n-best list entries for a sentence as produced by our statistical parser: \u00a7 \u00a4 Here, the second is lower-ranked but correct, taking both PPs as modifying restaurant, while the first treats only one as modifying restaurant, one as a sentential modifier. As the second allows two database-query constraints to be filled (city and street name), and the first just one, this boosts its overall score enough to overcome its lower parse confidence, and it is selected and used in DMT attachment. Similar improvements are gained with nominal modifiers: \u00a7 \u00a4 Here the second is correct, treating cheap and chinese as both independently modifying restaurant; the first takes cheap as modifying chinese, and the third takes cheap chinese as a single multi-word unit. Again, as the second fills two database-query constraints (price level and cuisine type), its overall score becomes highest. Evaluation of the improvement achieved is currently in progress.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Reordering n-best candidates", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The scoring function for feature combination is currently manually defined. When comparing between candidate moves of the same type, this is relatively straightforward, although hardly trivial and inherently done to a high extent by subjective expertise. However, it becomes much less straightforward when comparing candidates of different types, as some move types and some DMT attachment contexts will allow only a subset of the features to have meaningful values. However, comparison between move types is essential, as two ASR hypotheses with Recognition features: recognition and parse probabilities; recognition and parse n-best ranks; Semantic features: topic classification for the parse (with score); for dialogue moves spawning activities: -number of slots filled by input pattern; -number of resolved/unresolved slots after NP resolution; -number of ambiguously resolved slots after NP resolution; for queries about database objects:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Move type comparison", |
|
"sec_num": "4.2" |
|
}, |
|
|
{ |
|
"text": "-DMT attachments -pairs of child and parent node types; -pairs of chronologically consecutive user nodes. We are therefore currently investigating the use of machine learning techniques to improve on our current manual definitions. With annotated data the optimal weights of a scoring function that combines all the features can be automatically learned (see (Gabsdil and Lemon, 2004) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 384, |
|
"text": "(Gabsdil and Lemon, 2004)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Move type comparison", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "In order for a winning bid to be unambiguously accepted, its score must exceed the next highest score by more than a predefined threshold. If not, we take the choice of winning bid to be within our margin of error, and the dialogue manager asks a disambiguating clarification question. For example, if the pair of sentences in the previous section result in hypothesis dialogue moves with scores within the margin of error, then the dialogue manager generates a question of the form: \"Did you want to play a rock song by Cher or did you ask about rock songs?\" Alternatively, in some cases there may be a clear highest-scoring bid (i.e. one of high relative confidence) which is itself of low absolute confidence. In such cases, rather than act on the move unconditionally we ask the user for clarification. If the score is below a certain confidence threshold T 1 we treat the highest bid as a reasonable hypothesis, but ask for confirmation of the intended move; following the previous example, this would result in a question such as: \"I'm not sure I understood that. Did you want to play a rock song by Cher?\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue-move disambiguation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "If the score is below a second critical minimum threshold T 2 we take this as a failure in interpretation, and prompt for general clarification. As even the best hypothesised move is likely to be incorrect in this case (being of such low confidence), asking for specific confirmation is likely to be counter-productive or annoying (see e.g. (San-Segundo et al., 2001) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 341, |
|
"end": 367, |
|
"text": "(San-Segundo et al., 2001)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue-move disambiguation", |
|
"sec_num": "4.3" |
|
}, |
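{

"text": "Putting the three cases together, the outcome of selection can be sketched as a small decision function; T1, T2 and the margin are the thresholds described above, and the decision names are our own.\n\npublic class ClarificationSketch {\n  enum Decision { ACCEPT, DISAMBIGUATE, CONFIRM, REJECT }\n\n  // best and runnerUp are the two highest candidate scores; margin is the\n  // minimum winning distance; t2 < t1 are the confidence thresholds.\n  static Decision decide(double best, double runnerUp,\n                         double margin, double t1, double t2) {\n    if (best < t2) return Decision.REJECT;                      // general clarification\n    if (best - runnerUp < margin) return Decision.DISAMBIGUATE; // no clear winner\n    if (best < t1) return Decision.CONFIRM;                     // clear but low confidence\n    return Decision.ACCEPT;                                     // unambiguous winning bid\n  }\n}\n\nFor example, decide(0.82, 0.80, 0.05, 0.6, 0.3) yields DISAMBIGUATE, triggering a question like the first one above, while decide(0.5, 0.2, 0.05, 0.6, 0.3) yields CONFIRM.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dialogue-move disambiguation",

"sec_num": "4.3"

},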
|
{ |
|
"text": "Threshold values are currently specified as part of dialogue-move definitions; a future direction is to automatically learn optimal values for the thresholds.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dialogue-move disambiguation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We have described a number of strategies implemented in the CSLI Dialogue Manager to more robustly handle ambiguous or misunderstood utterances, and low-confidence interpretations. Features from multiple sources of evidence are combined to rate the possible dialogue move candidates as interpretations of a user utterance. Features include confidence scores from ASR and parser, as well as semantic and pragmatic criteria, and measures related to the dialogue context itself. As well as selecting dialogue move, in our multi-device setting the approach has the benefit of selecting the device being addressed. Although we have not yet performed a full evaluation of the ef-ficacy of this approach, we have observed several examples of the n-best list of inputs being (correctly) re-ordered-i.e. after misclassification by the statistical parser, the candidate dialogue-move corresponding to the correct (though lower-confidence) parse can still be selected. We are currently gathering data in order to provide a concrete evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Confidence thresholds (upper and lower bounds) set by the dialogue designer specify the levels at which a candidate move is rejected, requires explicit confirmation by the user, or simply accepted. Future work includes automatically learning optimal values for these thresholds and optimal weights on the features for scoring candidate dialogue-moves, applying the techniques of e.g. Gabsdil and Lemon (2004) to our multi-device setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 384, |
|
"end": 408, |
|
"text": "Gabsdil and Lemon (2004)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion and Conclusions", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Note that we will not create O \u00d7 N \u00d7 M candidates: only a subset of script entries (if any) will match for each node and n-best entry.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "DIPPER: Description and formalization of an information-state update dialogue system architecture", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Oka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 4th SIGdial Workshop on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Bos, E. Klein, O. Lemon, and T. Oka. 2003. DIPPER: Description and formalization of an information-state update dialogue system ar- chitecture. In Proceedings of the 4th SIGdial Workshop on Discourse and Dialogue.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Nbest speech hypotheses reordering using linear regression", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Chotimongkol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Rudnicky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the 7th European Conference on Speech Communication and Technology (EUROSPEECH)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Chotimongkol and A. Rudnicky. 2001. N- best speech hypotheses reordering using lin- ear regression. In Proceedings of the 7th Eu- ropean Conference on Speech Communication and Technology (EUROSPEECH).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Automated tutoring dialogues for training in shipboard damage control", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Fry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ginzton", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Pon-Barry", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. 2nd SIGdial Workshop on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. Fry, M. Ginzton, S. Peters, B. Clark, and H. Pon-Barry. 2001. Automated tutoring di- alogues for training in shipboard damage con- trol. In Proc. 2nd SIGdial Workshop on Dis- course and Dialogue.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Combining acoustic confidence scores with deep semantic analysis for clarification dialogues", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gabsdil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proc. 5th International Workshop on Computational Semantics (IWCS-5)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Gabsdil and J. Bos. 2003. Combining acous- tic confidence scores with deep semantic anal- ysis for clarification dialogues. In Proc. 5th International Workshop on Computational Semantics (IWCS-5).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Combining acoustic and pragmatic features to predict recognition performance in spoken dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Gabsdil", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. 42nd Annual Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Gabsdil and O. Lemon. 2004. Combining acoustic and pragmatic features to predict recognition performance in spoken dialogue systems. In Proc. 42nd Annual Meeting of the ACL.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Information state and dialogue management in the TRINDI dialogue move engine toolkit", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Larsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Natural Language Engineering", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Larsson and D. Traum. 2000. Informa- tion state and dialogue management in the TRINDI dialogue move engine toolkit. Natu- ral Language Engineering, 6.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Multithreaded context for robust conversational interfaces: Context-sensitive speech recognition and interpretation of corrective fragments", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gruenstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "O. Lemon and A. Gruenstein. 2004. Multi- threaded context for robust conversational in- terfaces: Context-sensitive speech recognition and interpretation of corrective fragments.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Collaborative activities and multi-tasking in dialogue systems", |
|
"authors": [ |
|
{ |
|
"first": "O", |
|
"middle": [], |
|
"last": "Lemon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Gruenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Traitement Automatique des Langues", |
|
"volume": "", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "O. Lemon, A. Gruenstein, and S. Peters. 2002. Collaborative activities and multi-tasking in dialogue systems. Traitement Automatique des Langues, 43(2).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Practical plug-and-play dialogue management", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Mirkovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Cavedon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the Annual Meeting of the Pacific Association of Computational Linguistics (PACLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Mirkovic and L. Cavedon. 2005. Practical plug-and-play dialogue management. In Pro- ceedings of the Annual Meeting of the Pa- cific Association of Computational Linguis- tics (PACLING).", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Combining knowledge sources to reorder n-best speech hypothesis lists", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Rayner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Carter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Digalakis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Price", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the ARPA Human Language Technology Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Rayner, D. Carter, V. Digalakis, and P. Price. 1994. Combining knowledge sources to reorder n-best speech hypothesis lists. In Proceedings of the ARPA Human Language Technology Workshop.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Designing confirmation mechanisms and error recover techniques in a railway information system for spanish", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "San-Segundo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Montero", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Ferreiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "C\u00f3rdoba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Pardo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. 2nd SIGdial Workshop on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. San-Segundo, J. M. Montero, J. Ferreiros, R. C\u00f3rdoba, and J. M. Pardo. 2001. Design- ing confirmation mechanisms and error re- cover techniques in a railway information sys- tem for spanish. In Proc. 2nd SIGdial Work- shop on Discourse and Dialogue.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Causes and strategies for requesting clarification in dialogue", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Schlangen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. 5th SIGdial Workshop on Discourse and Dialogue", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Schlangen. 2004. Causes and strategies for requesting clarification in dialogue. In Proc. 5th SIGdial Workshop on Discourse and Di- alogue.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "The contract net protocol: High level communication and control in a distributed problem solver", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "IEEE Transactions on Computers, C", |
|
"volume": "29", |
|
"issue": "12", |
|
"pages": "1104--1113", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R. G. Smith. 1980. The contract net protocol: High level communication and control in a distributed problem solver. IEEE Transac- tions on Computers, C-29(12):1104-1113.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A model of dialogue moves and information state revision", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Traum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Bos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Cooper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Larsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Lewin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Matheson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Poesio", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Task Oriented Instructional Dialogue (TRINDI): Deliverable 2.1. University of", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "D. Traum, J. Bos, R. Cooper, S. Larsson, I. Lewin, C. Matheson, and M. Poesio. 1999. A model of dialogue moves and information state revision. In Task Oriented Instructional Dialogue (TRINDI): Deliverable 2.1. Univer- sity of Gothenburg.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Using natural language processing and discourse features to identify understanding errors in a spoken dialogue system", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Walker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Wright", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Langkilde", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 17th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Walker, J. Wright, and I. Langkilde. 2000. Using natural language processing and dis- course features to identify understanding er- rors in a spoken dialogue system. In Proceed- ings of the 17th International Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A conversational dialogue system for cognitively overloaded users", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Weng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Cavedon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Raghunathan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Mirkovic", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Cheng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Bratt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mishra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Upson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Shriberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Bergmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proc. 8th International Conference on Spoken Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Weng, L. Cavedon, B. Raghunathan, D. Mirkovic, H. Cheng, H. Schmidt, H. Bratt, R. Mishra, S. Peters, L. Zhao, S. Upson, E. Shriberg, and C. Bergmann. 2004. A conversational dialogue system for cognitively overloaded users. In Proc. 8th International Conference on Spoken Language Processing (INTERSPEECH).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Dialogue System Architecture al., 2002), with branches representing topics or threads: a dialogue move that cannot attach itself to the most recent active node may instead attach to another open branch (corresponding to a resumed conversation) or open a new branch (a new conversation thread) by attaching itself to the root node." |
|
}, |
|
"TABREF5": { |
|
"num": null, |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>: Move Scoring Features</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |